Refactor MacroAssembler interfaces to differentiate the pointer operands from...

Refactor MacroAssembler interfaces to differentiate the pointer operands from the 64-bit integer operands
https://bugs.webkit.org/show_bug.cgi?id=99154

Reviewed by Gavin Barraclough.

In the current JavaScriptCore implementation for the JSVALUE64
platform (i.e., the X64 platform), we assume that the JSValue size is
the same as the pointer size, and thus EncodedJSValue is simply
typedef'd as a "void*". The JIT compiler makes the same assumption
and invokes the same macro assembler interfaces for both JSValue and
pointer operands. We need to differentiate the operations on pointers
from the operations on JSValues, and have them invoke different macro
assembler interfaces. For example, we currently use the "loadPtr"
interface to load either a pointer or a JSValue; we need to switch to
using "loadPtr" to load a pointer and a new "load64" interface to
load a JSValue. This will help us support other JSVALUE64 platforms
where the pointer size is not necessarily 64 bits, for example x32
(bug #99153).

The major modification is to introduce "*64" interfaces in the
MacroAssembler for operations on JSValues, keep the "*Ptr" interfaces
for operations on real pointers, and go through all the JIT compiler
code to correct the usage.

This is the second part of the work, i.e., correcting the usage of
the new MacroAssembler interfaces in the JIT compilers. This also
means that EncodedJSValue is now defined as a 64-bit integer, and the
"*64" interfaces are used for it.
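The core of the argument above can be sketched in a few lines (the
typedef matches the patch; the x32 pointer width mentioned in the
comments is an illustrative assumption about that ABI, not something
this patch implements):

```cpp
#include <cstdint>

// EncodedJSValue is now a 64-bit integer on JSVALUE64 platforms,
// unified with the JSVALUE32_64 definition.
typedef int64_t EncodedJSValue;

// A JSValue is always 64 bits wide, regardless of platform.
static_assert(sizeof(EncodedJSValue) == 8,
              "a JSValue is always 64 bits wide");

// sizeof(void*) is 8 on x86_64 but would be 4 on an ILP32 ABI such
// as x32, so a single "*Ptr" assembler interface cannot serve both
// pointer operands and JSValue operands: "loadPtr"/"storePtr" stay
// pointer-width, while "load64"/"store64" are always 64 bits.
```

On x86_64 the two widths happen to coincide, which is why the old
single-interface scheme worked there at all.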

* assembler/MacroAssembler.h: JSValue immediates should be in Imm64 instead of ImmPtr.
(MacroAssembler):
(JSC::MacroAssembler::shouldBlind):
* dfg/DFGAssemblyHelpers.cpp: Correct the JIT compilers' usage of the new interfaces.
(JSC::DFG::AssemblyHelpers::jitAssertIsInt32):
(JSC::DFG::AssemblyHelpers::jitAssertIsJSInt32):
(JSC::DFG::AssemblyHelpers::jitAssertIsJSNumber):
(JSC::DFG::AssemblyHelpers::jitAssertIsJSDouble):
(JSC::DFG::AssemblyHelpers::jitAssertIsCell):
* dfg/DFGAssemblyHelpers.h:
(JSC::DFG::AssemblyHelpers::emitPutToCallFrameHeader):
(JSC::DFG::AssemblyHelpers::branchIfNotCell):
(JSC::DFG::AssemblyHelpers::debugCall):
(JSC::DFG::AssemblyHelpers::boxDouble):
(JSC::DFG::AssemblyHelpers::unboxDouble):
(JSC::DFG::AssemblyHelpers::emitExceptionCheck):
* dfg/DFGCCallHelpers.h:
(JSC::DFG::CCallHelpers::setupArgumentsWithExecState):
(CCallHelpers):
* dfg/DFGOSRExitCompiler64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGRepatch.cpp:
(JSC::DFG::generateProtoChainAccessStub):
(JSC::DFG::tryCacheGetByID):
(JSC::DFG::tryBuildGetByIDList):
(JSC::DFG::emitPutReplaceStub):
(JSC::DFG::emitPutTransitionStub):
* dfg/DFGScratchRegisterAllocator.h:
(JSC::DFG::ScratchRegisterAllocator::preserveUsedRegistersToScratchBuffer):
(JSC::DFG::ScratchRegisterAllocator::restoreUsedRegistersFromScratchBuffer):
* dfg/DFGSilentRegisterSavePlan.h:
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::checkArgumentTypes):
(JSC::DFG::SpeculativeJIT::compileValueToInt32):
(JSC::DFG::SpeculativeJIT::compileInt32ToDouble):
(JSC::DFG::SpeculativeJIT::compileInstanceOfForObject):
(JSC::DFG::SpeculativeJIT::compileInstanceOf):
(JSC::DFG::SpeculativeJIT::compileStrictEqForConstant):
(JSC::DFG::SpeculativeJIT::compileGetByValOnArguments):
* dfg/DFGSpeculativeJIT.h:
(SpeculativeJIT):
(JSC::DFG::SpeculativeJIT::silentSavePlanForGPR):
(JSC::DFG::SpeculativeJIT::silentSpill):
(JSC::DFG::SpeculativeJIT::silentFill):
(JSC::DFG::SpeculativeJIT::spill):
(JSC::DFG::SpeculativeJIT::valueOfJSConstantAsImm64):
(JSC::DFG::SpeculativeJIT::callOperation):
(JSC::DFG::SpeculativeJIT::branch64):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::fillInteger):
(JSC::DFG::SpeculativeJIT::fillDouble):
(JSC::DFG::SpeculativeJIT::fillJSValue):
(JSC::DFG::SpeculativeJIT::nonSpeculativeValueToNumber):
(JSC::DFG::SpeculativeJIT::nonSpeculativeValueToInt32):
(JSC::DFG::SpeculativeJIT::nonSpeculativeUInt32ToNumber):
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedPutById):
(JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeCompareNull):
(JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeBranchNull):
(JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeBranch):
(JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeCompare):
(JSC::DFG::SpeculativeJIT::nonSpeculativePeepholeStrictEq):
(JSC::DFG::SpeculativeJIT::nonSpeculativeNonPeepholeStrictEq):
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::fillSpeculateIntInternal):
(JSC::DFG::SpeculativeJIT::fillSpeculateDouble):
(JSC::DFG::SpeculativeJIT::fillSpeculateCell):
(JSC::DFG::SpeculativeJIT::fillSpeculateBoolean):
(JSC::DFG::SpeculativeJIT::convertToDouble):
(JSC::DFG::SpeculativeJIT::compileObjectEquality):
(JSC::DFG::SpeculativeJIT::compileObjectToObjectOrOtherEquality):
(JSC::DFG::SpeculativeJIT::compilePeepHoleObjectToObjectOrOtherEquality):
(JSC::DFG::SpeculativeJIT::compileDoubleCompare):
(JSC::DFG::SpeculativeJIT::compileNonStringCellOrOtherLogicalNot):
(JSC::DFG::SpeculativeJIT::compileLogicalNot):
(JSC::DFG::SpeculativeJIT::emitNonStringCellOrOtherBranch):
(JSC::DFG::SpeculativeJIT::emitBranch):
(JSC::DFG::SpeculativeJIT::compileContiguousGetByVal):
(JSC::DFG::SpeculativeJIT::compileArrayStorageGetByVal):
(JSC::DFG::SpeculativeJIT::compileContiguousPutByVal):
(JSC::DFG::SpeculativeJIT::compileArrayStoragePutByVal):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGThunks.cpp:
(JSC::DFG::osrExitGenerationThunkGenerator):
(JSC::DFG::throwExceptionFromCallSlowPathGenerator):
(JSC::DFG::slowPathFor):
(JSC::DFG::virtualForThunkGenerator):
* interpreter/Interpreter.cpp:
(JSC::Interpreter::dumpRegisters):
* jit/JIT.cpp:
(JSC::JIT::privateCompile):
* jit/JIT.h:
(JIT):
* jit/JITArithmetic.cpp:
(JSC::JIT::emit_op_negate):
(JSC::JIT::emitSlow_op_negate):
(JSC::JIT::emit_op_rshift):
(JSC::JIT::emitSlow_op_urshift):
(JSC::JIT::emit_compareAndJumpSlow):
(JSC::JIT::emit_op_bitand):
(JSC::JIT::compileBinaryArithOpSlowCase):
(JSC::JIT::emit_op_div):
* jit/JITCall.cpp:
(JSC::JIT::compileLoadVarargs):
(JSC::JIT::compileCallEval):
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
* jit/JITInlineMethods.h: Also includes some clean-up work.
(JSC):
(JSC::JIT::emitPutCellToCallFrameHeader):
(JSC::JIT::emitPutIntToCallFrameHeader):
(JSC::JIT::emitPutToCallFrameHeader):
(JSC::JIT::emitGetFromCallFrameHeader32):
(JSC::JIT::emitGetFromCallFrameHeader64):
(JSC::JIT::emitAllocateJSArray):
(JSC::JIT::emitValueProfilingSite):
(JSC::JIT::emitGetJITStubArg):
(JSC::JIT::emitGetVirtualRegister):
(JSC::JIT::emitPutVirtualRegister):
(JSC::JIT::emitInitRegister):
(JSC::JIT::emitJumpIfJSCell):
(JSC::JIT::emitJumpIfBothJSCells):
(JSC::JIT::emitJumpIfNotJSCell):
(JSC::JIT::emitLoadInt32ToDouble):
(JSC::JIT::emitJumpIfImmediateInteger):
(JSC::JIT::emitJumpIfNotImmediateInteger):
(JSC::JIT::emitJumpIfNotImmediateIntegers):
(JSC::JIT::emitFastArithReTagImmediate):
(JSC::JIT::emitFastArithIntToImmNoCheck):
* jit/JITOpcodes.cpp:
(JSC::JIT::privateCompileCTINativeCall):
(JSC::JIT::emit_op_mov):
(JSC::JIT::emit_op_instanceof):
(JSC::JIT::emit_op_is_undefined):
(JSC::JIT::emit_op_is_boolean):
(JSC::JIT::emit_op_is_number):
(JSC::JIT::emit_op_tear_off_activation):
(JSC::JIT::emit_op_not):
(JSC::JIT::emit_op_jfalse):
(JSC::JIT::emit_op_jeq_null):
(JSC::JIT::emit_op_jneq_null):
(JSC::JIT::emit_op_jtrue):
(JSC::JIT::emit_op_bitxor):
(JSC::JIT::emit_op_bitor):
(JSC::JIT::emit_op_get_pnames):
(JSC::JIT::emit_op_next_pname):
(JSC::JIT::compileOpStrictEq):
(JSC::JIT::emit_op_catch):
(JSC::JIT::emit_op_throw_reference_error):
(JSC::JIT::emit_op_eq_null):
(JSC::JIT::emit_op_neq_null):
(JSC::JIT::emit_op_create_activation):
(JSC::JIT::emit_op_create_arguments):
(JSC::JIT::emit_op_init_lazy_reg):
(JSC::JIT::emitSlow_op_convert_this):
(JSC::JIT::emitSlow_op_not):
(JSC::JIT::emit_op_get_argument_by_val):
(JSC::JIT::emit_op_put_to_base):
(JSC::JIT::emit_resolve_operations):
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emitContiguousGetByVal):
(JSC::JIT::emitArrayStorageGetByVal):
(JSC::JIT::emitSlow_op_get_by_val):
(JSC::JIT::compileGetDirectOffset):
(JSC::JIT::emit_op_get_by_pname):
(JSC::JIT::emitContiguousPutByVal):
(JSC::JIT::emitArrayStoragePutByVal):
(JSC::JIT::compileGetByIdHotPath):
(JSC::JIT::emit_op_put_by_id):
(JSC::JIT::compilePutDirectOffset):
(JSC::JIT::emit_op_init_global_const):
(JSC::JIT::emit_op_init_global_const_check):
(JSC::JIT::emitIntTypedArrayGetByVal):
(JSC::JIT::emitFloatTypedArrayGetByVal):
(JSC::JIT::emitFloatTypedArrayPutByVal):
* jit/JITStubCall.h:
(JITStubCall):
(JSC::JITStubCall::JITStubCall):
(JSC::JITStubCall::addArgument):
(JSC::JITStubCall::call):
(JSC::JITStubCall::callWithValueProfiling):
* jit/JSInterfaceJIT.h:
(JSC::JSInterfaceJIT::emitJumpIfImmediateNumber):
(JSC::JSInterfaceJIT::emitJumpIfNotImmediateNumber):
(JSC::JSInterfaceJIT::emitLoadJSCell):
(JSC::JSInterfaceJIT::emitLoadInt32):
(JSC::JSInterfaceJIT::emitLoadDouble):
* jit/SpecializedThunkJIT.h:
(JSC::SpecializedThunkJIT::returnDouble):
(JSC::SpecializedThunkJIT::tagReturnAsInt32):
* runtime/JSValue.cpp:
(JSC::JSValue::description):
* runtime/JSValue.h: Define JSVALUE64 EncodedJSValue as int64_t, which is also unified with JSVALUE32_64.
(JSC):
* runtime/JSValueInlineMethods.h: New implementations of some JSValue methods that conform to the
new rule that "JSValue is a 64-bit integer rather than a pointer" on JSVALUE64 platforms.
(JSC):
(JSC::JSValue::JSValue):
(JSC::JSValue::operator bool):
(JSC::JSValue::operator==):
(JSC::JSValue::operator!=):
(JSC::reinterpretDoubleToInt64):
(JSC::reinterpretInt64ToDouble):
(JSC::JSValue::asDouble):
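The two reinterpret helpers listed above can be sketched as plain
bit-for-bit copies (a hedged approximation of the real bodies in
JSValueInlineMethods.h, which may use a union instead of memcpy):

```cpp
#include <cstdint>
#include <cstring>

// Copy the 64-bit pattern of a double into an int64_t without any
// numeric conversion; this lets a JSValue carry a double as an
// integer payload now that EncodedJSValue is an int64_t.
static inline int64_t reinterpretDoubleToInt64(double value)
{
    int64_t bits;
    std::memcpy(&bits, &value, sizeof(bits));
    return bits;
}

// The inverse: reinstate the double from its 64-bit pattern.
static inline double reinterpretInt64ToDouble(int64_t bits)
{
    double value;
    std::memcpy(&value, &bits, sizeof(value));
    return value;
}
```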


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@131858 268f45cc-cd09-0410-ab3c-d52691b4dbfc
@@ -728,16 +728,6 @@ public:
return store64WithAddressOffsetPatch(src, address);
}
void movePtrToDouble(RegisterID src, FPRegisterID dest)
{
move64ToDouble(src, dest);
}
void moveDoubleToPtr(FPRegisterID src, RegisterID dest)
{
moveDoubleTo64(src, dest);
}
void comparePtr(RelationalCondition cond, RegisterID left, TrustedImm32 right, RegisterID dest)
{
compare64(cond, left, right, dest);
@@ -895,14 +885,6 @@ public:
default: {
if (value <= 0xff)
return false;
JSValue jsValue = JSValue::decode(reinterpret_cast<void*>(value));
if (jsValue.isInt32())
return shouldBlind(Imm32(jsValue.asInt32()));
if (jsValue.isDouble() && !shouldBlindDouble(jsValue.asDouble()))
return false;
if (!shouldBlindDouble(bitwise_cast<double>(value)))
return false;
}
}
return shouldBlindForSpecificArch(value);
@@ -956,6 +938,14 @@ public:
default: {
if (value <= 0xff)
return false;
JSValue jsValue = JSValue::decode(value);
if (jsValue.isInt32())
return shouldBlind(Imm32(jsValue.asInt32()));
if (jsValue.isDouble() && !shouldBlindDouble(jsValue.asDouble()))
return false;
if (!shouldBlindDouble(bitwise_cast<double>(value)))
return false;
}
}
return shouldBlindForSpecificArch(value);
@@ -73,7 +73,7 @@ void AssemblyHelpers::clearSamplingFlag(int32_t flag)
void AssemblyHelpers::jitAssertIsInt32(GPRReg gpr)
{
#if CPU(X86_64)
Jump checkInt32 = branchPtr(BelowOrEqual, gpr, TrustedImmPtr(reinterpret_cast<void*>(static_cast<uintptr_t>(0xFFFFFFFFu))));
Jump checkInt32 = branch64(BelowOrEqual, gpr, TrustedImm64(static_cast<uintptr_t>(0xFFFFFFFFu)));
breakpoint();
checkInt32.link(this);
#else
@@ -83,22 +83,22 @@ void AssemblyHelpers::jitAssertIsInt32(GPRReg gpr)
void AssemblyHelpers::jitAssertIsJSInt32(GPRReg gpr)
{
Jump checkJSInt32 = branchPtr(AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister);
Jump checkJSInt32 = branch64(AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister);
breakpoint();
checkJSInt32.link(this);
}
void AssemblyHelpers::jitAssertIsJSNumber(GPRReg gpr)
{
Jump checkJSNumber = branchTestPtr(MacroAssembler::NonZero, gpr, GPRInfo::tagTypeNumberRegister);
Jump checkJSNumber = branchTest64(MacroAssembler::NonZero, gpr, GPRInfo::tagTypeNumberRegister);
breakpoint();
checkJSNumber.link(this);
}
void AssemblyHelpers::jitAssertIsJSDouble(GPRReg gpr)
{
Jump checkJSInt32 = branchPtr(AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister);
Jump checkJSNumber = branchTestPtr(MacroAssembler::NonZero, gpr, GPRInfo::tagTypeNumberRegister);
Jump checkJSInt32 = branch64(AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister);
Jump checkJSNumber = branchTest64(MacroAssembler::NonZero, gpr, GPRInfo::tagTypeNumberRegister);
checkJSInt32.link(this);
breakpoint();
checkJSNumber.link(this);
@@ -106,7 +106,7 @@ void AssemblyHelpers::jitAssertIsJSDouble(GPRReg gpr)
void AssemblyHelpers::jitAssertIsCell(GPRReg gpr)
{
Jump checkCell = branchTestPtr(MacroAssembler::Zero, gpr, GPRInfo::tagMaskRegister);
Jump checkCell = branchTest64(MacroAssembler::Zero, gpr, GPRInfo::tagMaskRegister);
breakpoint();
checkCell.link(this);
}
@@ -99,7 +99,11 @@ public:
}
void emitPutToCallFrameHeader(GPRReg from, JSStack::CallFrameHeaderEntry entry)
{
storePtr(from, Address(GPRInfo::callFrameRegister, entry * sizeof(Register)));
#if USE(JSVALUE64)
store64(from, Address(GPRInfo::callFrameRegister, entry * sizeof(Register)));
#else
store32(from, Address(GPRInfo::callFrameRegister, entry * sizeof(Register)));
#endif
}
void emitPutImmediateToCallFrameHeader(void* value, JSStack::CallFrameHeaderEntry entry)
@@ -110,7 +114,7 @@ public:
Jump branchIfNotCell(GPRReg reg)
{
#if USE(JSVALUE64)
return branchTestPtr(MacroAssembler::NonZero, reg, GPRInfo::tagMaskRegister);
return branchTest64(MacroAssembler::NonZero, reg, GPRInfo::tagMaskRegister);
#else
return branch32(MacroAssembler::NotEqual, reg, TrustedImm32(JSValue::CellTag));
#endif
@@ -172,8 +176,14 @@ public:
ScratchBuffer* scratchBuffer = m_globalData->scratchBufferForSize(scratchSize);
EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i)
storePtr(GPRInfo::toRegister(i), buffer + i);
for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
#if USE(JSVALUE64)
store64(GPRInfo::toRegister(i), buffer + i);
#else
store32(GPRInfo::toRegister(i), buffer + i);
#endif
}
for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
storeDouble(FPRInfo::toRegister(i), GPRInfo::regT0);
@@ -204,8 +214,13 @@ public:
move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
loadDouble(GPRInfo::regT0, FPRInfo::toRegister(i));
}
for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i)
loadPtr(buffer + i, GPRInfo::toRegister(i));
for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
#if USE(JSVALUE64)
load64(buffer + i, GPRInfo::toRegister(i));
#else
load32(buffer + i, GPRInfo::toRegister(i));
#endif
}
}
// These methods JIT generate dynamic, debug-only checks - akin to ASSERTs.
@@ -229,16 +244,16 @@ public:
#if USE(JSVALUE64)
GPRReg boxDouble(FPRReg fpr, GPRReg gpr)
{
moveDoubleToPtr(fpr, gpr);
subPtr(GPRInfo::tagTypeNumberRegister, gpr);
moveDoubleTo64(fpr, gpr);
sub64(GPRInfo::tagTypeNumberRegister, gpr);
jitAssertIsJSDouble(gpr);
return gpr;
}
FPRReg unboxDouble(GPRReg gpr, FPRReg fpr)
{
jitAssertIsJSDouble(gpr);
addPtr(GPRInfo::tagTypeNumberRegister, gpr);
movePtrToDouble(gpr, fpr);
add64(GPRInfo::tagTypeNumberRegister, gpr);
move64ToDouble(gpr, fpr);
return fpr;
}
#endif
@@ -258,7 +273,7 @@ public:
Jump emitExceptionCheck(ExceptionCheckKind kind = NormalExceptionCheck)
{
#if USE(JSVALUE64)
return branchTestPtr(kind == NormalExceptionCheck ? NonZero : Zero, AbsoluteAddress(&globalData()->exception));
return branchTest64(kind == NormalExceptionCheck ? NonZero : Zero, AbsoluteAddress(&globalData()->exception));
#elif USE(JSVALUE32_64)
return branch32(kind == NormalExceptionCheck ? NotEqual : Equal, AbsoluteAddress(reinterpret_cast<char*>(&globalData()->exception) + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag));
#endif
@@ -552,6 +552,13 @@ public:
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
}
ALWAYS_INLINE void setupArgumentsWithExecState(GPRReg arg1, TrustedImm64 arg2)
{
move(arg1, GPRInfo::argumentGPR1);
move(arg2, GPRInfo::argumentGPR2);
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
}
ALWAYS_INLINE void setupArgumentsWithExecState(GPRReg arg1, TrustedImm32 arg2)
{
move(arg1, GPRInfo::argumentGPR1);
@@ -573,6 +580,13 @@ public:
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
}
ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm64 arg1, GPRReg arg2)
{
move(arg2, GPRInfo::argumentGPR2); // Move this first, so setting arg1 does not trample!
move(arg1, GPRInfo::argumentGPR1);
move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
}
ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2)
{
move(arg2, GPRInfo::argumentGPR2); // Move this first, so setting arg1 does not trample!
@@ -192,7 +192,7 @@ static void generateProtoChainAccessStub(ExecState* exec, StructureStubInfo& stu
if (isInlineOffset(offset)) {
#if USE(JSVALUE64)
stubJit.loadPtr(protoObject->locationForOffset(offset), resultGPR);
stubJit.load64(protoObject->locationForOffset(offset), resultGPR);
#elif USE(JSVALUE32_64)
stubJit.move(MacroAssembler::TrustedImmPtr(protoObject->locationForOffset(offset)), resultGPR);
stubJit.load32(MacroAssembler::Address(resultGPR, OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)), resultTagGPR);
@@ -201,7 +201,7 @@ static void generateProtoChainAccessStub(ExecState* exec, StructureStubInfo& stu
} else {
stubJit.loadPtr(protoObject->butterflyAddress(), resultGPR);
#if USE(JSVALUE64)
stubJit.loadPtr(MacroAssembler::Address(resultGPR, offsetInButterfly(offset) * sizeof(WriteBarrier<Unknown>)), resultGPR);
stubJit.load64(MacroAssembler::Address(resultGPR, offsetInButterfly(offset) * sizeof(WriteBarrier<Unknown>)), resultGPR);
#elif USE(JSVALUE32_64)
stubJit.load32(MacroAssembler::Address(resultGPR, offsetInButterfly(offset) * sizeof(WriteBarrier<Unknown>) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)), resultTagGPR);
stubJit.load32(MacroAssembler::Address(resultGPR, offsetInButterfly(offset) * sizeof(WriteBarrier<Unknown>) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), resultGPR);
@@ -263,7 +263,7 @@ static bool tryCacheGetByID(ExecState* exec, JSValue baseValue, const Identifier
failureCases.append(stubJit.branch32(MacroAssembler::LessThan, scratchGPR, MacroAssembler::TrustedImm32(0)));
#if USE(JSVALUE64)
stubJit.orPtr(GPRInfo::tagTypeNumberRegister, scratchGPR, resultGPR);
stubJit.or64(GPRInfo::tagTypeNumberRegister, scratchGPR, resultGPR);
#elif USE(JSVALUE32_64)
stubJit.move(scratchGPR, resultGPR);
stubJit.move(JITCompiler::TrustedImm32(0xffffffff), resultTagGPR); // JSValue::Int32Tag
@@ -421,14 +421,14 @@ static bool tryBuildGetByIDList(ExecState* exec, JSValue baseValue, const Identi
ASSERT(baseGPR != scratchGPR);
if (isInlineOffset(slot.cachedOffset())) {
#if USE(JSVALUE64)
stubJit.loadPtr(MacroAssembler::Address(baseGPR, offsetRelativeToBase(slot.cachedOffset())), scratchGPR);
stubJit.load64(MacroAssembler::Address(baseGPR, offsetRelativeToBase(slot.cachedOffset())), scratchGPR);
#else
stubJit.load32(MacroAssembler::Address(baseGPR, offsetRelativeToBase(slot.cachedOffset())), scratchGPR);
#endif
} else {
stubJit.loadPtr(MacroAssembler::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR);
#if USE(JSVALUE64)
stubJit.loadPtr(MacroAssembler::Address(scratchGPR, offsetRelativeToBase(slot.cachedOffset())), scratchGPR);
stubJit.load64(MacroAssembler::Address(scratchGPR, offsetRelativeToBase(slot.cachedOffset())), scratchGPR);
#else
stubJit.load32(MacroAssembler::Address(scratchGPR, offsetRelativeToBase(slot.cachedOffset())), scratchGPR);
#endif
@@ -465,7 +465,7 @@ static bool tryBuildGetByIDList(ExecState* exec, JSValue baseValue, const Identi
} else {
if (isInlineOffset(slot.cachedOffset())) {
#if USE(JSVALUE64)
stubJit.loadPtr(MacroAssembler::Address(baseGPR, offsetRelativeToBase(slot.cachedOffset())), resultGPR);
stubJit.load64(MacroAssembler::Address(baseGPR, offsetRelativeToBase(slot.cachedOffset())), resultGPR);
#else
if (baseGPR == resultTagGPR) {
stubJit.load32(MacroAssembler::Address(baseGPR, offsetRelativeToBase(slot.cachedOffset()) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), resultGPR);
@@ -478,7 +478,7 @@ static bool tryBuildGetByIDList(ExecState* exec, JSValue baseValue, const Identi
} else {
stubJit.loadPtr(MacroAssembler::Address(baseGPR, JSObject::butterflyOffset()), resultGPR);
#if USE(JSVALUE64)
stubJit.loadPtr(MacroAssembler::Address(resultGPR, offsetRelativeToBase(slot.cachedOffset())), resultGPR);
stubJit.load64(MacroAssembler::Address(resultGPR, offsetRelativeToBase(slot.cachedOffset())), resultGPR);
#else
stubJit.load32(MacroAssembler::Address(resultGPR, offsetRelativeToBase(slot.cachedOffset()) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)), resultTagGPR);
stubJit.load32(MacroAssembler::Address(resultGPR, offsetRelativeToBase(slot.cachedOffset()) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), resultGPR);
@@ -682,10 +682,10 @@ static void emitPutReplaceStub(
#if USE(JSVALUE64)
if (isInlineOffset(slot.cachedOffset()))
stubJit.storePtr(valueGPR, MacroAssembler::Address(baseGPR, JSObject::offsetOfInlineStorage() + offsetInInlineStorage(slot.cachedOffset()) * sizeof(JSValue)));
stubJit.store64(valueGPR, MacroAssembler::Address(baseGPR, JSObject::offsetOfInlineStorage() + offsetInInlineStorage(slot.cachedOffset()) * sizeof(JSValue)));
else {
stubJit.loadPtr(MacroAssembler::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR);
stubJit.storePtr(valueGPR, MacroAssembler::Address(scratchGPR, offsetInButterfly(slot.cachedOffset()) * sizeof(JSValue)));
stubJit.store64(valueGPR, MacroAssembler::Address(scratchGPR, offsetInButterfly(slot.cachedOffset()) * sizeof(JSValue)));
}
#elif USE(JSVALUE32_64)
if (isInlineOffset(slot.cachedOffset())) {
@@ -854,11 +854,11 @@ static void emitPutTransitionStub(
stubJit.storePtr(MacroAssembler::TrustedImmPtr(structure), MacroAssembler::Address(baseGPR, JSCell::structureOffset()));
#if USE(JSVALUE64)
if (isInlineOffset(slot.cachedOffset()))
stubJit.storePtr(valueGPR, MacroAssembler::Address(baseGPR, JSObject::offsetOfInlineStorage() + offsetInInlineStorage(slot.cachedOffset()) * sizeof(JSValue)));
stubJit.store64(valueGPR, MacroAssembler::Address(baseGPR, JSObject::offsetOfInlineStorage() + offsetInInlineStorage(slot.cachedOffset()) * sizeof(JSValue)));
else {
if (!scratchGPR1HasStorage)
stubJit.loadPtr(MacroAssembler::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR1);
stubJit.storePtr(valueGPR, MacroAssembler::Address(scratchGPR1, offsetInButterfly(slot.cachedOffset()) * sizeof(JSValue)));
stubJit.store64(valueGPR, MacroAssembler::Address(scratchGPR1, offsetInButterfly(slot.cachedOffset()) * sizeof(JSValue)));
}
#elif USE(JSVALUE32_64)
if (isInlineOffset(slot.cachedOffset())) {
@@ -127,15 +127,20 @@ public:
{
unsigned count = 0;
for (unsigned i = GPRInfo::numberOfRegisters; i--;) {
if (m_usedRegisters.getGPRByIndex(i))
jit.storePtr(GPRInfo::toRegister(i), scratchBuffer->m_buffer + (count++));
if (m_usedRegisters.getGPRByIndex(i)) {
#if USE(JSVALUE64)
jit.store64(GPRInfo::toRegister(i), static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) + (count++));
#else
jit.store32(GPRInfo::toRegister(i), static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) + (count++));
#endif
}
if (scratchGPR == InvalidGPRReg && !m_lockedRegisters.getGPRByIndex(i) && !m_scratchRegisters.getGPRByIndex(i))
scratchGPR = GPRInfo::toRegister(i);
}
ASSERT(scratchGPR != InvalidGPRReg);
for (unsigned i = FPRInfo::numberOfRegisters; i--;) {
if (m_usedRegisters.getFPRByIndex(i)) {
jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->m_buffer + (count++)), scratchGPR);
jit.move(MacroAssembler::TrustedImmPtr(static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) + (count++)), scratchGPR);
jit.storeDouble(FPRInfo::toRegister(i), scratchGPR);
}
}
@@ -165,15 +170,20 @@ public:
unsigned count = m_usedRegisters.numberOfSetGPRs();
for (unsigned i = FPRInfo::numberOfRegisters; i--;) {
if (m_usedRegisters.getFPRByIndex(i)) {
jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->m_buffer + (count++)), scratchGPR);
jit.move(MacroAssembler::TrustedImmPtr(static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) + (count++)), scratchGPR);
jit.loadDouble(scratchGPR, FPRInfo::toRegister(i));
}
}
count = 0;
for (unsigned i = GPRInfo::numberOfRegisters; i--;) {
if (m_usedRegisters.getGPRByIndex(i))
jit.loadPtr(scratchBuffer->m_buffer + (count++), GPRInfo::toRegister(i));
if (m_usedRegisters.getGPRByIndex(i)) {
#if USE(JSVALUE64)
jit.load64(static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) + (count++), GPRInfo::toRegister(i));
#else
jit.load32(static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) + (count++), GPRInfo::toRegister(i));
#endif
}
}
}
@@ -41,6 +41,7 @@ enum SilentSpillAction {
Store32Tag,
Store32Payload,
StorePtr,
Store64,
StoreDouble
};
@@ -61,6 +62,7 @@ enum SilentFillAction {
Load32Payload,
Load32PayloadBoxInt,
LoadPtr,
Load64,
LoadDouble,
LoadDoubleBoxDouble,
LoadJSUnboxDouble
@@ -1706,14 +1706,14 @@ void SpeculativeJIT::checkArgumentTypes()
#if USE(JSVALUE64)
if (isInt32Speculation(predictedType))
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branchPtr(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister));
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister));
else if (isBooleanSpeculation(predictedType)) {
GPRTemporary temp(this);
m_jit.loadPtr(JITCompiler::addressFor(virtualRegister), temp.gpr());
m_jit.xorPtr(TrustedImm32(static_cast<int32_t>(ValueFalse)), temp.gpr());
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branchTestPtr(MacroAssembler::NonZero, temp.gpr(), TrustedImm32(static_cast<int32_t>(~1))));
m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr());
m_jit.xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), temp.gpr());
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branchTest64(MacroAssembler::NonZero, temp.gpr(), TrustedImm32(static_cast<int32_t>(~1))));
} else if (isCellSpeculation(predictedType))
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branchTestPtr(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister));
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister));
#else
if (isInt32Speculation(predictedType))
speculationCheck(BadType, valueSource, nodeIndex, m_jit.branch32(MacroAssembler::NotEqual, JITCompiler::tagFor(virtualRegister), TrustedImm32(JSValue::Int32Tag)));
@@ -1973,10 +1973,10 @@ void SpeculativeJIT::compileValueToInt32(Node& node)
FPRTemporary tempFpr(this);
FPRReg fpr = tempFpr.fpr();
JITCompiler::Jump isInteger = m_jit.branchPtr(MacroAssembler::AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister);
JITCompiler::Jump isInteger = m_jit.branch64(MacroAssembler::AboveOrEqual, gpr, GPRInfo::tagTypeNumberRegister);
if (!isNumberSpeculation(m_state.forNode(node.child1()).m_type))
speculationCheck(BadType, JSValueRegs(gpr), node.child1().index(), m_jit.branchTestPtr(MacroAssembler::Zero, gpr, GPRInfo::tagTypeNumberRegister));
speculationCheck(BadType, JSValueRegs(gpr), node.child1().index(), m_jit.branchTest64(MacroAssembler::Zero, gpr, GPRInfo::tagTypeNumberRegister));
// First, if we get here we have a double encoded as a JSValue
m_jit.move(gpr, resultGpr);
@@ -2119,8 +2119,8 @@ void SpeculativeJIT::compileInt32ToDouble(Node& node)
ASSERT(isInt32Constant(node.child1().index()));
FPRTemporary result(this);
GPRTemporary temp(this);
m_jit.move(MacroAssembler::ImmPtr(reinterpret_cast<void*>(reinterpretDoubleToIntptr(valueOfNumberConstant(node.child1().index())))), temp.gpr());
m_jit.movePtrToDouble(temp.gpr(), result.fpr());
m_jit.move(MacroAssembler::Imm64(reinterpretDoubleToInt64(valueOfNumberConstant(node.child1().index()))), temp.gpr());
m_jit.move64ToDouble(temp.gpr(), result.fpr());
doubleResult(result.fpr(), m_compileIndex);
return;
}
@@ -2144,13 +2144,13 @@ void SpeculativeJIT::compileInt32ToDouble(Node& node)
GPRReg tempGPR = temp.gpr();
FPRReg resultFPR = result.fpr();
JITCompiler::Jump isInteger = m_jit.branchPtr(
JITCompiler::Jump isInteger = m_jit.branch64(
MacroAssembler::AboveOrEqual, op1GPR, GPRInfo::tagTypeNumberRegister);
if (!isNumberSpeculation(m_state.forNode(node.child1()).m_type)) {
speculationCheck(
BadType, JSValueRegs(op1GPR), node.child1(),
m_jit.branchTestPtr(MacroAssembler::Zero, op1GPR, GPRInfo::tagTypeNumberRegister));
m_jit.branchTest64(MacroAssembler::Zero, op1GPR, GPRInfo::tagTypeNumberRegister));
}
m_jit.move(op1GPR, tempGPR);
@@ -2480,20 +2480,18 @@ void SpeculativeJIT::compileInstanceOfForObject(Node&, GPRReg valueReg, GPRReg p
MacroAssembler::Label loop(&m_jit);
m_jit.loadPtr(MacroAssembler::Address(scratchReg, JSCell::structureOffset()), scratchReg);
#if USE(JSVALUE64)
m_jit.loadPtr(MacroAssembler::Address(scratchReg, Structure::prototypeOffset()), scratchReg);
m_jit.load64(MacroAssembler::Address(scratchReg, Structure::prototypeOffset()), scratchReg);
MacroAssembler::Jump isInstance = m_jit.branch64(MacroAssembler::Equal, scratchReg, prototypeReg);
m_jit.branchTest64(MacroAssembler::Zero, scratchReg, GPRInfo::tagMaskRegister).linkTo(loop, &m_jit);
#else
m_jit.load32(MacroAssembler::Address(scratchReg, Structure::prototypeOffset() + OBJECT_OFFSETOF(JSValue, u.asBits.payload)), scratchReg);
#endif
MacroAssembler::Jump isInstance = m_jit.branchPtr(MacroAssembler::Equal, scratchReg, prototypeReg);
#if USE(JSVALUE64)
m_jit.branchTestPtr(MacroAssembler::Zero, scratchReg, GPRInfo::tagMaskRegister).linkTo(loop, &m_jit);
#else
m_jit.branchTest32(MacroAssembler::NonZero, scratchReg).linkTo(loop, &m_jit);
#endif
// No match - result is false.
#if USE(JSVALUE64)
m_jit.move(MacroAssembler::TrustedImmPtr(JSValue::encode(jsBoolean(false))), scratchReg);
m_jit.move(MacroAssembler::TrustedImm64(JSValue::encode(jsBoolean(false))), scratchReg);
#else
m_jit.move(MacroAssembler::TrustedImm32(0), scratchReg);
#endif
@@ -2501,7 +2499,7 @@ void SpeculativeJIT::compileInstanceOfForObject(Node&, GPRReg valueReg, GPRReg p
isInstance.link(&m_jit);
#if USE(JSVALUE64)
m_jit.move(MacroAssembler::TrustedImmPtr(JSValue::encode(jsBoolean(true))), scratchReg);
m_jit.move(MacroAssembler::TrustedImm64(JSValue::encode(jsBoolean(true))), scratchReg);
#else
m_jit.move(MacroAssembler::TrustedImm32(1), scratchReg);
#endif
@@ -2527,8 +2525,8 @@ void SpeculativeJIT::compileInstanceOf(Node& node)
#if USE(JSVALUE64)
GPRReg valueReg = value.gpr();
MacroAssembler::Jump isCell = m_jit.branchTestPtr(MacroAssembler::Zero, valueReg, GPRInfo::tagMaskRegister);
m_jit.move(MacroAssembler::TrustedImmPtr(JSValue::encode(jsBoolean(false))), scratchReg);
MacroAssembler::Jump isCell = m_jit.branchTest64(MacroAssembler::Zero, valueReg, GPRInfo::tagMaskRegister);
m_jit.move(MacroAssembler::TrustedImm64(JSValue::encode(jsBoolean(false))), scratchReg);
#else
GPRReg valueTagReg = value.tagGPR();
GPRReg valueReg = value.payloadGPR();
@@ -3091,7 +3089,7 @@ bool SpeculativeJIT::compileStrictEqForConstant(Node& node, Edge value, JSValue
}
#if USE(JSVALUE64)
branchPtr(condition, op1.gpr(), MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(JSValue::encode(constant))), taken);
branch64(condition, op1.gpr(), MacroAssembler::TrustedImm64(JSValue::encode(constant)), taken);
#else
GPRReg payloadGPR = op1.payloadGPR();
GPRReg tagGPR = op1.tagGPR();
@@ -3121,8 +3119,8 @@ bool SpeculativeJIT::compileStrictEqForConstant(Node& node, Edge value, JSValue
#if USE(JSVALUE64)
GPRReg op1GPR = op1.gpr();
GPRReg resultGPR = result.gpr();
m_jit.move(MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(ValueFalse)), resultGPR);
MacroAssembler::Jump notEqual = m_jit.branchPtr(MacroAssembler::NotEqual, op1GPR, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(JSValue::encode(constant))));
m_jit.move(MacroAssembler::TrustedImm64(ValueFalse), resultGPR);
MacroAssembler::Jump notEqual = m_jit.branch64(MacroAssembler::NotEqual, op1GPR, MacroAssembler::TrustedImm64(JSValue::encode(constant)));
m_jit.or32(MacroAssembler::TrustedImm32(1), resultGPR);
notEqual.link(&m_jit);
jsValueResult(resultGPR, m_compileIndex, DataFormatJSBoolean);
@@ -3302,7 +3300,7 @@ void SpeculativeJIT::compileGetByValOnArguments(Node& node)
         resultReg);
     jsValueResult(resultTagReg, resultReg, m_compileIndex);
 #else
-    m_jit.loadPtr(
+    m_jit.load64(
         MacroAssembler::BaseIndex(
             scratchReg, resultReg, MacroAssembler::TimesEight,
             CallFrame::thisArgumentOffset() * sizeof(Register) - sizeof(Register)),
@@ -71,6 +71,8 @@ private:
     typedef JITCompiler::Imm32 Imm32;
     typedef JITCompiler::TrustedImmPtr TrustedImmPtr;
     typedef JITCompiler::ImmPtr ImmPtr;
+    typedef JITCompiler::TrustedImm64 TrustedImm64;
+    typedef JITCompiler::Imm64 Imm64;

     // These constants are used to set priorities for spill order for
     // the register allocator.
@@ -347,9 +349,11 @@ public:
         ASSERT(info.gpr() == source);
         if (registerFormat == DataFormatInteger)
             spillAction = Store32Payload;
-        else {
-            ASSERT(registerFormat & DataFormatJS || registerFormat == DataFormatCell || registerFormat == DataFormatStorage);
+        else if (registerFormat == DataFormatCell || registerFormat == DataFormatStorage)
             spillAction = StorePtr;
+        else {
+            ASSERT(registerFormat & DataFormatJS);
+            spillAction = Store64;
         }
 #elif USE(JSVALUE32_64)
         if (registerFormat & DataFormatJS) {
@@ -414,7 +418,7 @@ public:
             ASSERT(registerFormat == DataFormatJSDouble);
             fillAction = LoadDoubleBoxDouble;
         } else
-            fillAction = LoadPtr;
+            fillAction = Load64;
 #else
         ASSERT(info.tagGPR() == source || info.payloadGPR() == source);
         if (node.hasConstant())
@@ -501,6 +505,11 @@ public:
         case StorePtr:
             m_jit.storePtr(plan.gpr(), JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()));
             break;
+#if USE(JSVALUE64)
+        case Store64:
+            m_jit.store64(plan.gpr(), JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()));
+            break;
+#endif
         case StoreDouble:
             m_jit.storeDouble(plan.fpr(), JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()));
             break;
@@ -528,25 +537,25 @@ public:
             break;
 #if USE(JSVALUE64)
         case SetTrustedJSConstant:
-            m_jit.move(valueOfJSConstantAsImmPtr(plan.nodeIndex()).asTrustedImmPtr(), plan.gpr());
+            m_jit.move(valueOfJSConstantAsImm64(plan.nodeIndex()).asTrustedImm64(), plan.gpr());
             break;
         case SetJSConstant:
-            m_jit.move(valueOfJSConstantAsImmPtr(plan.nodeIndex()), plan.gpr());
+            m_jit.move(valueOfJSConstantAsImm64(plan.nodeIndex()), plan.gpr());
             break;
         case SetDoubleConstant:
-            m_jit.move(ImmPtr(bitwise_cast<void*>(valueOfNumberConstant(plan.nodeIndex()))), canTrample);
-            m_jit.movePtrToDouble(canTrample, plan.fpr());
+            m_jit.move(Imm64(valueOfNumberConstant(plan.nodeIndex())), canTrample);
+            m_jit.move64ToDouble(canTrample, plan.fpr());
             break;
         case Load32PayloadBoxInt:
             m_jit.load32(JITCompiler::payloadFor(at(plan.nodeIndex()).virtualRegister()), plan.gpr());
-            m_jit.orPtr(GPRInfo::tagTypeNumberRegister, plan.gpr());
+            m_jit.or64(GPRInfo::tagTypeNumberRegister, plan.gpr());
             break;
         case LoadDoubleBoxDouble:
-            m_jit.loadPtr(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), plan.gpr());
-            m_jit.subPtr(GPRInfo::tagTypeNumberRegister, plan.gpr());
+            m_jit.load64(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), plan.gpr());
+            m_jit.sub64(GPRInfo::tagTypeNumberRegister, plan.gpr());
             break;
         case LoadJSUnboxDouble:
-            m_jit.loadPtr(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), canTrample);
+            m_jit.load64(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), canTrample);
             unboxDouble(canTrample, plan.fpr());
             break;
 #else
@@ -578,6 +587,11 @@ public:
         case LoadPtr:
             m_jit.loadPtr(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), plan.gpr());
             break;
+#if USE(JSVALUE64)
+        case Load64:
+            m_jit.load64(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), plan.gpr());
+            break;
+#endif
         case LoadDouble:
             m_jit.loadDouble(JITCompiler::addressFor(at(plan.nodeIndex()).virtualRegister()), plan.fpr());
             break;
@@ -752,10 +766,10 @@ public:
         // We need to box int32 and cell values ...
         // but on JSVALUE64 boxing a cell is a no-op!
         if (spillFormat == DataFormatInteger)
-            m_jit.orPtr(GPRInfo::tagTypeNumberRegister, reg);
+            m_jit.or64(GPRInfo::tagTypeNumberRegister, reg);