Commit 2ac511cb authored by fpizlo@apple.com

All JIT stubs should go through the getCTIStub API

https://bugs.webkit.org/show_bug.cgi?id=105750

Reviewed by Sam Weinig.
        
Previously JITThunks had two sets of thunks: one static set stored in a struct,
which was filled by JIT::privateCompileCTIMachineTrampolines, and another set stored in
a HashMap. Moreover, the code that generated the thunks in the CTI trampoline struct
had loads of copy-paste between JSVALUE32_64 and JSVALUE64, and was totally
unmodular with respect to calls versus constructors, among other things.
                  
This changeset removes this struct and rationalizes the code that generates those
thunks. All of the thunks are now generated through the getCTIStub HashMap API. All
thunks for the baseline JIT now use the JSInterfaceJIT and have their codegen
located in ThunkGenerators.cpp. All thunks now share as much code as possible -
it turns out that they are almost 100% identical between 32_64 and 64, so that
works out great. A bunch of call vs. construct duplication was eliminated. And,
most of the call link versus virtual call duplication was also eliminated.
        
This does not change behavior but it does make it easier to add more thunks in
the future.
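
For context, the getCTIStub API boils down to a lazily populated map from thunk generator to compiled code: the first request for a given generator runs it once and caches the resulting code ref, and every later request returns the cached copy. Below is a minimal, self-contained sketch of that caching pattern with hypothetical stand-in types (the real code is JITThunks::ctiStub in the diff below, which uses WTF::HashMap and MacroAssemblerCodeRef):

    // Sketch only: GlobalData and CodeRef are hypothetical stand-ins for
    // JSGlobalData and MacroAssemblerCodeRef.
    #include <map>
    #include <utility>

    struct GlobalData;
    struct CodeRef { void* executableAddress = nullptr; };

    // A thunk generator is just a function that emits one stub.
    typedef CodeRef (*ThunkGenerator)(GlobalData*);

    class ThunkCache {
    public:
        CodeRef ctiStub(GlobalData* globalData, ThunkGenerator generator)
        {
            // Compile each distinct thunk at most once; later requests
            // return the cached code.
            auto result = m_stubs.insert(std::make_pair(generator, CodeRef()));
            if (result.second)
                result.first->second = generator(globalData);
            return result.first->second;
        }
    private:
        std::map<ThunkGenerator, CodeRef> m_stubs;
    };

Call sites then go through this single entry point instead of reading a pre-filled struct, which is exactly the substitution visible throughout the diff below: globalData->jitStubs->ctiVirtualCall() becomes globalData->getCTIStub(virtualCallGenerator).code(), and likewise for the link and construct variants.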

* bytecode/CallLinkInfo.cpp:
(JSC::CallLinkInfo::unlink):
* jit/JIT.cpp:
(JSC::JIT::linkFor):
* jit/JIT.h:
(JIT):
* jit/JITCall.cpp:
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITInlines.h:
(JSC):
* jit/JITOpcodes.cpp:
(JSC):
(JSC::JIT::privateCompileCTINativeCall):
* jit/JITOpcodes32_64.cpp:
(JSC):
* jit/JITStubs.cpp:
(JSC::tryCacheGetByID):
* jit/JITThunks.cpp:
(JSC::JITThunks::JITThunks):
(JSC::JITThunks::ctiNativeCall):
(JSC::JITThunks::ctiNativeConstruct):
(JSC):
(JSC::JITThunks::hostFunctionStub):
* jit/JITThunks.h:
(JSC):
(JITThunks):
* jit/JSInterfaceJIT.h:
(JSInterfaceJIT):
(JSC::JSInterfaceJIT::emitJumpIfNotJSCell):
(JSC):
(JSC::JSInterfaceJIT::emitFastArithIntToImmNoCheck):
(JSC::JSInterfaceJIT::emitJumpIfNotType):
(JSC::JSInterfaceJIT::emitGetFromCallFrameHeaderPtr):
(JSC::JSInterfaceJIT::emitPutToCallFrameHeader):
(JSC::JSInterfaceJIT::emitPutImmediateToCallFrameHeader):
(JSC::JSInterfaceJIT::emitPutCellToCallFrameHeader):
(JSC::JSInterfaceJIT::preserveReturnAddressAfterCall):
(JSC::JSInterfaceJIT::restoreReturnAddressBeforeReturn):
(JSC::JSInterfaceJIT::restoreArgumentReference):
* jit/ThunkGenerators.cpp:
(JSC::generateSlowCaseFor):
(JSC):
(JSC::linkForGenerator):
(JSC::linkCallGenerator):
(JSC::linkConstructGenerator):
(JSC::virtualForGenerator):
(JSC::virtualCallGenerator):
(JSC::virtualConstructGenerator):
(JSC::stringLengthTrampolineGenerator):
(JSC::nativeForGenerator):
(JSC::nativeCallGenerator):
(JSC::nativeConstructGenerator):
(JSC::charCodeAtThunkGenerator):
(JSC::charAtThunkGenerator):
(JSC::fromCharCodeThunkGenerator):
(JSC::sqrtThunkGenerator):
(JSC::floorThunkGenerator):
(JSC::ceilThunkGenerator):
(JSC::roundThunkGenerator):
(JSC::expThunkGenerator):
(JSC::logThunkGenerator):
(JSC::absThunkGenerator):
(JSC::powThunkGenerator):
* jit/ThunkGenerators.h:
(JSC):
* runtime/Executable.h:
(NativeExecutable):
(JSC::NativeExecutable::nativeFunctionFor):
(JSC::NativeExecutable::offsetOfNativeFunctionFor):



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@138516 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent b19b6a2d
2012-12-26 Filip Pizlo <fpizlo@apple.com>
All JIT stubs should go through the getCTIStub API
https://bugs.webkit.org/show_bug.cgi?id=105750
Reviewed by Sam Weinig.
Previously JITThunks had two sets of thunks: one static set stored in a struct,
which was filled by JIT::privateCompileCTIMachineTrampolines, and another set stored in
a HashMap. Moreover, the code that generated the thunks in the CTI trampoline struct
had loads of copy-paste between JSVALUE32_64 and JSVALUE64, and was totally
unmodular with respect to calls versus constructors, among other things.
This changeset removes this struct and rationalizes the code that generates those
thunks. All of the thunks are now generated through the getCTIStub HashMap API. All
thunks for the baseline JIT now use the JSInterfaceJIT and have their codegen
located in ThunkGenerators.cpp. All thunks now share as much code as possible -
it turns out that they are almost 100% identical between 32_64 and 64, so that
works out great. A bunch of call vs. construct duplication was eliminated. And,
most of the call link versus virtual call duplication was also eliminated.
This does not change behavior but it does make it easier to add more thunks in
the future.
* bytecode/CallLinkInfo.cpp:
(JSC::CallLinkInfo::unlink):
* jit/JIT.cpp:
(JSC::JIT::linkFor):
* jit/JIT.h:
(JIT):
* jit/JITCall.cpp:
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITInlines.h:
(JSC):
* jit/JITOpcodes.cpp:
(JSC):
(JSC::JIT::privateCompileCTINativeCall):
* jit/JITOpcodes32_64.cpp:
(JSC):
* jit/JITStubs.cpp:
(JSC::tryCacheGetByID):
* jit/JITThunks.cpp:
(JSC::JITThunks::JITThunks):
(JSC::JITThunks::ctiNativeCall):
(JSC::JITThunks::ctiNativeConstruct):
(JSC):
(JSC::JITThunks::hostFunctionStub):
* jit/JITThunks.h:
(JSC):
(JITThunks):
* jit/JSInterfaceJIT.h:
(JSInterfaceJIT):
(JSC::JSInterfaceJIT::emitJumpIfNotJSCell):
(JSC):
(JSC::JSInterfaceJIT::emitFastArithIntToImmNoCheck):
(JSC::JSInterfaceJIT::emitJumpIfNotType):
(JSC::JSInterfaceJIT::emitGetFromCallFrameHeaderPtr):
(JSC::JSInterfaceJIT::emitPutToCallFrameHeader):
(JSC::JSInterfaceJIT::emitPutImmediateToCallFrameHeader):
(JSC::JSInterfaceJIT::emitPutCellToCallFrameHeader):
(JSC::JSInterfaceJIT::preserveReturnAddressAfterCall):
(JSC::JSInterfaceJIT::restoreReturnAddressBeforeReturn):
(JSC::JSInterfaceJIT::restoreArgumentReference):
* jit/ThunkGenerators.cpp:
(JSC::generateSlowCaseFor):
(JSC):
(JSC::linkForGenerator):
(JSC::linkCallGenerator):
(JSC::linkConstructGenerator):
(JSC::virtualForGenerator):
(JSC::virtualCallGenerator):
(JSC::virtualConstructGenerator):
(JSC::stringLengthTrampolineGenerator):
(JSC::nativeForGenerator):
(JSC::nativeCallGenerator):
(JSC::nativeConstructGenerator):
(JSC::charCodeAtThunkGenerator):
(JSC::charAtThunkGenerator):
(JSC::fromCharCodeThunkGenerator):
(JSC::sqrtThunkGenerator):
(JSC::floorThunkGenerator):
(JSC::ceilThunkGenerator):
(JSC::roundThunkGenerator):
(JSC::expThunkGenerator):
(JSC::logThunkGenerator):
(JSC::absThunkGenerator):
(JSC::powThunkGenerator):
* jit/ThunkGenerators.h:
(JSC):
* runtime/Executable.h:
(NativeExecutable):
(JSC::NativeExecutable::nativeFunctionFor):
(JSC::NativeExecutable::offsetOfNativeFunctionFor):
2012-12-25 Gyuyoung Kim <gyuyoung.kim@samsung.com>
[CMAKE] Remove header files in JavaScriptCore/CMakeLists.txt
@@ -45,7 +45,7 @@ void CallLinkInfo::unlink(JSGlobalData& globalData, RepatchBuffer& repatchBuffer
ASSERT_NOT_REACHED();
#endif
} else
repatchBuffer.relink(callReturnLocation, callType == Construct ? globalData.jitStubs->ctiVirtualConstructLink() : globalData.jitStubs->ctiVirtualCallLink());
repatchBuffer.relink(callReturnLocation, callType == Construct ? globalData.getCTIStub(linkConstructGenerator).code() : globalData.getCTIStub(linkCallGenerator).code());
hasSeenShouldRepatch = false;
callee.clear();
stub.clear();
@@ -870,12 +870,12 @@ void JIT::linkFor(JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* cal
// Patch the slow patch so we do not continue to try to link.
if (kind == CodeForCall) {
repatchBuffer.relink(callLinkInfo->callReturnLocation, globalData->jitStubs->ctiVirtualCall());
repatchBuffer.relink(callLinkInfo->callReturnLocation, globalData->getCTIStub(virtualCallGenerator).code());
return;
}
ASSERT(kind == CodeForConstruct);
repatchBuffer.relink(callLinkInfo->callReturnLocation, globalData->jitStubs->ctiVirtualConstruct());
repatchBuffer.relink(callLinkInfo->callReturnLocation, globalData->getCTIStub(virtualConstructGenerator).code());
}
} // namespace JSC
@@ -359,14 +359,6 @@ namespace JSC {
jit.privateCompilePutByVal(byValInfo, returnAddress, arrayMode);
}
static PassRefPtr<ExecutableMemoryHandle> compileCTIMachineTrampolines(JSGlobalData* globalData, TrampolineStructure *trampolines)
{
if (!globalData->canUseJIT())
return 0;
JIT jit(globalData, 0);
return jit.privateCompileCTIMachineTrampolines(globalData, trampolines);
}
static CodeRef compileCTINativeCall(JSGlobalData* globalData, NativeFunction func)
{
if (!globalData->canUseJIT()) {
@@ -415,7 +407,6 @@ namespace JSC {
void privateCompileGetByVal(ByValInfo*, ReturnAddressPtr, JITArrayMode);
void privateCompilePutByVal(ByValInfo*, ReturnAddressPtr, JITArrayMode);
PassRefPtr<ExecutableMemoryHandle> privateCompileCTIMachineTrampolines(JSGlobalData*, TrampolineStructure*);
Label privateCompileCTINativeCall(JSGlobalData*, bool isConstruct = false);
CodeRef privateCompileCTINativeCall(JSGlobalData*, NativeFunction);
void privateCompilePatchGetArrayLength(ReturnAddressPtr returnAddress);
@@ -441,7 +432,6 @@ namespace JSC {
void emitLoadDouble(int index, FPRegisterID value);
void emitLoadInt32ToDouble(int index, FPRegisterID value);
Jump emitJumpIfNotObject(RegisterID structureReg);
Jump emitJumpIfNotType(RegisterID baseReg, RegisterID scratchReg, JSType);
Jump addStructureTransitionCheck(JSCell*, Structure*, StructureStubInfo*, RegisterID scratch);
void addStructureTransitionCheck(JSCell*, Structure*, StructureStubInfo*, JumpList& failureCases, RegisterID scratch);
@@ -596,7 +586,6 @@ namespace JSC {
Jump emitJumpIfJSCell(RegisterID);
Jump emitJumpIfBothJSCells(RegisterID, RegisterID, RegisterID);
void emitJumpSlowCaseIfJSCell(RegisterID);
Jump emitJumpIfNotJSCell(RegisterID);
void emitJumpSlowCaseIfNotJSCell(RegisterID);
void emitJumpSlowCaseIfNotJSCell(RegisterID, int VReg);
Jump emitJumpIfImmediateInteger(RegisterID);
@@ -607,7 +596,6 @@ namespace JSC {
void emitJumpSlowCaseIfNotImmediateIntegers(RegisterID, RegisterID, RegisterID);
void emitFastArithReTagImmediate(RegisterID src, RegisterID dest);
void emitFastArithIntToImmNoCheck(RegisterID src, RegisterID dest);
void emitTagAsBoolImmediate(RegisterID reg);
void compileBinaryArithOp(OpcodeID, unsigned dst, unsigned src1, unsigned src2, OperandTypes opi);
@@ -823,10 +811,7 @@ namespace JSC {
void emitInitRegister(unsigned dst);
void emitPutToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry);
void emitPutCellToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry);
void emitPutIntToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry);
void emitPutImmediateToCallFrameHeader(void* value, JSStack::CallFrameHeaderEntry);
void emitGetFromCallFrameHeaderPtr(JSStack::CallFrameHeaderEntry, RegisterID to, RegisterID from = callFrameRegister);
void emitGetFromCallFrameHeader32(JSStack::CallFrameHeaderEntry, RegisterID to, RegisterID from = callFrameRegister);
#if USE(JSVALUE64)
@@ -857,16 +842,11 @@ namespace JSC {
Jump checkStructure(RegisterID reg, Structure* structure);
void restoreArgumentReference();
void restoreArgumentReferenceForTrampoline();
void updateTopCallFrame();
Call emitNakedCall(CodePtr function = CodePtr());
void preserveReturnAddressAfterCall(RegisterID);
void restoreReturnAddressBeforeReturn(RegisterID);
void restoreReturnAddressBeforeReturn(Address);
// Loads the character value of a single character string into dst.
void emitLoadCharacterString(RegisterID src, RegisterID dst, JumpList& failures);
@@ -135,7 +135,7 @@ void JIT::compileCallEvalSlowCase(Vector<SlowCaseEntry>::iterator& iter)
linkSlowCase(iter);
emitGetFromCallFrameHeader64(JSStack::Callee, regT0);
emitNakedCall(m_globalData->jitStubs->ctiVirtualCall());
emitNakedCall(m_globalData->getCTIStub(virtualCallGenerator).code());
sampleCodeBlock(m_codeBlock);
}
@@ -216,7 +216,7 @@ void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction*, Vector<SlowCase
linkSlowCase(iter);
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_globalData->jitStubs->ctiVirtualConstructLink() : m_globalData->jitStubs->ctiVirtualCallLink());
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_globalData->getCTIStub(linkConstructGenerator).code() : m_globalData->getCTIStub(linkCallGenerator).code());
sampleCodeBlock(m_codeBlock);
}
@@ -212,7 +212,7 @@ void JIT::compileCallEvalSlowCase(Vector<SlowCaseEntry>::iterator& iter)
linkSlowCase(iter);
emitLoad(JSStack::Callee, regT1, regT0);
emitNakedCall(m_globalData->jitStubs->ctiVirtualCall());
emitNakedCall(m_globalData->getCTIStub(virtualCallGenerator).code());
sampleCodeBlock(m_codeBlock);
}
@@ -297,7 +297,7 @@ void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction*, Vector<SlowCase
linkSlowCase(iter);
linkSlowCase(iter);
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_globalData->jitStubs->ctiVirtualConstructLink() : m_globalData->jitStubs->ctiVirtualCallLink());
m_callStructureStubCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(opcodeID == op_construct ? m_globalData->getCTIStub(linkConstructGenerator).code() : m_globalData->getCTIStub(linkCallGenerator).code());
sampleCodeBlock(m_codeBlock);
}
@@ -42,16 +42,6 @@ ALWAYS_INLINE JSValue JIT::getConstantOperand(unsigned src)
return m_codeBlock->getConstant(src);
}
ALWAYS_INLINE void JIT::emitPutCellToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry entry)
{
#if USE(JSVALUE32_64)
store32(TrustedImm32(JSValue::CellTag), tagFor(entry, callFrameRegister));
store32(from, payloadFor(entry, callFrameRegister));
#else
store64(from, addressFor(entry, callFrameRegister));
#endif
}
ALWAYS_INLINE void JIT::emitPutIntToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry entry)
{
#if USE(JSVALUE32_64)
@@ -62,20 +52,6 @@ ALWAYS_INLINE void JIT::emitPutIntToCallFrameHeader(RegisterID from, JSStack::Ca
#endif
}
ALWAYS_INLINE void JIT::emitPutToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry entry)
{
#if USE(JSVALUE32_64)
storePtr(from, payloadFor(entry, callFrameRegister));
#else
store64(from, addressFor(entry, callFrameRegister));
#endif
}
ALWAYS_INLINE void JIT::emitPutImmediateToCallFrameHeader(void* value, JSStack::CallFrameHeaderEntry entry)
{
storePtr(TrustedImmPtr(value), Address(callFrameRegister, entry * sizeof(Register)));
}
ALWAYS_INLINE void JIT::emitGetFromCallFrameHeaderPtr(JSStack::CallFrameHeaderEntry entry, RegisterID to, RegisterID from)
{
loadPtr(Address(from, entry * sizeof(Register)), to);
@@ -195,81 +171,6 @@ ALWAYS_INLINE void JIT::endUninterruptedSequence(int insnSpace, int constSpace,
#endif
#if CPU(ARM)
ALWAYS_INLINE void JIT::preserveReturnAddressAfterCall(RegisterID reg)
{
move(linkRegister, reg);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
move(reg, linkRegister);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(Address address)
{
loadPtr(address, linkRegister);
}
#elif CPU(SH4)
ALWAYS_INLINE void JIT::preserveReturnAddressAfterCall(RegisterID reg)
{
m_assembler.stspr(reg);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
m_assembler.ldspr(reg);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(Address address)
{
loadPtrLinkReg(address);
}
#elif CPU(MIPS)
ALWAYS_INLINE void JIT::preserveReturnAddressAfterCall(RegisterID reg)
{
move(returnAddressRegister, reg);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
move(reg, returnAddressRegister);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(Address address)
{
loadPtr(address, returnAddressRegister);
}
#else // CPU(X86) || CPU(X86_64)
ALWAYS_INLINE void JIT::preserveReturnAddressAfterCall(RegisterID reg)
{
pop(reg);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
push(reg);
}
ALWAYS_INLINE void JIT::restoreReturnAddressBeforeReturn(Address address)
{
push(address);
}
#endif
ALWAYS_INLINE void JIT::restoreArgumentReference()
{
move(stackPointerRegister, firstArgumentRegister);
poke(callFrameRegister, OBJECT_OFFSETOF(struct JITStackFrame, callFrame) / sizeof(void*));
}
ALWAYS_INLINE void JIT::updateTopCallFrame()
{
ASSERT(static_cast<int>(m_bytecodeOffset) >= 0);
@@ -351,12 +252,6 @@ ALWAYS_INLINE JIT::Jump JIT::emitJumpIfNotObject(RegisterID structureReg)
return branch8(Below, Address(structureReg, Structure::typeInfoTypeOffset()), TrustedImm32(ObjectType));
}
ALWAYS_INLINE JIT::Jump JIT::emitJumpIfNotType(RegisterID baseReg, RegisterID scratchReg, JSType type)
{
loadPtr(Address(baseReg, JSCell::structureOffset()), scratchReg);
return branch8(NotEqual, Address(scratchReg, Structure::typeInfoTypeOffset()), TrustedImm32(type));
}
#if ENABLE(SAMPLING_FLAGS)
ALWAYS_INLINE void JIT::setSamplingFlag(int32_t flag)
{
@@ -928,11 +823,6 @@ ALWAYS_INLINE void JIT::emitJumpSlowCaseIfJSCell(RegisterID reg)
addSlowCase(emitJumpIfJSCell(reg));
}
ALWAYS_INLINE JIT::Jump JIT::emitJumpIfNotJSCell(RegisterID reg)
{
return branchTest64(NonZero, reg, tagMaskRegister);
}
ALWAYS_INLINE void JIT::emitJumpSlowCaseIfNotJSCell(RegisterID reg)
{
addSlowCase(emitJumpIfNotJSCell(reg));
@@ -999,14 +889,6 @@ ALWAYS_INLINE void JIT::emitFastArithReTagImmediate(RegisterID src, RegisterID d
emitFastArithIntToImmNoCheck(src, dest);
}
// operand is int32_t, must have been zero-extended if register is 64-bit.
ALWAYS_INLINE void JIT::emitFastArithIntToImmNoCheck(RegisterID src, RegisterID dest)
{
if (src != dest)
move(src, dest);
or64(tagTypeNumberRegister, dest);
}
ALWAYS_INLINE void JIT::emitTagAsBoolImmediate(RegisterID reg)
{
or32(TrustedImm32(static_cast<int32_t>(ValueFalse)), reg);
This diff is collapsed.
@@ -851,7 +851,7 @@ NEVER_INLINE static void tryCacheGetByID(CallFrame* callFrame, CodeBlock* codeBl
if (isJSString(baseValue) && propertyName == callFrame->propertyNames().length) {
// The tradeoff of compiling an patched inline string length access routine does not seem
// to pay off, so we currently only do this for arrays.
ctiPatchCallByReturnAddress(codeBlock, returnAddress, globalData->jitStubs->ctiStringLengthTrampoline());
ctiPatchCallByReturnAddress(codeBlock, returnAddress, globalData->getCTIStub(stringLengthTrampolineGenerator).code());
return;
}
@@ -41,8 +41,6 @@ JITThunks::JITThunks(JSGlobalData* globalData)
if (!globalData->canUseJIT())
return;
m_executableMemory = JIT::compileCTIMachineTrampolines(globalData, &m_trampolineStructure);
ASSERT(!!m_executableMemory);
#if CPU(ARM_THUMB2)
// Unfortunate the arm compiler does not like the use of offsetof on JITStackFrame (since it contains non POD types),
// and the OBJECT_OFFSETOF macro does not appear constantish enough for it to be happy with its use in COMPILE_ASSERT
@@ -86,6 +84,23 @@ JITThunks::~JITThunks()
{
}
MacroAssemblerCodePtr JITThunks::ctiNativeCall(JSGlobalData* globalData)
{
#if ENABLE(LLINT)
if (!globalData->canUseJIT())
return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline);
#endif
return ctiStub(globalData, nativeCallGenerator).code();
}
MacroAssemblerCodePtr JITThunks::ctiNativeConstruct(JSGlobalData* globalData)
{
#if ENABLE(LLINT)
if (!globalData->canUseJIT())
return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline);
#endif
return ctiStub(globalData, nativeConstructGenerator).code();
}
MacroAssemblerCodeRef JITThunks::ctiStub(JSGlobalData* globalData, ThunkGenerator generator)
{
CTIStubMap::AddResult entry = m_ctiStubMap.add(generator, MacroAssemblerCodeRef());
@@ -99,7 +114,7 @@ NativeExecutable* JITThunks::hostFunctionStub(JSGlobalData* globalData, NativeFu
if (NativeExecutable* nativeExecutable = m_hostFunctionStubMap->get(function))
return nativeExecutable;
NativeExecutable* nativeExecutable = NativeExecutable::create(*globalData, JIT::compileCTINativeCall(globalData, function), function, MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct()), constructor, NoIntrinsic);
NativeExecutable* nativeExecutable = NativeExecutable::create(*globalData, JIT::compileCTINativeCall(globalData, function), function, MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct(globalData)), constructor, NoIntrinsic);
weakAdd(*m_hostFunctionStubMap, function, PassWeak<NativeExecutable>(nativeExecutable));
return nativeExecutable;
}
@@ -118,7 +133,7 @@ NativeExecutable* JITThunks::hostFunctionStub(JSGlobalData* globalData, NativeFu
} else
code = JIT::compileCTINativeCall(globalData, function);
NativeExecutable* nativeExecutable = NativeExecutable::create(*globalData, code, function, MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct()), callHostFunctionAsConstructor, intrinsic);
NativeExecutable* nativeExecutable = NativeExecutable::create(*globalData, code, function, MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct(globalData)), callHostFunctionAsConstructor, intrinsic);
weakAdd(*m_hostFunctionStubMap, function, PassWeak<NativeExecutable>(nativeExecutable));
return nativeExecutable;
}
@@ -44,44 +44,13 @@ namespace JSC {
class JSGlobalData;
class NativeExecutable;
struct TrampolineStructure {
MacroAssemblerCodePtr ctiStringLengthTrampoline;
MacroAssemblerCodePtr ctiVirtualCallLink;
MacroAssemblerCodePtr ctiVirtualConstructLink;
MacroAssemblerCodePtr ctiVirtualCall;
MacroAssemblerCodePtr ctiVirtualConstruct;
MacroAssemblerCodePtr ctiNativeCall;
MacroAssemblerCodePtr ctiNativeConstruct;
};
class JITThunksPrivateData;
class JITThunks {
public:
JITThunks(JSGlobalData*);
~JITThunks();
MacroAssemblerCodePtr ctiStringLengthTrampoline() { return m_trampolineStructure.ctiStringLengthTrampoline; }
MacroAssemblerCodePtr ctiVirtualCallLink() { return m_trampolineStructure.ctiVirtualCallLink; }
MacroAssemblerCodePtr ctiVirtualConstructLink() { return m_trampolineStructure.ctiVirtualConstructLink; }
MacroAssemblerCodePtr ctiVirtualCall() { return m_trampolineStructure.ctiVirtualCall; }
MacroAssemblerCodePtr ctiVirtualConstruct() { return m_trampolineStructure.ctiVirtualConstruct; }
MacroAssemblerCodePtr ctiNativeCall()
{
#if ENABLE(LLINT)
if (!m_executableMemory)
return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline);
#endif
return m_trampolineStructure.ctiNativeCall;
}
MacroAssemblerCodePtr ctiNativeConstruct()
{
#if ENABLE(LLINT)
if (!m_executableMemory)
return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline);
#endif
return m_trampolineStructure.ctiNativeConstruct;
}
MacroAssemblerCodePtr ctiNativeCall(JSGlobalData*);
MacroAssemblerCodePtr ctiNativeConstruct(JSGlobalData*);
MacroAssemblerCodeRef ctiStub(JSGlobalData*, ThunkGenerator);
@@ -95,9 +64,6 @@ private:
CTIStubMap m_ctiStubMap;
typedef HashMap<NativeFunction, Weak<NativeExecutable> > HostFunctionStubMap;
OwnPtr<HostFunctionStubMap> m_hostFunctionStubMap;
RefPtr<ExecutableMemoryHandle> m_executableMemory;
TrampolineStructure m_trampolineStructure;
};
} // namespace JSC
@@ -189,11 +189,25 @@ namespace JSC {
#endif
#if USE(JSVALUE64)
Jump emitJumpIfNotJSCell(RegisterID);
Jump emitJumpIfImmediateNumber(RegisterID reg);
Jump emitJumpIfNotImmediateNumber(RegisterID reg);
void emitFastArithImmToInt(RegisterID reg);
void emitFastArithIntToImmNoCheck(RegisterID src, RegisterID dest);
#endif
Jump emitJumpIfNotType(RegisterID baseReg, RegisterID scratchReg, JSType);
void emitGetFromCallFrameHeaderPtr(JSStack::CallFrameHeaderEntry, RegisterID to, RegisterID from = callFrameRegister);
void emitPutToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry);
void emitPutImmediateToCallFrameHeader(void* value, JSStack::CallFrameHeaderEntry);
void emitPutCellToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry);
void preserveReturnAddressAfterCall(RegisterID);
void restoreReturnAddressBeforeReturn(RegisterID);
void restoreReturnAddressBeforeReturn(Address);
void restoreArgumentReference();
inline Address payloadFor(int index, RegisterID base = callFrameRegister);
inline Address intPayloadFor(int index, RegisterID base = callFrameRegister);
inline Address intTagFor(int index, RegisterID base = callFrameRegister);
@@ -268,6 +282,11 @@ namespace JSC {
#endif
#if USE(JSVALUE64)
ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotJSCell(RegisterID reg)
{
return branchTest64(NonZero, reg, tagMaskRegister);
}
ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfImmediateNumber(RegisterID reg)
{
return branchTest64(NonZero, reg, tagTypeNumberRegister);
@@ -308,6 +327,13 @@ namespace JSC {
{
}
// operand is int32_t, must have been zero-extended if register is 64-bit.
ALWAYS_INLINE void JSInterfaceJIT::emitFastArithIntToImmNoCheck(RegisterID src, RegisterID dest)
{
if (src != dest)
move(src, dest);
or64(tagTypeNumberRegister, dest);
}
#endif
#if USE(JSVALUE64)
@@ -329,12 +355,122 @@ namespace JSC {
}
#endif
ALWAYS_INLINE JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotType(RegisterID baseReg, RegisterID scratchReg, JSType type)
{
loadPtr(Address(baseReg, JSCell::structureOffset()), scratchReg);
return branch8(NotEqual, Address(scratchReg, Structure::typeInfoTypeOffset()), TrustedImm32(type));
}
ALWAYS_INLINE void JSInterfaceJIT::emitGetFromCallFrameHeaderPtr(JSStack::CallFrameHeaderEntry entry, RegisterID to, RegisterID from)
{
loadPtr(Address(from, entry * sizeof(Register)), to);
}
ALWAYS_INLINE void JSInterfaceJIT::emitPutToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry entry)
{
#if USE(JSVALUE32_64)
storePtr(from, payloadFor(entry, callFrameRegister));
#else
store64(from, addressFor(entry, callFrameRegister));
#endif
}
ALWAYS_INLINE void JSInterfaceJIT::emitPutImmediateToCallFrameHeader(void* value, JSStack::CallFrameHeaderEntry entry)
{
storePtr(TrustedImmPtr(value), Address(callFrameRegister, entry * sizeof(Register)));
}
ALWAYS_INLINE void JSInterfaceJIT::emitPutCellToCallFrameHeader(RegisterID from, JSStack::CallFrameHeaderEntry entry)
{
#if USE(JSVALUE32_64)
store32(TrustedImm32(JSValue::CellTag), tagFor(entry, callFrameRegister));
store32(from, payloadFor(entry, callFrameRegister));
#else
store64(from, addressFor(entry, callFrameRegister));
#endif
}
inline JSInterfaceJIT::Address JSInterfaceJIT::addressFor(int virtualRegisterIndex, RegisterID base)
{
ASSERT(virtualRegisterIndex < FirstConstantRegisterIndex);
return Address(base, (static_cast<unsigned>(virtualRegisterIndex) * sizeof(Register)));
}
#if CPU(ARM)
ALWAYS_INLINE void JSInterfaceJIT::preserveReturnAddressAfterCall(RegisterID reg)
{
move(linkRegister, reg);
}
ALWAYS_INLINE void JSInterfaceJIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
move(reg, linkRegister);
}
ALWAYS_INLINE void JSInterfaceJIT::restoreReturnAddressBeforeReturn(Address address)
{
loadPtr(address, linkRegister);
}
#elif CPU(SH4)
ALWAYS_INLINE void JSInterfaceJIT::preserveReturnAddressAfterCall(RegisterID reg)
{
m_assembler.stspr(reg);
}
ALWAYS_INLINE void JSInterfaceJIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
m_assembler.ldspr(reg);
}
ALWAYS_INLINE void JSInterfaceJIT::restoreReturnAddressBeforeReturn(Address address)
{
loadPtrLinkReg(address);
}
#elif CPU(MIPS)
ALWAYS_INLINE void JSInterfaceJIT::preserveReturnAddressAfterCall(RegisterID reg)
{
move(returnAddressRegister, reg);
}
ALWAYS_INLINE void JSInterfaceJIT::restoreReturnAddressBeforeReturn(RegisterID reg)
{
move(reg, returnAddressRegister);
}
ALWAYS_INLINE void JSInterfaceJIT::restoreReturnAddressBeforeReturn(Address address)
{
loadPtr(address, returnAddressRegister);
}
#else // CPU(X86) || CPU(X86_64)
ALWAYS_INLINE void JSInterfaceJIT::preserveReturnAddressAfterCall(RegisterID reg)
{
pop(reg);
}