Commit d49bfe80 authored by fpizlo@apple.com

A CodeBlock's StructureStubInfos shouldn't be in a Vector that we search using code origins and machine code PCs
https://bugs.webkit.org/show_bug.cgi?id=122940

Source/JavaScriptCore: 

Reviewed by Oliver Hunt.
        
This accomplishes a number of simplifications. StructureStubInfo is now non-moving;
previously it lived in a Vector, so it could move, which made it unsafe to hold
pointers to a StructureStubInfo. This change also eliminates the use of the return PC
as a way of finding StructureStubInfos, and removes some of the need for the
compile-time property access records; for example, the DFG no longer has to save
information about registers in a property access record only to copy them into the
stub info later.

The main thing this accomplishes is that it makes it easier to add StructureStubInfos
at any stage of compilation.
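
To make the before/after concrete, here is a minimal standalone sketch of the two
lookup strategies, using std:: containers in place of WTF::Bag and WTF::HashMap.
Everything named "Sketch" is an illustrative stand-in, not WebKit API:

    #include <cassert>
    #include <functional>
    #include <memory>
    #include <unordered_map>
    #include <vector>

    // Stand-in for JSC::CodeOrigin (bytecode index only; no inlining).
    struct CodeOriginSketch {
        unsigned bytecodeIndex;
        bool operator==(CodeOriginSketch o) const { return bytecodeIndex == o.bytecodeIndex; }
    };

    struct CodeOriginSketchHash {
        size_t operator()(CodeOriginSketch k) const { return std::hash<unsigned>()(k.bytecodeIndex); }
    };

    // Stand-in for JSC::StructureStubInfo.
    struct StubInfoSketch {
        CodeOriginSketch codeOrigin;
    };

    class CodeBlockSketch {
    public:
        // Before this patch: stubs lived by value in a Vector, so adding one could
        // reallocate and move every element, and lookup meant keeping the Vector
        // sorted by return PC or bytecode index and binary-searching it.
        // After: each stub is allocated individually (Bag-style), so it never
        // moves and a raw pointer to it stays valid for the owner's lifetime.
        StubInfoSketch* addStubInfo(CodeOriginSketch origin)
        {
            m_stubInfos.push_back(std::make_unique<StubInfoSketch>(StubInfoSketch{origin}));
            return m_stubInfos.back().get();
        }

        using StubInfoMapSketch =
            std::unordered_map<CodeOriginSketch, StubInfoSketch*, CodeOriginSketchHash>;

        // Build an on-demand map from code origin to stub, replacing the sorted
        // Vector plus binary search.
        void getStubInfoMap(StubInfoMapSketch& result)
        {
            for (auto& stub : m_stubInfos)
                result[stub->codeOrigin] = stub.get();
        }

    private:
        std::vector<std::unique_ptr<StubInfoSketch>> m_stubInfos;
    };

    int main()
    {
        CodeBlockSketch block;
        StubInfoSketch* stub = block.addStubInfo({42});
        CodeBlockSketch::StubInfoMapSketch map;
        block.getStubInfoMap(map);
        assert(map.at({42}) == stub); // O(1) lookup by code origin, stable pointer
    }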

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printGetByIdCacheStatus):
(JSC::CodeBlock::dumpBytecode):
(JSC::CodeBlock::~CodeBlock):
(JSC::CodeBlock::propagateTransitions):
(JSC::CodeBlock::finalizeUnconditionally):
(JSC::CodeBlock::addStubInfo):
(JSC::CodeBlock::getStubInfoMap):
(JSC::CodeBlock::shrinkToFit):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::begin):
(JSC::CodeBlock::end):
(JSC::CodeBlock::rareCaseProfileForBytecodeOffset):
* bytecode/CodeOrigin.h:
(JSC::CodeOrigin::CodeOrigin):
(JSC::CodeOrigin::isHashTableDeletedValue):
(JSC::CodeOrigin::hash):
(JSC::CodeOriginHash::hash):
(JSC::CodeOriginHash::equal):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFor):
* bytecode/GetByIdStatus.h:
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeFor):
* bytecode/PutByIdStatus.h:
* bytecode/StructureStubInfo.h:
(JSC::getStructureStubInfoCodeOrigin):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGJITCompiler.h:
(JSC::DFG::PropertyAccessRecord::PropertyAccessRecord):
(JSC::DFG::InRecord::InRecord):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileIn):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::callOperation):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedPutById):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedPutById):
* jit/CCallHelpers.h:
(JSC::CCallHelpers::setupArgumentsWithExecState):
* jit/JIT.cpp:
(JSC::PropertyStubCompilationInfo::copyToStubInfo):
(JSC::JIT::privateCompile):
* jit/JIT.h:
(JSC::PropertyStubCompilationInfo::slowCaseInfo):
* jit/JITInlines.h:
(JSC::JIT::callOperation):
* jit/JITOperations.cpp:
* jit/JITOperations.h:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitSlow_op_get_by_id):
(JSC::JIT::emitSlow_op_put_by_id):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emitSlow_op_get_by_id):
(JSC::JIT::emitSlow_op_put_by_id):
* jit/Repatch.cpp:
(JSC::appropriateGenericPutByIdFunction):
(JSC::appropriateListBuildingPutByIdFunction):
(JSC::resetPutByID):

Source/WTF: 

Reviewed by Oliver Hunt.

* GNUmakefile.list.am:
* WTF.vcxproj/WTF.vcxproj:
* WTF.xcodeproj/project.pbxproj:
* wtf/BagToHashMap.h: Added. (A hedged sketch of its likely shape follows this
file list.)
(WTF::toHashMap):
* wtf/CMakeLists.txt:
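
A hedged sketch of the new helper, as promised above. Judging from its call site
in CodeBlock::getStubInfoMap in the diff below, toHashMap(m_stubInfos,
getStructureStubInfoCodeOrigin, result), it plausibly walks a Bag and fills a
HashMap, keying each element by a caller-supplied getter. This is a
reconstruction under that assumption, not the verbatim contents of the new file:

    // wtf/BagToHashMap.h (reconstructed sketch, not verbatim)
    namespace WTF {

    template<typename ElementType, typename KeyGetterFunctor, typename HashMapType>
    void toHashMap(Bag<ElementType>& bag, const KeyGetterFunctor& getKey, HashMapType& result)
    {
        // Bag iterators are truthy until exhausted; *iter is an ElementType*.
        for (typename Bag<ElementType>::iterator iter = bag.begin(); !!iter; ++iter)
            result.add(getKey(**iter), *iter); // key -> pointer to the element
    }

    } // namespace WTF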



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@157660 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 5a650be7
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -52,6 +52,7 @@
 #include "RepatchBuffer.h"
 #include "SlotVisitorInlines.h"
 #include <stdio.h>
+#include <wtf/BagToHashMap.h>
 #include <wtf/CommaPrinter.h>
 #include <wtf/StringExtras.h>
 #include <wtf/StringPrintStream.h>
@@ -331,7 +332,7 @@ static void dumpChain(PrintStream& out, ExecState* exec, StructureChain* chain,
 }
 #endif
 
-void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int location)
+void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int location, const StubInfoMap& map)
 {
     Instruction* instruction = instructions().begin() + location;
@@ -350,8 +351,8 @@ void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int l
 #endif
 
 #if ENABLE(JIT)
-    if (numberOfStructureStubInfos()) {
-        StructureStubInfo& stubInfo = getStubInfo(location);
+    if (StructureStubInfo* stubPtr = map.get(CodeOrigin(location))) {
+        StructureStubInfo& stubInfo = *stubPtr;
         if (stubInfo.seen) {
             out.printf(" jit(");
@@ -520,11 +521,17 @@ void CodeBlock::dumpBytecode(PrintStream& out)
         out.printf("; activation in r%d", activationRegister().offset());
     out.printf("\n");
 
+    StubInfoMap stubInfos;
+    {
+        ConcurrentJITLocker locker(m_lock);
+        getStubInfoMap(locker, stubInfos);
+    }
+
     const Instruction* begin = instructions().begin();
     const Instruction* end = instructions().end();
     for (const Instruction* it = begin; it != end; ++it)
-        dumpBytecode(out, exec, begin, it);
+        dumpBytecode(out, exec, begin, it, stubInfos);
 
     if (numberOfIdentifiers()) {
         out.printf("\nIdentifiers:\n");
         size_t i = 0;
@@ -552,11 +559,6 @@ void CodeBlock::dumpBytecode(PrintStream& out)
         } while (i < count);
     }
 
-#if ENABLE(JIT)
-    if (!m_structureStubInfos.isEmpty())
-        out.printf("\nStructures:\n");
-#endif
-
     if (m_rareData && !m_rareData->m_exceptionHandlers.isEmpty()) {
         out.printf("\nException Handlers:\n");
         unsigned i = 0;
@@ -657,7 +659,7 @@ void CodeBlock::dumpRareCaseProfile(PrintStream& out, const char* name, RareCase
 }
 #endif
 
-void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instruction* begin, const Instruction*& it)
+void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instruction* begin, const Instruction*& it, const StubInfoMap& map)
 {
     int location = it - begin;
     bool hasPrintedProfiling = false;
@@ -947,7 +949,7 @@ void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instructio
     case op_get_array_length:
     case op_get_string_length: {
         printGetByIdOp(out, exec, location, it);
-        printGetByIdCacheStatus(out, exec, location);
+        printGetByIdCacheStatus(out, exec, location, map);
         dumpValueProfiling(out, it, hasPrintedProfiling);
         break;
     }
@@ -1422,7 +1424,6 @@ static HashSet<CodeBlock*> liveCodeBlockSet;
 
 #define FOR_EACH_MEMBER_VECTOR(macro) \
     macro(instructions) \
-    macro(structureStubInfos) \
     macro(callLinkInfos) \
     macro(linkedCallerList) \
     macro(identifiers) \
@@ -1945,8 +1946,8 @@ CodeBlock::~CodeBlock()
     // m_incomingCalls linked lists through the execution of the ~CallLinkInfo
    // destructors.
 
-    for (size_t size = m_structureStubInfos.size(), i = 0; i < size; ++i)
-        m_structureStubInfos[i].deref();
+    for (Bag<StructureStubInfo>::iterator iter = m_stubInfos.begin(); !!iter; ++iter)
+        (*iter)->deref();
 #endif // ENABLE(JIT)
 
 #if DUMP_CODE_BLOCK_STATISTICS
@@ -2101,8 +2102,8 @@ void CodeBlock::propagateTransitions(SlotVisitor& visitor)
 
 #if ENABLE(JIT)
     if (JITCode::isJIT(jitType())) {
-        for (unsigned i = 0; i < m_structureStubInfos.size(); ++i) {
-            StructureStubInfo& stubInfo = m_structureStubInfos[i];
+        for (Bag<StructureStubInfo>::iterator iter = m_stubInfos.begin(); !!iter; ++iter) {
+            StructureStubInfo& stubInfo = **iter;
             switch (stubInfo.accessType) {
             case access_put_by_id_transition_normal:
             case access_put_by_id_transition_direct: {
@@ -2358,8 +2359,8 @@ void CodeBlock::finalizeUnconditionally()
                 && !Heap::isMarked(callLinkInfo(i).lastSeenCallee.get()))
                 callLinkInfo(i).lastSeenCallee.clear();
         }
-        for (size_t size = m_structureStubInfos.size(), i = 0; i < size; ++i) {
-            StructureStubInfo& stubInfo = m_structureStubInfos[i];
+        for (Bag<StructureStubInfo>::iterator iter = m_stubInfos.begin(); !!iter; ++iter) {
+            StructureStubInfo& stubInfo = **iter;
             if (stubInfo.visitWeakReferences())
                 continue;
@@ -2371,6 +2372,17 @@ void CodeBlock::finalizeUnconditionally()
 }
 
 #if ENABLE(JIT)
+StructureStubInfo* CodeBlock::addStubInfo()
+{
+    ConcurrentJITLocker locker(m_lock);
+    return m_stubInfos.add();
+}
+
+void CodeBlock::getStubInfoMap(const ConcurrentJITLocker&, StubInfoMap& result)
+{
+    toHashMap(m_stubInfos, getStructureStubInfoCodeOrigin, result);
+}
+
 void CodeBlock::resetStub(StructureStubInfo& stubInfo)
 {
     if (stubInfo.accessType == access_unset)
@@ -2593,7 +2605,6 @@ void CodeBlock::expressionRangeForBytecodeOffset(unsigned bytecodeOffset, int& d
 void CodeBlock::shrinkToFit(ShrinkMode shrinkMode)
 {
 #if ENABLE(JIT)
-    m_structureStubInfos.shrinkToFit();
     m_callLinkInfos.shrinkToFit();
 #endif
 #if ENABLE(VALUE_PROFILER)
@@ -3111,16 +3122,6 @@ void CodeBlock::setOptimizationThresholdBasedOnCompilationResult(CompilationResu
 }
 #endif
 
-static bool structureStubInfoLessThan(const StructureStubInfo& a, const StructureStubInfo& b)
-{
-    return a.callReturnLocation.executableAddress() < b.callReturnLocation.executableAddress();
-}
-
-void CodeBlock::sortStructureStubInfos()
-{
-    std::sort(m_structureStubInfos.begin(), m_structureStubInfos.end(), structureStubInfoLessThan);
-}
-
 uint32_t CodeBlock::adjustedExitCountThreshold(uint32_t desiredThreshold)
 {
     ASSERT(JITCode::isOptimizingJIT(jitType()));
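
One detail worth noting in the CodeBlock.cpp hunks above: addStubInfo() acquires
m_lock itself, while getStubInfoMap() instead takes a const ConcurrentJITLocker&
parameter, the WebKit idiom for stating in the signature that the caller must
already hold the lock (as dumpBytecode does above). A minimal sketch of that
idiom, with toy names rather than WebKit API:

    #include <mutex>
    #include <vector>

    struct Token { // stands in for ConcurrentJITLocker
        explicit Token(std::mutex& m) : guard(m) {}
        std::lock_guard<std::mutex> guard;
    };

    class Registry {
    public:
        std::mutex m_lock; // public, as CodeBlock::m_lock is

        // Callee locks internally (compare CodeBlock::addStubInfo).
        void add(int value)
        {
            Token locker(m_lock);
            m_items.push_back(value);
        }

        // The token parameter is never used; it exists so that "call this only
        // while holding the lock" is visible in the signature rather than only
        // in a comment (compare CodeBlock::getStubInfoMap).
        void snapshot(const Token&, std::vector<int>& out) const { out = m_items; }

    private:
        std::vector<int> m_items;
    };

    int main()
    {
        Registry registry;
        registry.add(1);

        std::vector<int> items;
        {
            Token locker(registry.m_lock); // caller takes the lock...
            registry.snapshot(locker, items); // ...and passes the token as proof
        }
    }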
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -73,6 +73,7 @@
 #include "ValueProfile.h"
 #include "VirtualRegister.h"
 #include "Watchpoint.h"
+#include <wtf/Bag.h>
 #include <wtf/FastMalloc.h>
 #include <wtf/PassOwnPtr.h>
 #include <wtf/RefCountedArray.h>
@@ -170,18 +171,13 @@ public:
         int& startOffset, int& endOffset, unsigned& line, unsigned& column);
 
 #if ENABLE(JIT)
-    StructureStubInfo& getStubInfo(ReturnAddressPtr returnAddress)
-    {
-        return *(binarySearch<StructureStubInfo, void*>(m_structureStubInfos, m_structureStubInfos.size(), returnAddress.value(), getStructureStubInfoReturnLocation));
-    }
-
-    StructureStubInfo& getStubInfo(unsigned bytecodeIndex)
-    {
-        return *(binarySearch<StructureStubInfo, unsigned>(m_structureStubInfos, m_structureStubInfos.size(), bytecodeIndex, getStructureStubInfoBytecodeIndex));
-    }
+    StructureStubInfo* addStubInfo();
+    Bag<StructureStubInfo>::iterator begin() { return m_stubInfos.begin(); }
+    Bag<StructureStubInfo>::iterator end() { return m_stubInfos.end(); }
 
     void resetStub(StructureStubInfo&);
+
+    void getStubInfoMap(const ConcurrentJITLocker&, StubInfoMap& result);
 
     ByValInfo& getByValInfo(unsigned bytecodeIndex)
     {
@@ -377,11 +373,6 @@ public:
     String nameForRegister(VirtualRegister);
 
 #if ENABLE(JIT)
-    void setNumberOfStructureStubInfos(size_t size) { m_structureStubInfos.grow(size); }
-    void sortStructureStubInfos();
-    size_t numberOfStructureStubInfos() const { return m_structureStubInfos.size(); }
-    StructureStubInfo& structureStubInfo(int index) { return m_structureStubInfos[index]; }
-
     void setNumberOfByValInfos(size_t size) { m_byValInfos.grow(size); }
     size_t numberOfByValInfos() const { return m_byValInfos.size(); }
     ByValInfo& byValInfo(size_t index) { return m_byValInfos[index]; }
@@ -445,8 +436,8 @@ public:
     RareCaseProfile* rareCaseProfileForBytecodeOffset(int bytecodeOffset)
     {
         return tryBinarySearch<RareCaseProfile, int>(
             m_rareCaseProfiles, m_rareCaseProfiles.size(), bytecodeOffset,
             getRareCaseProfileBytecodeOffset);
     }
 
     bool likelyToTakeSlowCase(int bytecodeOffset)
@@ -935,14 +926,14 @@ private:
             m_constantRegisters[i].set(*m_vm, ownerExecutable(), constants[i].get());
     }
 
-    void dumpBytecode(PrintStream&, ExecState*, const Instruction* begin, const Instruction*&);
+    void dumpBytecode(PrintStream&, ExecState*, const Instruction* begin, const Instruction*&, const StubInfoMap& = StubInfoMap());
     CString registerName(int r) const;
     void printUnaryOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
     void printBinaryOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
     void printConditionalJump(PrintStream&, ExecState*, const Instruction*, const Instruction*&, int location, const char* op);
     void printGetByIdOp(PrintStream&, ExecState*, int location, const Instruction*&);
-    void printGetByIdCacheStatus(PrintStream&, ExecState*, int location);
+    void printGetByIdCacheStatus(PrintStream&, ExecState*, int location, const StubInfoMap&);
     enum CacheDumpMode { DumpCaches, DontDumpCaches };
     void printCallOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op, CacheDumpMode, bool& hasPrintedProfiling);
     void printPutByIdOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
@@ -1031,7 +1022,7 @@ private:
     RefPtr<JITCode> m_jitCode;
     MacroAssemblerCodePtr m_jitCodeWithArityCheck;
 #if ENABLE(JIT)
-    Vector<StructureStubInfo> m_structureStubInfos;
+    Bag<StructureStubInfo> m_stubInfos;
     Vector<ByValInfo> m_byValInfos;
     Vector<CallLinkInfo> m_callLinkInfos;
     SentinelLinkedList<CallLinkInfo, BasicRawSentinelNode<CallLinkInfo>> m_incomingCalls;
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.h
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.h
@@ -32,6 +32,7 @@
 #include "ValueRecovery.h"
 #include "WriteBarrier.h"
 #include <wtf/BitVector.h>
+#include <wtf/HashMap.h>
 #include <wtf/PrintStream.h>
 #include <wtf/StdLibExtras.h>
 #include <wtf/Vector.h>
@@ -60,6 +61,12 @@ struct CodeOrigin {
     {
     }
 
+    CodeOrigin(WTF::HashTableDeletedValueType)
+        : bytecodeIndex(invalidBytecodeIndex)
+        , inlineCallFrame(bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(1)))
+    {
+    }
+
     explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = 0)
         : bytecodeIndex(bytecodeIndex)
         , inlineCallFrame(inlineCallFrame)
@@ -69,6 +76,11 @@ struct CodeOrigin {
     bool isSet() const { return bytecodeIndex != invalidBytecodeIndex; }
 
+    bool isHashTableDeletedValue() const
+    {
+        return bytecodeIndex == invalidBytecodeIndex && !!inlineCallFrame;
+    }
+
     // The inline depth is the depth of the inline stack, so 1 = not inlined,
     // 2 = inlined one deep, etc.
     unsigned inlineDepth() const;
@@ -81,8 +93,8 @@ struct CodeOrigin {
     static unsigned inlineDepthForCallFrame(InlineCallFrame*);
 
+    unsigned hash() const;
     bool operator==(const CodeOrigin& other) const;
     bool operator!=(const CodeOrigin& other) const { return !(*this == other); }
 
     // Get the inline stack. This is slow, and is intended for debugging only.
@@ -145,6 +157,12 @@ inline int CodeOrigin::stackOffset() const
     return inlineCallFrame->stackOffset;
 }
 
+inline unsigned CodeOrigin::hash() const
+{
+    return WTF::IntHash<unsigned>::hash(bytecodeIndex) +
+        WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame);
+}
+
 inline bool CodeOrigin::operator==(const CodeOrigin& other) const
 {
     return bytecodeIndex == other.bytecodeIndex
@@ -158,7 +176,27 @@ inline ScriptExecutable* CodeOrigin::codeOriginOwner() const
     return inlineCallFrame->executable.get();
 }
 
+struct CodeOriginHash {
+    static unsigned hash(const CodeOrigin& key) { return key.hash(); }
+    static bool equal(const CodeOrigin& a, const CodeOrigin& b) { return a == b; }
+    static const bool safeToCompareToEmptyOrDeleted = true;
+};
+
 } // namespace JSC
 
+namespace WTF {
+
+template<typename T> struct DefaultHash;
+template<> struct DefaultHash<JSC::CodeOrigin> {
+    typedef JSC::CodeOriginHash Hash;
+};
+
+template<typename T> struct HashTraits;
+template<> struct HashTraits<JSC::CodeOrigin> : SimpleClassHashTraits<JSC::CodeOrigin> {
+    static const bool emptyValueIsZero = false;
+};
+
+} // namespace WTF
+
 #endif // CodeOrigin_h
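
With the hash function, the deleted-value constructor and predicate, and the
DefaultHash/HashTraits specializations above, CodeOrigin can now key a
WTF::HashMap directly. StubInfoMap is presumably a typedef along these lines;
its real definition lives in StructureStubInfo.h, which this excerpt does not
show:

    // Assumed shape of the map type, not verbatim from StructureStubInfo.h.
    typedef HashMap<CodeOrigin, StructureStubInfo*, CodeOriginHash> StubInfoMap;

    // Which is what makes lookups like the one in GetByIdStatus::computeFor
    // below work:
    //     StructureStubInfo* stubInfo = map.get(CodeOrigin(bytecodeIndex));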
--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
@@ -105,7 +105,7 @@ void GetByIdStatus::computeForChain(GetByIdStatus& result, CodeBlock* profiledBl
 #endif
 }
 
-GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, StringImpl* uid)
+GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, StubInfoMap& map, unsigned bytecodeIndex, StringImpl* uid)
 {
     ConcurrentJITLocker locker(profiledBlock->m_lock);
@@ -113,28 +113,23 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     UNUSED_PARAM(bytecodeIndex);
     UNUSED_PARAM(uid);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
-    if (!profiledBlock->hasBaselineJITProfiling())
-        return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
-
-    // First check if it makes either calls, in which case we want to be super careful, or
-    // if it's not set at all, in which case we punt.
-    StructureStubInfo& stubInfo = profiledBlock->getStubInfo(bytecodeIndex);
-    if (!stubInfo.seen)
-        return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
-
-    if (stubInfo.resetByGC)
+    StructureStubInfo* stubInfo = map.get(CodeOrigin(bytecodeIndex));
+    if (!stubInfo || !stubInfo->seen)
+        return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
+
+    if (stubInfo->resetByGC)
         return GetByIdStatus(TakesSlowPath, true);
 
     PolymorphicAccessStructureList* list;
     int listSize;
-    switch (stubInfo.accessType) {
+    switch (stubInfo->accessType) {
     case access_get_by_id_self_list:
-        list = stubInfo.u.getByIdSelfList.structureList;
-        listSize = stubInfo.u.getByIdSelfList.listSize;
+        list = stubInfo->u.getByIdSelfList.structureList;
+        listSize = stubInfo->u.getByIdSelfList.listSize;
         break;
     case access_get_by_id_proto_list:
-        list = stubInfo.u.getByIdProtoList.structureList;
-        listSize = stubInfo.u.getByIdProtoList.listSize;
+        list = stubInfo->u.getByIdProtoList.structureList;
+        listSize = stubInfo->u.getByIdProtoList.listSize;
         break;
     default:
         list = 0;
@@ -153,12 +148,12 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     // Finally figure out if we can derive an access strategy.
     GetByIdStatus result;
     result.m_wasSeenInJIT = true; // This is interesting for bytecode dumping only.
-    switch (stubInfo.accessType) {
+    switch (stubInfo->accessType) {
     case access_unset:
         return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
 
     case access_get_by_id_self: {
-        Structure* structure = stubInfo.u.getByIdSelf.baseObjectStructure.get();
+        Structure* structure = stubInfo->u.getByIdSelf.baseObjectStructure.get();
         unsigned attributesIgnored;
         JSCell* specificValue;
         result.m_offset = structure->getConcurrently(
@@ -214,24 +209,24 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     }
 
     case access_get_by_id_proto: {
-        if (!stubInfo.u.getByIdProto.isDirect)
+        if (!stubInfo->u.getByIdProto.isDirect)
             return GetByIdStatus(MakesCalls, true);
         result.m_chain = adoptRef(new IntendedStructureChain(
             profiledBlock,
-            stubInfo.u.getByIdProto.baseObjectStructure.get(),
-            stubInfo.u.getByIdProto.prototypeStructure.get()));
+            stubInfo->u.getByIdProto.baseObjectStructure.get(),
+            stubInfo->u.getByIdProto.prototypeStructure.get()));
         computeForChain(result, profiledBlock, uid);
         break;
     }
 
     case access_get_by_id_chain: {