Commit d49bfe80 authored by fpizlo@apple.com

A CodeBlock's StructureStubInfos shouldn't be in a Vector that we search using code origins and machine code PCs
https://bugs.webkit.org/show_bug.cgi?id=122940

Source/JavaScriptCore: 

Reviewed by Oliver Hunt.
        
This accomplishes a number of simplifications. StructureStubInfo is now non-moving;
previously it lived in a Vector, so it could move whenever the Vector resized. Being
non-moving means you can hold pointers to a StructureStubInfo. This also eliminates the
use of the return PC as a way of finding a StructureStubInfo, and it removes some of the
need for the compile-time property access records; for example, the DFG no longer has to
save information about registers in a property access record only to copy it into the
stub info later.
        
The main thing it accomplishes is that it makes it easier to add StructureStubInfos
at any stage of compilation.
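Concretely, the lookup pattern this enables, condensed from the dumpBytecode and
computeFor hunks below (StubInfoMap is a HashMap keyed by CodeOrigin; codeBlock and
bytecodeIndex stand in for whatever block and bytecode offset the caller is inspecting):

    // Snapshot the Bag into a CodeOrigin-keyed map once, then do O(1)
    // lookups, instead of binary-searching a sorted Vector by return PC.
    StubInfoMap stubInfos;
    {
        ConcurrentJITLocker locker(codeBlock->m_lock);
        codeBlock->getStubInfoMap(locker, stubInfos);
    }
    if (StructureStubInfo* stubInfo = stubInfos.get(CodeOrigin(bytecodeIndex))) {
        // The pointer stays valid even if more stub infos are added later.
    }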

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printGetByIdCacheStatus):
(JSC::CodeBlock::dumpBytecode):
(JSC::CodeBlock::~CodeBlock):
(JSC::CodeBlock::propagateTransitions):
(JSC::CodeBlock::finalizeUnconditionally):
(JSC::CodeBlock::addStubInfo):
(JSC::CodeBlock::getStubInfoMap):
(JSC::CodeBlock::shrinkToFit):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::begin):
(JSC::CodeBlock::end):
(JSC::CodeBlock::rareCaseProfileForBytecodeOffset):
* bytecode/CodeOrigin.h:
(JSC::CodeOrigin::CodeOrigin):
(JSC::CodeOrigin::isHashTableDeletedValue):
(JSC::CodeOrigin::hash):
(JSC::CodeOriginHash::hash):
(JSC::CodeOriginHash::equal):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFor):
* bytecode/GetByIdStatus.h:
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeFor):
* bytecode/PutByIdStatus.h:
* bytecode/StructureStubInfo.h:
(JSC::getStructureStubInfoCodeOrigin):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGJITCompiler.h:
(JSC::DFG::PropertyAccessRecord::PropertyAccessRecord):
(JSC::DFG::InRecord::InRecord):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileIn):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::callOperation):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedPutById):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
(JSC::DFG::SpeculativeJIT::cachedPutById):
* jit/CCallHelpers.h:
(JSC::CCallHelpers::setupArgumentsWithExecState):
* jit/JIT.cpp:
(JSC::PropertyStubCompilationInfo::copyToStubInfo):
(JSC::JIT::privateCompile):
* jit/JIT.h:
(JSC::PropertyStubCompilationInfo::slowCaseInfo):
* jit/JITInlines.h:
(JSC::JIT::callOperation):
* jit/JITOperations.cpp:
* jit/JITOperations.h:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitSlow_op_get_by_id):
(JSC::JIT::emitSlow_op_put_by_id):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emitSlow_op_get_by_id):
(JSC::JIT::emitSlow_op_put_by_id):
* jit/Repatch.cpp:
(JSC::appropriateGenericPutByIdFunction):
(JSC::appropriateListBuildingPutByIdFunction):
(JSC::resetPutByID):

Source/WTF: 

Reviewed by Oliver Hunt.

* GNUmakefile.list.am:
* WTF.vcxproj/WTF.vcxproj:
* WTF.xcodeproj/project.pbxproj:
* wtf/BagToHashMap.h: Added. See the sketch after this list.
(WTF::toHashMap):
* wtf/CMakeLists.txt:
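toHashMap() walks every element of a Bag and inserts a pointer to it into a hash map
under a caller-supplied key. A sketch of that shape, reusing the hypothetical SimpleBag
from above and std::unordered_map in place of WTF::HashMap (the real header's signature
may differ):

    #include <unordered_map>

    // Snapshot an element-stable bag into a map: for each element, compute
    // a key with getKey and store a pointer to the element under that key.
    template<typename T, typename KeyFunc, typename MapType>
    void toHashMap(SimpleBag<T>& bag, const KeyFunc& getKey, MapType& result)
    {
        bag.forEach([&](T& element) {
            result.emplace(getKey(element), &element);
        });
    }

    // Example: SimpleBag<Stub> bag; std::unordered_map<unsigned, Stub*> map;
    // toHashMap(bag, [](Stub& s) { return s.id; }, map);

This is how CodeBlock::getStubInfoMap() builds the StubInfoMap below: the key function
is getStructureStubInfoCodeOrigin, so the map ends up keyed by each stub's CodeOrigin.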



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@157660 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 5a650be7
--- a/Source/JavaScriptCore/bytecode/CodeBlock.cpp
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.cpp
@@ -52,6 +52,7 @@
 #include "RepatchBuffer.h"
 #include "SlotVisitorInlines.h"
 #include <stdio.h>
+#include <wtf/BagToHashMap.h>
 #include <wtf/CommaPrinter.h>
 #include <wtf/StringExtras.h>
 #include <wtf/StringPrintStream.h>
@@ -331,7 +332,7 @@ static void dumpChain(PrintStream& out, ExecState* exec, StructureChain* chain,
 }
 #endif
 
-void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int location)
+void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int location, const StubInfoMap& map)
 {
     Instruction* instruction = instructions().begin() + location;
@@ -350,8 +351,8 @@ void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int l
 #endif
 #if ENABLE(JIT)
-    if (numberOfStructureStubInfos()) {
-        StructureStubInfo& stubInfo = getStubInfo(location);
+    if (StructureStubInfo* stubPtr = map.get(CodeOrigin(location))) {
+        StructureStubInfo& stubInfo = *stubPtr;
         if (stubInfo.seen) {
             out.printf(" jit(");
@@ -520,11 +521,17 @@ void CodeBlock::dumpBytecode(PrintStream& out)
         out.printf("; activation in r%d", activationRegister().offset());
     out.printf("\n");
 
+    StubInfoMap stubInfos;
+    {
+        ConcurrentJITLocker locker(m_lock);
+        getStubInfoMap(locker, stubInfos);
+    }
+
     const Instruction* begin = instructions().begin();
     const Instruction* end = instructions().end();
     for (const Instruction* it = begin; it != end; ++it)
-        dumpBytecode(out, exec, begin, it);
+        dumpBytecode(out, exec, begin, it, stubInfos);
 
     if (numberOfIdentifiers()) {
         out.printf("\nIdentifiers:\n");
         size_t i = 0;
@@ -552,11 +559,6 @@ void CodeBlock::dumpBytecode(PrintStream& out)
         } while (i < count);
     }
 
-#if ENABLE(JIT)
-    if (!m_structureStubInfos.isEmpty())
-        out.printf("\nStructures:\n");
-#endif
-
     if (m_rareData && !m_rareData->m_exceptionHandlers.isEmpty()) {
         out.printf("\nException Handlers:\n");
         unsigned i = 0;
@@ -657,7 +659,7 @@ void CodeBlock::dumpRareCaseProfile(PrintStream& out, const char* name, RareCase
 }
 #endif
 
-void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instruction* begin, const Instruction*& it)
+void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instruction* begin, const Instruction*& it, const StubInfoMap& map)
 {
     int location = it - begin;
     bool hasPrintedProfiling = false;
@@ -947,7 +949,7 @@ void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instructio
     case op_get_array_length:
     case op_get_string_length: {
         printGetByIdOp(out, exec, location, it);
-        printGetByIdCacheStatus(out, exec, location);
+        printGetByIdCacheStatus(out, exec, location, map);
         dumpValueProfiling(out, it, hasPrintedProfiling);
         break;
     }
@@ -1422,7 +1424,6 @@ static HashSet<CodeBlock*> liveCodeBlockSet;
 #define FOR_EACH_MEMBER_VECTOR(macro) \
     macro(instructions) \
-    macro(structureStubInfos) \
     macro(callLinkInfos) \
     macro(linkedCallerList) \
     macro(identifiers) \
@@ -1945,8 +1946,8 @@ CodeBlock::~CodeBlock()
     // m_incomingCalls linked lists through the execution of the ~CallLinkInfo
     // destructors.
 
-    for (size_t size = m_structureStubInfos.size(), i = 0; i < size; ++i)
-        m_structureStubInfos[i].deref();
+    for (Bag<StructureStubInfo>::iterator iter = m_stubInfos.begin(); !!iter; ++iter)
+        (*iter)->deref();
 #endif // ENABLE(JIT)
 
 #if DUMP_CODE_BLOCK_STATISTICS
@@ -2101,8 +2102,8 @@ void CodeBlock::propagateTransitions(SlotVisitor& visitor)
 #if ENABLE(JIT)
     if (JITCode::isJIT(jitType())) {
-        for (unsigned i = 0; i < m_structureStubInfos.size(); ++i) {
-            StructureStubInfo& stubInfo = m_structureStubInfos[i];
+        for (Bag<StructureStubInfo>::iterator iter = m_stubInfos.begin(); !!iter; ++iter) {
+            StructureStubInfo& stubInfo = **iter;
             switch (stubInfo.accessType) {
             case access_put_by_id_transition_normal:
             case access_put_by_id_transition_direct: {
@@ -2358,8 +2359,8 @@ void CodeBlock::finalizeUnconditionally()
             && !Heap::isMarked(callLinkInfo(i).lastSeenCallee.get()))
             callLinkInfo(i).lastSeenCallee.clear();
     }
-    for (size_t size = m_structureStubInfos.size(), i = 0; i < size; ++i) {
-        StructureStubInfo& stubInfo = m_structureStubInfos[i];
+    for (Bag<StructureStubInfo>::iterator iter = m_stubInfos.begin(); !!iter; ++iter) {
+        StructureStubInfo& stubInfo = **iter;
         if (stubInfo.visitWeakReferences())
             continue;
@@ -2371,6 +2372,17 @@
 }
 
 #if ENABLE(JIT)
+StructureStubInfo* CodeBlock::addStubInfo()
+{
+    ConcurrentJITLocker locker(m_lock);
+    return m_stubInfos.add();
+}
+
+void CodeBlock::getStubInfoMap(const ConcurrentJITLocker&, StubInfoMap& result)
+{
+    toHashMap(m_stubInfos, getStructureStubInfoCodeOrigin, result);
+}
+
 void CodeBlock::resetStub(StructureStubInfo& stubInfo)
 {
     if (stubInfo.accessType == access_unset)
@@ -2593,7 +2605,6 @@ void CodeBlock::expressionRangeForBytecodeOffset(unsigned bytecodeOffset, int& d
 void CodeBlock::shrinkToFit(ShrinkMode shrinkMode)
 {
 #if ENABLE(JIT)
-    m_structureStubInfos.shrinkToFit();
     m_callLinkInfos.shrinkToFit();
 #endif
 #if ENABLE(VALUE_PROFILER)
@@ -3111,16 +3122,6 @@ void CodeBlock::setOptimizationThresholdBasedOnCompilationResult(CompilationResu
 #endif
 
-static bool structureStubInfoLessThan(const StructureStubInfo& a, const StructureStubInfo& b)
-{
-    return a.callReturnLocation.executableAddress() < b.callReturnLocation.executableAddress();
-}
-
-void CodeBlock::sortStructureStubInfos()
-{
-    std::sort(m_structureStubInfos.begin(), m_structureStubInfos.end(), structureStubInfoLessThan);
-}
-
 uint32_t CodeBlock::adjustedExitCountThreshold(uint32_t desiredThreshold)
 {
     ASSERT(JITCode::isOptimizingJIT(jitType()));
--- a/Source/JavaScriptCore/bytecode/CodeBlock.h
+++ b/Source/JavaScriptCore/bytecode/CodeBlock.h
@@ -73,6 +73,7 @@
 #include "ValueProfile.h"
 #include "VirtualRegister.h"
 #include "Watchpoint.h"
+#include <wtf/Bag.h>
 #include <wtf/FastMalloc.h>
 #include <wtf/PassOwnPtr.h>
 #include <wtf/RefCountedArray.h>
@@ -170,18 +171,13 @@ public:
         int& startOffset, int& endOffset, unsigned& line, unsigned& column);
 
 #if ENABLE(JIT)
-    StructureStubInfo& getStubInfo(ReturnAddressPtr returnAddress)
-    {
-        return *(binarySearch<StructureStubInfo, void*>(m_structureStubInfos, m_structureStubInfos.size(), returnAddress.value(), getStructureStubInfoReturnLocation));
-    }
-
-    StructureStubInfo& getStubInfo(unsigned bytecodeIndex)
-    {
-        return *(binarySearch<StructureStubInfo, unsigned>(m_structureStubInfos, m_structureStubInfos.size(), bytecodeIndex, getStructureStubInfoBytecodeIndex));
-    }
+    StructureStubInfo* addStubInfo();
+    Bag<StructureStubInfo>::iterator begin() { return m_stubInfos.begin(); }
+    Bag<StructureStubInfo>::iterator end() { return m_stubInfos.end(); }
 
     void resetStub(StructureStubInfo&);
+    void getStubInfoMap(const ConcurrentJITLocker&, StubInfoMap& result);
 
     ByValInfo& getByValInfo(unsigned bytecodeIndex)
     {
@@ -377,11 +373,6 @@ public:
     String nameForRegister(VirtualRegister);
 
 #if ENABLE(JIT)
-    void setNumberOfStructureStubInfos(size_t size) { m_structureStubInfos.grow(size); }
-    void sortStructureStubInfos();
-    size_t numberOfStructureStubInfos() const { return m_structureStubInfos.size(); }
-    StructureStubInfo& structureStubInfo(int index) { return m_structureStubInfos[index]; }
-
     void setNumberOfByValInfos(size_t size) { m_byValInfos.grow(size); }
     size_t numberOfByValInfos() const { return m_byValInfos.size(); }
     ByValInfo& byValInfo(size_t index) { return m_byValInfos[index]; }
@@ -445,8 +436,8 @@ public:
     RareCaseProfile* rareCaseProfileForBytecodeOffset(int bytecodeOffset)
     {
         return tryBinarySearch<RareCaseProfile, int>(
-        m_rareCaseProfiles, m_rareCaseProfiles.size(), bytecodeOffset,
-        getRareCaseProfileBytecodeOffset);
+            m_rareCaseProfiles, m_rareCaseProfiles.size(), bytecodeOffset,
+            getRareCaseProfileBytecodeOffset);
     }
 
     bool likelyToTakeSlowCase(int bytecodeOffset)
@@ -935,14 +926,14 @@ private:
             m_constantRegisters[i].set(*m_vm, ownerExecutable(), constants[i].get());
     }
 
-    void dumpBytecode(PrintStream&, ExecState*, const Instruction* begin, const Instruction*&);
+    void dumpBytecode(PrintStream&, ExecState*, const Instruction* begin, const Instruction*&, const StubInfoMap& = StubInfoMap());
     CString registerName(int r) const;
     void printUnaryOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
     void printBinaryOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
     void printConditionalJump(PrintStream&, ExecState*, const Instruction*, const Instruction*&, int location, const char* op);
     void printGetByIdOp(PrintStream&, ExecState*, int location, const Instruction*&);
-    void printGetByIdCacheStatus(PrintStream&, ExecState*, int location);
+    void printGetByIdCacheStatus(PrintStream&, ExecState*, int location, const StubInfoMap&);
     enum CacheDumpMode { DumpCaches, DontDumpCaches };
     void printCallOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op, CacheDumpMode, bool& hasPrintedProfiling);
     void printPutByIdOp(PrintStream&, ExecState*, int location, const Instruction*&, const char* op);
@@ -1031,7 +1022,7 @@ private:
     RefPtr<JITCode> m_jitCode;
     MacroAssemblerCodePtr m_jitCodeWithArityCheck;
 #if ENABLE(JIT)
-    Vector<StructureStubInfo> m_structureStubInfos;
+    Bag<StructureStubInfo> m_stubInfos;
     Vector<ByValInfo> m_byValInfos;
     Vector<CallLinkInfo> m_callLinkInfos;
     SentinelLinkedList<CallLinkInfo, BasicRawSentinelNode<CallLinkInfo>> m_incomingCalls;
--- a/Source/JavaScriptCore/bytecode/CodeOrigin.h
+++ b/Source/JavaScriptCore/bytecode/CodeOrigin.h
@@ -32,6 +32,7 @@
 #include "ValueRecovery.h"
 #include "WriteBarrier.h"
 #include <wtf/BitVector.h>
+#include <wtf/HashMap.h>
 #include <wtf/PrintStream.h>
 #include <wtf/StdLibExtras.h>
 #include <wtf/Vector.h>
@@ -60,6 +61,12 @@ struct CodeOrigin {
     {
     }
 
+    CodeOrigin(WTF::HashTableDeletedValueType)
+        : bytecodeIndex(invalidBytecodeIndex)
+        , inlineCallFrame(bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(1)))
+    {
+    }
+
     explicit CodeOrigin(unsigned bytecodeIndex, InlineCallFrame* inlineCallFrame = 0)
         : bytecodeIndex(bytecodeIndex)
         , inlineCallFrame(inlineCallFrame)
@@ -69,6 +76,11 @@ struct CodeOrigin {
     bool isSet() const { return bytecodeIndex != invalidBytecodeIndex; }
 
+    bool isHashTableDeletedValue() const
+    {
+        return bytecodeIndex == invalidBytecodeIndex && !!inlineCallFrame;
+    }
+
     // The inline depth is the depth of the inline stack, so 1 = not inlined,
     // 2 = inlined one deep, etc.
     unsigned inlineDepth() const;
@@ -81,8 +93,8 @@ struct CodeOrigin {
     static unsigned inlineDepthForCallFrame(InlineCallFrame*);
 
+    unsigned hash() const;
     bool operator==(const CodeOrigin& other) const;
-
     bool operator!=(const CodeOrigin& other) const { return !(*this == other); }
 
     // Get the inline stack. This is slow, and is intended for debugging only.
@@ -145,6 +157,12 @@ inline int CodeOrigin::stackOffset() const
     return inlineCallFrame->stackOffset;
 }
 
+inline unsigned CodeOrigin::hash() const
+{
+    return WTF::IntHash<unsigned>::hash(bytecodeIndex) +
+        WTF::PtrHash<InlineCallFrame*>::hash(inlineCallFrame);
+}
+
 inline bool CodeOrigin::operator==(const CodeOrigin& other) const
 {
     return bytecodeIndex == other.bytecodeIndex
@@ -158,7 +176,27 @@ inline ScriptExecutable* CodeOrigin::codeOriginOwner() const
     return inlineCallFrame->executable.get();
 }
 
+struct CodeOriginHash {
+    static unsigned hash(const CodeOrigin& key) { return key.hash(); }
+    static bool equal(const CodeOrigin& a, const CodeOrigin& b) { return a == b; }
+    static const bool safeToCompareToEmptyOrDeleted = true;
+};
+
 } // namespace JSC
 
+namespace WTF {
+
+template<typename T> struct DefaultHash;
+template<> struct DefaultHash<JSC::CodeOrigin> {
+    typedef JSC::CodeOriginHash Hash;
+};
+
+template<typename T> struct HashTraits;
+template<> struct HashTraits<JSC::CodeOrigin> : SimpleClassHashTraits<JSC::CodeOrigin> {
+    static const bool emptyValueIsZero = false;
+};
+
+} // namespace WTF
+
 #endif // CodeOrigin_h
--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.cpp
@@ -105,7 +105,7 @@ void GetByIdStatus::computeForChain(GetByIdStatus& result, CodeBlock* profiledBl
 #endif
 }
 
-GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, StringImpl* uid)
+GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, StubInfoMap& map, unsigned bytecodeIndex, StringImpl* uid)
 {
     ConcurrentJITLocker locker(profiledBlock->m_lock);
@@ -113,28 +113,23 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     UNUSED_PARAM(bytecodeIndex);
     UNUSED_PARAM(uid);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
-    if (!profiledBlock->hasBaselineJITProfiling())
+    StructureStubInfo* stubInfo = map.get(CodeOrigin(bytecodeIndex));
+    if (!stubInfo || !stubInfo->seen)
         return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
 
-    // First check if it makes either calls, in which case we want to be super careful, or
-    // if it's not set at all, in which case we punt.
-    StructureStubInfo& stubInfo = profiledBlock->getStubInfo(bytecodeIndex);
-    if (!stubInfo.seen)
-        return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
-
-    if (stubInfo.resetByGC)
+    if (stubInfo->resetByGC)
         return GetByIdStatus(TakesSlowPath, true);
 
     PolymorphicAccessStructureList* list;
     int listSize;
-    switch (stubInfo.accessType) {
+    switch (stubInfo->accessType) {
     case access_get_by_id_self_list:
-        list = stubInfo.u.getByIdSelfList.structureList;
-        listSize = stubInfo.u.getByIdSelfList.listSize;
+        list = stubInfo->u.getByIdSelfList.structureList;
+        listSize = stubInfo->u.getByIdSelfList.listSize;
         break;
     case access_get_by_id_proto_list:
-        list = stubInfo.u.getByIdProtoList.structureList;
-        listSize = stubInfo.u.getByIdProtoList.listSize;
+        list = stubInfo->u.getByIdProtoList.structureList;
+        listSize = stubInfo->u.getByIdProtoList.listSize;
         break;
     default:
         list = 0;
@@ -153,12 +148,12 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     // Finally figure out if we can derive an access strategy.
     GetByIdStatus result;
     result.m_wasSeenInJIT = true; // This is interesting for bytecode dumping only.
-    switch (stubInfo.accessType) {
+    switch (stubInfo->accessType) {
     case access_unset:
         return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
 
     case access_get_by_id_self: {
-        Structure* structure = stubInfo.u.getByIdSelf.baseObjectStructure.get();
+        Structure* structure = stubInfo->u.getByIdSelf.baseObjectStructure.get();
         unsigned attributesIgnored;
         JSCell* specificValue;
         result.m_offset = structure->getConcurrently(
@@ -214,24 +209,24 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     }
 
     case access_get_by_id_proto: {
-        if (!stubInfo.u.getByIdProto.isDirect)
+        if (!stubInfo->u.getByIdProto.isDirect)
            return GetByIdStatus(MakesCalls, true);
         result.m_chain = adoptRef(new IntendedStructureChain(
             profiledBlock,
-            stubInfo.u.getByIdProto.baseObjectStructure.get(),
-            stubInfo.u.getByIdProto.prototypeStructure.get()));
+            stubInfo->u.getByIdProto.baseObjectStructure.get(),
+            stubInfo->u.getByIdProto.prototypeStructure.get()));
         computeForChain(result, profiledBlock, uid);
         break;
     }
 
     case access_get_by_id_chain: {
-        if (!stubInfo.u.getByIdChain.isDirect)
+        if (!stubInfo->u.getByIdChain.isDirect)
             return GetByIdStatus(MakesCalls, true);
         result.m_chain = adoptRef(new IntendedStructureChain(
             profiledBlock,
-            stubInfo.u.getByIdChain.baseObjectStructure.get(),
-            stubInfo.u.getByIdChain.chain.get(),
-            stubInfo.u.getByIdChain.count));
+            stubInfo->u.getByIdChain.baseObjectStructure.get(),
+            stubInfo->u.getByIdChain.chain.get(),
+            stubInfo->u.getByIdChain.count));
         computeForChain(result, profiledBlock, uid);
         break;
     }
--- a/Source/JavaScriptCore/bytecode/GetByIdStatus.h
+++ b/Source/JavaScriptCore/bytecode/GetByIdStatus.h
@@ -29,6 +29,7 @@
 #include "IntendedStructureChain.h"
 #include "PropertyOffset.h"
 #include "StructureSet.h"
+#include "StructureStubInfo.h"
 
 namespace JSC {
@@ -70,7 +71,7 @@ public:
         ASSERT((state == Simple) == (offset != invalidOffset));
     }
 
-    static GetByIdStatus computeFor(CodeBlock*, unsigned bytecodeIndex, StringImpl* uid);
+    static GetByIdStatus computeFor(CodeBlock*, StubInfoMap&, unsigned bytecodeIndex, StringImpl* uid);
     static GetByIdStatus computeFor(VM&, Structure*, StringImpl* uid);
 
     State state() const { return m_state; }
--- a/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
+++ b/Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
@@ -81,7 +81,7 @@ PutByIdStatus PutByIdStatus::computeFromLLInt(CodeBlock* profiledBlock, unsigned
 #endif
 }
 
-PutByIdStatus PutByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, StringImpl* uid)
+PutByIdStatus PutByIdStatus::computeFor(CodeBlock* profiledBlock, StubInfoMap& map, unsigned bytecodeIndex, StringImpl* uid)
 {
     ConcurrentJITLocker locker(profiledBlock->m_lock);
@@ -89,32 +89,29 @@ PutByIdStatus PutByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
     UNUSED_PARAM(bytecodeIndex);
     UNUSED_PARAM(uid);
 #if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
-    if (!profiledBlock->hasBaselineJITProfiling())
-        return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
-
     if (profiledBlock->likelyToTakeSlowCase(bytecodeIndex))
         return PutByIdStatus(TakesSlowPath, 0, 0, 0, invalidOffset);
 
-    StructureStubInfo& stubInfo = profiledBlock->getStubInfo(bytecodeIndex);
-    if (!stubInfo.seen)
+    StructureStubInfo* stubInfo = map.get(CodeOrigin(bytecodeIndex));
+    if (!stubInfo || !stubInfo->seen)
         return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
 
-    if (stubInfo.resetByGC)
+    if (stubInfo->resetByGC)
         return PutByIdStatus(TakesSlowPath, 0, 0, 0, invalidOffset);
 
-    switch (stubInfo.accessType) {
+    switch (stubInfo->accessType) {
     case access_unset:
         // If the JIT saw it but didn't optimize it, then assume that this takes slow path.
         return PutByIdStatus(TakesSlowPath, 0, 0, 0, invalidOffset);
 
     case access_put_by_id_replace: {
         PropertyOffset offset =
-            stubInfo.u.putByIdReplace.baseObjectStructure->getConcurrently(
+            stubInfo->u.putByIdReplace.baseObjectStructure->getConcurrently(
                 *profiledBlock->vm(), uid);
         if (isValidOffset(offset)) {
             return PutByIdStatus(
                 SimpleReplace,
-                stubInfo.u.putByIdReplace.baseObjectStructure.get(),
+                stubInfo->u.putByIdReplace.baseObjectStructure.get(),
                 0, 0,
                 offset);
         }
@@ -123,18 +120,18 @@ PutByIdStatus PutByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytec
 
     case access_put_by_id_transition_normal:
     case access_put_by_id_transition_direct: {
-        ASSERT(stubInfo.u.putByIdTransition.previousStructure->transitionWatchpointSetHasBeenInvalidated());
+        ASSERT(stubInfo->u.putByIdTransition.previousStructure->transitionWatchpointSetHasBeenInvalidated());
         PropertyOffset offset =
-            stubInfo.u.putByIdTransition.structure->getConcurrently(
+            stubInfo->u.putByIdTransition.structure->getConcurrently(
                 *profiledBlock->vm(), uid);
         if (isValidOffset(offset)) {
             return PutByIdStatus(
                 SimpleTransition,
-                stubInfo.u.putByIdTransition.previousStructure.get(),
-                stubInfo.u.putByIdTransition.structure.get(),
-                stubInfo.u.putByIdTransition.chain ? adoptRef(new IntendedStructureChain(
-                    profiledBlock, stubInfo.u.putByIdTransition.previousStructure.get(),
-                    stubInfo.u.putByIdTransition.chain.get())) : 0,
+                stubInfo->u.putByIdTransition.previousStructure.get(),
+                stubInfo->u.putByIdTransition.structure.get(),
+                stubInfo->u.putByIdTransition.chain ? adoptRef(new IntendedStructureChain(
+                    profiledBlock, stubInfo->u.putByIdTransition.previousStructure.get(),
+                    stubInfo->u.putByIdTransition.chain.get())) : 0,
                 offset);
         }
         return PutByIdStatus(TakesSlowPath, 0, 0, 0, invalidOffset);
--- a/Source/JavaScriptCore/bytecode/PutByIdStatus.h
+++ b/Source/JavaScriptCore/bytecode/PutByIdStatus.h
@@ -28,6 +28,7 @@
 #include "IntendedStructureChain.h"
 #include "PropertyOffset.h"
+#include "StructureStubInfo.h"
 #include <wtf/text/StringImpl.h>
 
 namespace JSC {
@@ -90,7 +91,7 @@ public:
         ASSERT((m_state == NoInformation || m_state == TakesSlowPath) == (m_offset == invalidOffset));
     }
 
-    static PutByIdStatus computeFor(CodeBlock*, unsigned bytecodeIndex, StringImpl* uid);
+    static PutByIdStatus computeFor(CodeBlock*, StubInfoMap&, unsigned bytecodeIndex, StringImpl* uid);
     static PutByIdStatus computeFor(VM&, JSGlobalObject*, Structure*, StringImpl* uid, bool isDirect);
 
     State state() const { return m_state; }
--- a/Source/JavaScriptCore/bytecode/StructureStubInfo.h
+++ b/Source/JavaScriptCore/bytecode/StructureStubInfo.h
 /*
- * Copyright (C) 2008, 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012, 2013 Apple Inc. All rights reserved.
 *