Commit 0c1b13e9 authored by oliver@apple.com

fourthTier: all inline caches should be thread-safe enough to allow a concurrent...

fourthTier: all inline caches should be thread-safe enough to allow a concurrent compilation thread to read them safely
https://bugs.webkit.org/show_bug.cgi?id=114762

Source/JavaScriptCore:

Reviewed by Mark Hahnenberg.

For most inline caches this is easy: the inline cache has a clean temporal
separation between doing the requested action (which may take an unbounded
amount of time, may recurse, and may do arbitrary things) and recording the
relevant information in the cache. So, we just put locks around the
recording bit. That part is always O(1) and does not recurse. The lock we
use is per-CodeBlock to achieve a good balance between locking granularity
and low space overhead. So a concurrent compilation thread will block only
if an inline cache ping-pongs in the code block being compiled (or inlined),
and never when other inline caches do things.
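
A condensed sketch of the pattern (performMegamorphicGet and
recordStructureInCache here are illustrative placeholders, not functions in
this patch; the real recording sites are tryCachePutByID, tryCacheGetByID,
and friends in JITStubs.cpp below):

    // The unbounded part runs with no lock held; it may recurse and call
    // arbitrary JS.
    JSValue result = performMegamorphicGet(callFrame, baseValue, ident);

    // Only the O(1), non-recursive recording step takes the per-CodeBlock
    // lock, so a concurrent compiler sees the cache either fully recorded
    // or untouched.
    {
        CodeBlock::Locker locker(codeBlock->m_lock);
        recordStructureInCache(stubInfo, baseValue.asCell()->structure());
    }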

For resolve operations, it's a bit tricky. The global resolve bit works
like any other IC in that it has the clean temporal separation. But the
operations vector itself doesn't have this separation, since we will be
filling it in tandem with actions that may take a long time. This patch
gets around this by having an m_ready bit in the ResolveOperations and
PutToBaseOperation. This is set while holding the CodeBlock's lock. If the
DFG observes the m_ready bit not set (while holding the lock), then it
conservatively assumes that the resolve hasn't happened yet and just
plants a ForceOSRExit.
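
In condensed form the protocol looks like this (abbreviated from the actual
changes to JITStubs.cpp and DFGByteCodeParser.cpp below; the surrounding
functions and the ident argument are elided):

    // Runtime slow path (main thread): filling the operations vector may
    // take arbitrarily long, so publish it only once it is complete.
    bool willReify = operations->isEmpty();
    JSValue result = JSScope::resolve(callFrame, ident, operations);
    if (willReify) {
        CodeBlock::Locker locker(callFrame->codeBlock()->m_lock);
        operations->m_ready = true;
    }

    // DFG parser (compilation thread): consume the vector only if it has
    // been published; otherwise conservatively plant a ForceOSRExit.
    {
        CodeBlock::Locker locker(m_inlineStackTop->m_profiledBlock->m_lock);
        if (!resolveOperations->m_ready) {
            addToGraph(ForceOSRExit);
            return false;
        }
    }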

* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
* bytecode/CodeBlock.h:
(CodeBlock):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFor):
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeFor):
* bytecode/ResolveGlobalStatus.cpp:
(JSC::ResolveGlobalStatus::computeFor):
* bytecode/ResolveOperation.h:
(JSC::ResolveOperations::ResolveOperations):
(ResolveOperations):
(JSC::PutToBaseOperation::PutToBaseOperation):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::parseResolveOperations):
(JSC::DFG::ByteCodeParser::parseBlock):
* jit/JITStubs.cpp:
(JSC::tryCachePutByID):
(JSC::tryCacheGetByID):
(JSC::DEFINE_STUB_FUNCTION):
(JSC::lazyLinkFor):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::setUpCall):
* runtime/JSScope.cpp:
(JSC::JSScope::resolveContainingScopeInternal):
(JSC::JSScope::resolveContainingScope):
(JSC::JSScope::resolvePut):

Source/WTF:

Reviewed by Mark Hahnenberg.

Implemented a new spinlock that is optimized for compactness by using just a byte.
This will be useful as we start using fine-grained locking in a bunch of places.

At some point I'll make these byte-sized spinlocks into adaptive mutexes, but for
now I think it's fine to do the evil thing and use spinning, particularly since we
only use them for short critical sections.
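
For reference, a sketch of the lock's shape (the full version is the added
wtf/ByteSpinLock.h below; this assumes the new uint8_t overload of
weakCompareAndSwap, the new pauseBriefly(), and the existing memory-barrier
helpers, and elides the WTF_MAKE_NONCOPYABLE boilerplate):

    class ByteSpinLock {
    public:
        ByteSpinLock() : m_lock(0) { }

        void lock()
        {
            // Spin until we win the 0 -> 1 transition on the byte.
            while (!weakCompareAndSwap(&m_lock, 0, 1))
                pauseBriefly();
            memoryBarrierAfterLock();
        }

        void unlock()
        {
            memoryBarrierBeforeUnlock();
            m_lock = 0; // A plain store releases the lock.
        }

        bool isHeld() const { return !!m_lock; }

    private:
        uint8_t m_lock; // The whole lock state is a single byte.
    };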

* WTF.xcodeproj/project.pbxproj:
* wtf/Atomics.h:
(WTF):
(WTF::weakCompareAndSwap):
* wtf/ByteSpinLock.h: Added.
(WTF):
(ByteSpinLock):
(WTF::ByteSpinLock::ByteSpinLock):
(WTF::ByteSpinLock::lock):
(WTF::ByteSpinLock::unlock):
(WTF::ByteSpinLock::isHeld):
* wtf/ThreadingPrimitives.h:
(WTF::pauseBriefly):
(WTF):

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@153122 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent ea77149c
2013-04-17 Filip Pizlo <fpizlo@apple.com>
fourthTier: all inline caches should be thread-safe enough to allow a concurrent compilation thread to read them safely
https://bugs.webkit.org/show_bug.cgi?id=114762
Reviewed by Mark Hahnenberg.
For most inline caches this is easy: the inline cache has a clean temporal
separation between doing the requested action (which may take an unbounded
amount of time, may recurse, and may do arbitrary things) and recording the
relevant information in the cache. So, we just put locks around the
recording bit. That part is always O(1) and does not recurse. The lock we
use is per-CodeBlock to achieve a good balance between locking granularity
and low space overhead. So a concurrent compilation thread will block only
if an inline cache ping-pongs in the code block being compiled (or inlined),
and never when other inline caches do things.
For resolve operations, it's a bit tricky. The global resolve bit works
like any other IC in that it has the clean temporal separation. But the
operations vector itself doesn't have this separation, since we will be
filling it in tandem with actions that may take a long time. This patch
gets around this by having an m_ready bit in the ResolveOperations and
PutToBaseOperation. This is set while holding the CodeBlock's lock. If the
DFG observes the m_ready bit not set (while holding the lock), then it
conservatively assumes that the resolve hasn't happened yet and just
plants a ForceOSRExit.
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
* bytecode/CodeBlock.h:
(CodeBlock):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFor):
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeFor):
* bytecode/ResolveGlobalStatus.cpp:
(JSC::ResolveGlobalStatus::computeFor):
* bytecode/ResolveOperation.h:
(JSC::ResolveOperations::ResolveOperations):
(ResolveOperations):
(JSC::PutToBaseOperation::PutToBaseOperation):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::parseResolveOperations):
(JSC::DFG::ByteCodeParser::parseBlock):
* jit/JITStubs.cpp:
(JSC::tryCachePutByID):
(JSC::tryCacheGetByID):
(JSC::DEFINE_STUB_FUNCTION):
(JSC::lazyLinkFor):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::setUpCall):
* runtime/JSScope.cpp:
(JSC::JSScope::resolveContainingScopeInternal):
(JSC::JSScope::resolveContainingScope):
(JSC::JSScope::resolvePut):
2013-04-16 Filip Pizlo <fpizlo@apple.com>
fourthTier: DFG should be able to query Structure without modifying it
......
......@@ -97,6 +97,8 @@ CallLinkStatus CallLinkStatus::computeFromLLInt(CodeBlock* profiledBlock, unsign
CallLinkStatus CallLinkStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex)
{
CodeBlock::Locker locker(profiledBlock->m_lock);
UNUSED_PARAM(profiledBlock);
UNUSED_PARAM(bytecodeIndex);
#if ENABLE(JIT) && ENABLE(VALUE_PROFILER)
......
......@@ -3247,12 +3247,12 @@ void CodeBlock::tallyFrequentExitSites()
}
break;
}
#if ENABLE(FTL_JIT)
case JITCode::FTLJIT: {
// There is no easy way to avoid duplicating this code since the FTL::JITCode::osrExit
// vector contains a totally different type, that just so happens to behave like
// DFG::JITCode::osrExit.
#if ENABLE(FTL_JIT)
FTL::JITCode* jitCode = m_jitCode->ftl();
for (unsigned i = 0; i < jitCode->osrExit.size(); ++i) {
FTL::OSRExit& exit = jitCode->osrExit[i];
......@@ -3264,9 +3264,9 @@ void CodeBlock::tallyFrequentExitSites()
dataLog("OSR exit #", i, " (bc#", exit.m_codeOrigin.bytecodeIndex, ", ", exit.m_kind, ") for ", *this, " occurred frequently: counting as frequent exit site.\n");
#endif
}
#endif
break;
}
}
#endif
default:
RELEASE_ASSERT_NOT_REACHED();
......
......@@ -69,6 +69,7 @@
#include "UnconditionalFinalizer.h"
#include "ValueProfile.h"
#include "Watchpoint.h"
#include <wtf/ByteSpinLock.h>
#include <wtf/RefCountedArray.h>
#include <wtf/FastAllocBase.h>
#include <wtf/PassOwnPtr.h>
......@@ -920,6 +921,26 @@ namespace JSC {
int m_numCalleeRegisters;
int m_numVars;
bool m_isConstructor;
// This is intentionally public; it's the responsibility of anyone doing any
// of the following to hold the lock:
//
// - Modifying any inline cache in this code block.
//
// - Querying any inline cache in this code block, from a thread other than
// the main thread.
//
// Additionally, it's only legal to modify the inline cache on the main
// thread. This means that the main thread can query the inline cache without
// locking. This is crucial since executing the inline cache is effectively
// "querying" it.
//
// Another exception to the rules is that the GC can do whatever it wants
// without holding any locks, because the GC is guaranteed to wait until any
// concurrent compilation threads finish what they're doing.
typedef ByteSpinLock Lock;
typedef ByteSpinLocker Locker;
Lock m_lock;
protected:
#if ENABLE(JIT)
......
......@@ -112,6 +112,8 @@ void GetByIdStatus::computeForChain(GetByIdStatus& result, CodeBlock* profiledBl
GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
{
CodeBlock::Locker locker(profiledBlock->m_lock);
UNUSED_PARAM(profiledBlock);
UNUSED_PARAM(bytecodeIndex);
UNUSED_PARAM(ident);
......
......@@ -80,6 +80,8 @@ PutByIdStatus PutByIdStatus::computeFromLLInt(CodeBlock* profiledBlock, unsigned
PutByIdStatus PutByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
{
CodeBlock::Locker locker(profiledBlock->m_lock);
UNUSED_PARAM(profiledBlock);
UNUSED_PARAM(bytecodeIndex);
UNUSED_PARAM(ident);
......
......@@ -48,6 +48,8 @@ static ResolveGlobalStatus computeForStructure(CodeBlock* codeBlock, Structure*
ResolveGlobalStatus ResolveGlobalStatus::computeFor(CodeBlock* codeBlock, int, ResolveOperation* operation, Identifier& identifier)
{
CodeBlock::Locker locker(codeBlock->m_lock);
ASSERT(operation->m_operation == ResolveOperation::GetAndReturnGlobalProperty);
if (!operation->m_structure)
return ResolveGlobalStatus();
......
......@@ -139,7 +139,11 @@ struct ResolveOperation {
}
};
typedef Vector<ResolveOperation> ResolveOperations;
struct ResolveOperations : Vector<ResolveOperation> {
ResolveOperations() : m_ready(false) { }
bool m_ready;
};
struct PutToBaseOperation {
PutToBaseOperation(bool isStrict)
......@@ -147,6 +151,7 @@ struct PutToBaseOperation {
, m_isDynamic(false)
, m_isStrict(isStrict)
, m_predicatePointer(0)
, m_ready(false)
{
}
......@@ -172,6 +177,7 @@ struct PutToBaseOperation {
int32_t m_offsetInButterfly;
};
};
bool m_ready;
};
}
......
......@@ -1786,10 +1786,17 @@ Node* ByteCodeParser::getScope(bool skipTop, unsigned skipCount)
bool ByteCodeParser::parseResolveOperations(SpeculatedType prediction, unsigned identifier, ResolveOperations* resolveOperations, PutToBaseOperation* putToBaseOperation, Node** base, Node** value)
{
if (resolveOperations->isEmpty()) {
addToGraph(ForceOSRExit);
return false;
{
CodeBlock::Locker locker(m_inlineStackTop->m_profiledBlock->m_lock);
if (!resolveOperations->m_ready) {
addToGraph(ForceOSRExit);
return false;
}
}
ASSERT(!resolveOperations->isEmpty());
JSGlobalObject* globalObject = m_inlineStackTop->m_codeBlock->globalObject();
int skipCount = 0;
bool skipTop = false;
......@@ -3150,6 +3157,17 @@ bool ByteCodeParser::parseBlock(unsigned limit)
unsigned identifier = m_inlineStackTop->m_identifierRemap[currentInstruction[2].u.operand];
unsigned value = currentInstruction[3].u.operand;
PutToBaseOperation* putToBase = currentInstruction[4].u.putToBaseOperation;
{
CodeBlock::Locker locker(m_inlineStackTop->m_profiledBlock->m_lock);
if (!putToBase->m_ready) {
addToGraph(ForceOSRExit);
addToGraph(Phantom, get(base));
addToGraph(Phantom, get(value));
NEXT_OPCODE(op_put_to_base);
}
}
if (putToBase->m_isDynamic) {
addToGraph(PutById, OpInfo(identifier), get(base), get(value));
......
......@@ -914,6 +914,8 @@ void performPlatformSpecificJITAssertions(VM* vm)
NEVER_INLINE static void tryCachePutByID(CallFrame* callFrame, CodeBlock* codeBlock, ReturnAddressPtr returnAddress, JSValue baseValue, const PutPropertySlot& slot, StructureStubInfo* stubInfo, bool direct)
{
CodeBlock::Locker locker(codeBlock->m_lock);
// The interpreter checks for recursion here; I do not believe this can occur in CTI.
if (!baseValue.isCell())
......@@ -968,6 +970,8 @@ NEVER_INLINE static void tryCachePutByID(CallFrame* callFrame, CodeBlock* codeBl
NEVER_INLINE static void tryCacheGetByID(CallFrame* callFrame, CodeBlock* codeBlock, ReturnAddressPtr returnAddress, JSValue baseValue, const Identifier& propertyName, const PropertySlot& slot, StructureStubInfo* stubInfo)
{
CodeBlock::Locker locker(codeBlock->m_lock);
// FIXME: Write a test that proves we need to check for recursion here just
// like the interpreter does, then add a check for recursion.
......@@ -1708,6 +1712,8 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_get_by_id_self_fail)
CHECK_FOR_EXCEPTION();
CodeBlock::Locker locker(codeBlock->m_lock);
if (baseValue.isCell()
&& slot.isCacheable()
&& !baseValue.asCell()->structure()->isUncacheableDictionary()
......@@ -1828,6 +1834,8 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_get_by_id_proto_list)
return JSValue::encode(result);
}
CodeBlock::Locker locker(codeBlock->m_lock);
Structure* structure = baseValue.asCell()->structure();
ASSERT(slot.slotBase().isObject());
......@@ -2291,6 +2299,7 @@ inline void* lazyLinkFor(CallFrame* callFrame, CodeSpecializationKind kind)
codePtr = functionExecutable->generatedJITCodeFor(kind)->addressForCall();
}
CodeBlock::Locker locker(callFrame->callerFrame()->codeBlock()->m_lock);
if (!callLinkInfo->seenOnce())
callLinkInfo->setSeen();
else
......@@ -2366,6 +2375,7 @@ DEFINE_STUB_FUNCTION(void*, vm_lazyLinkClosureCall)
if (shouldLink) {
ASSERT(codePtr);
CodeBlock::Locker locker(callerCodeBlock->m_lock);
JIT::compileClosureCall(vm, callLinkInfo, callerCodeBlock, calleeCodeBlock, structure, executable, codePtr);
callLinkInfo->hasSeenClosure = true;
} else
......@@ -2506,8 +2516,17 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_resolve)
STUB_INIT_STACK_FRAME(stackFrame);
CallFrame* callFrame = stackFrame.callFrame;
JSValue result = JSScope::resolve(callFrame, stackFrame.args[0].identifier(), stackFrame.args[1].resolveOperations());
ResolveOperations* operations = stackFrame.args[1].resolveOperations();
bool willReify = operations->isEmpty();
JSValue result = JSScope::resolve(callFrame, stackFrame.args[0].identifier(), operations);
if (willReify) {
CodeBlock::Locker locker(callFrame->codeBlock()->m_lock);
operations->m_ready = true;
}
CHECK_FOR_EXCEPTION_AT_END();
return JSValue::encode(result);
}
......@@ -2519,7 +2538,15 @@ DEFINE_STUB_FUNCTION(void, op_put_to_base)
CallFrame* callFrame = stackFrame.callFrame;
JSValue base = callFrame->r(stackFrame.args[0].int32()).jsValue();
JSValue value = callFrame->r(stackFrame.args[2].int32()).jsValue();
JSScope::resolvePut(callFrame, base, stackFrame.args[1].identifier(), value, stackFrame.args[3].putToBaseOperation());
PutToBaseOperation* operation = stackFrame.args[3].putToBaseOperation();
bool firstTime = !operation->m_ready;
JSScope::resolvePut(callFrame, base, stackFrame.args[1].identifier(), value, operation);
if (firstTime) {
CodeBlock::Locker locker(callFrame->codeBlock()->m_lock);
operation->m_ready = true;
}
CHECK_FOR_EXCEPTION_AT_END();
}
......@@ -2847,16 +2874,36 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_negate)
DEFINE_STUB_FUNCTION(EncodedJSValue, op_resolve_base)
{
STUB_INIT_STACK_FRAME(stackFrame);
return JSValue::encode(JSScope::resolveBase(stackFrame.callFrame, stackFrame.args[0].identifier(), false, stackFrame.args[1].resolveOperations(), stackFrame.args[2].putToBaseOperation()));
ResolveOperations* operations = stackFrame.args[1].resolveOperations();
bool willReify = operations->isEmpty();
JSValue result = JSScope::resolveBase(stackFrame.callFrame, stackFrame.args[0].identifier(), false, operations, stackFrame.args[2].putToBaseOperation());
if (willReify) {
CodeBlock::Locker locker(stackFrame.callFrame->codeBlock()->m_lock);
operations->m_ready = true;
}
return JSValue::encode(result);
}
DEFINE_STUB_FUNCTION(EncodedJSValue, op_resolve_base_strict_put)
{
STUB_INIT_STACK_FRAME(stackFrame);
if (JSValue result = JSScope::resolveBase(stackFrame.callFrame, stackFrame.args[0].identifier(), true, stackFrame.args[1].resolveOperations(), stackFrame.args[2].putToBaseOperation()))
ResolveOperations* operations = stackFrame.args[1].resolveOperations();
bool willReify = operations->isEmpty();
if (JSValue result = JSScope::resolveBase(stackFrame.callFrame, stackFrame.args[0].identifier(), true, operations, stackFrame.args[2].putToBaseOperation())) {
if (willReify) {
CodeBlock::Locker locker(stackFrame.callFrame->codeBlock()->m_lock);
operations->m_ready = true;
}
return JSValue::encode(result);
}
VM_THROW_EXCEPTION();
}
......@@ -3124,7 +3171,13 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_resolve_with_base)
STUB_INIT_STACK_FRAME(stackFrame);
CallFrame* callFrame = stackFrame.callFrame;
JSValue result = JSScope::resolveWithBase(callFrame, stackFrame.args[0].identifier(), &callFrame->registers()[stackFrame.args[1].int32()], stackFrame.args[2].resolveOperations(), stackFrame.args[3].putToBaseOperation());
ResolveOperations* operations = stackFrame.args[2].resolveOperations();
bool willReify = operations->isEmpty();
JSValue result = JSScope::resolveWithBase(callFrame, stackFrame.args[0].identifier(), &callFrame->registers()[stackFrame.args[1].int32()], operations, stackFrame.args[3].putToBaseOperation());
if (willReify) {
CodeBlock::Locker locker(stackFrame.callFrame->codeBlock()->m_lock);
operations->m_ready = true;
}
CHECK_FOR_EXCEPTION_AT_END();
return JSValue::encode(result);
}
......@@ -3134,7 +3187,13 @@ DEFINE_STUB_FUNCTION(EncodedJSValue, op_resolve_with_this)
STUB_INIT_STACK_FRAME(stackFrame);
CallFrame* callFrame = stackFrame.callFrame;
JSValue result = JSScope::resolveWithThis(callFrame, stackFrame.args[0].identifier(), &callFrame->registers()[stackFrame.args[1].int32()], stackFrame.args[2].resolveOperations());
ResolveOperations* operations = stackFrame.args[2].resolveOperations();
bool willReify = operations->isEmpty();
JSValue result = JSScope::resolveWithThis(callFrame, stackFrame.args[0].identifier(), &callFrame->registers()[stackFrame.args[1].int32()], operations);
if (willReify) {
CodeBlock::Locker locker(stackFrame.callFrame->codeBlock()->m_lock);
operations->m_ready = true;
}
CHECK_FOR_EXCEPTION_AT_END();
return JSValue::encode(result);
}
......
......@@ -800,6 +800,12 @@ LLINT_SLOW_PATH_DECL(slow_path_resolve)
default:
break;
}
{
CodeBlock::Locker locker(exec->codeBlock()->m_lock);
operations->m_ready = true;
}
LLINT_RETURN_PROFILED(op_resolve, result);
}
......@@ -816,6 +822,12 @@ LLINT_SLOW_PATH_DECL(slow_path_put_to_base)
default:
break;
}
{
CodeBlock::Locker locker(exec->codeBlock()->m_lock);
operation->m_ready = true;
}
LLINT_END();
}
......@@ -854,6 +866,12 @@ LLINT_SLOW_PATH_DECL(slow_path_resolve_base)
default:
break;
}
{
CodeBlock::Locker locker(exec->codeBlock()->m_lock);
operations->m_ready = true;
}
LLINT_PROFILE_VALUE(op_resolve_base, result);
LLINT_RETURN(result);
}
......@@ -862,7 +880,12 @@ LLINT_SLOW_PATH_DECL(slow_path_resolve_with_base)
{
LLINT_BEGIN();
ResolveOperations* operations = pc[4].u.resolveOperations;
bool willReify = operations->isEmpty();
JSValue result = JSScope::resolveWithBase(exec, exec->codeBlock()->identifier(pc[3].u.operand), &LLINT_OP(1), operations, pc[5].u.putToBaseOperation);
if (willReify) {
CodeBlock::Locker locker(exec->codeBlock()->m_lock);
operations->m_ready = true;
}
LLINT_CHECK_EXCEPTION();
LLINT_OP(2) = result;
LLINT_PROFILE_VALUE(op_resolve_with_base, result);
......@@ -873,7 +896,12 @@ LLINT_SLOW_PATH_DECL(slow_path_resolve_with_this)
{
LLINT_BEGIN();
ResolveOperations* operations = pc[4].u.resolveOperations;
bool willReify = operations->isEmpty();
JSValue result = JSScope::resolveWithThis(exec, exec->codeBlock()->identifier(pc[3].u.operand), &LLINT_OP(1), operations);
if (willReify) {
CodeBlock::Locker locker(exec->codeBlock()->m_lock);
operations->m_ready = true;
}
LLINT_CHECK_EXCEPTION();
LLINT_OP(2) = result;
LLINT_PROFILE_VALUE(op_resolve_with_this, result);
......@@ -911,6 +939,8 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id)
if (!structure->isUncacheableDictionary()
&& !structure->typeInfo().prohibitsPropertyCaching()) {
CodeBlock::Locker locker(codeBlock->m_lock);
pc[4].u.structure.set(
vm, codeBlock->ownerExecutable(), structure);
if (isInlineOffset(slot.cachedOffset())) {
......@@ -976,6 +1006,8 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id)
&& baseCell == slot.base()) {
if (slot.type() == PutPropertySlot::NewProperty) {
CodeBlock::Locker locker(codeBlock->m_lock);
if (!structure->isDictionary() && structure->previousID()->outOfLineCapacity() == structure->outOfLineCapacity()) {
ASSERT(structure->previousID()->transitionWatchpointSetHasBeenInvalidated());
......@@ -1401,9 +1433,12 @@ inline SlowPathReturnType setUpCall(ExecState* execCallee, Instruction* pc, Code
}
if (!LLINT_ALWAYS_ACCESS_SLOW && callLinkInfo) {
ExecState* execCaller = execCallee->callerFrame();
CodeBlock::Locker locker(execCaller->codeBlock()->m_lock);
if (callLinkInfo->isOnList())
callLinkInfo->remove();
ExecState* execCaller = execCallee->callerFrame();
callLinkInfo->callee.set(vm, execCaller->codeBlock()->ownerExecutable(), callee);
callLinkInfo->lastSeenCallee.set(vm, execCaller->codeBlock()->ownerExecutable(), callee);
callLinkInfo->machineCodeTarget = codePtr;
......
......@@ -196,7 +196,7 @@ static bool executeResolveOperations(CallFrame* callFrame, JSScope* scope, const
}
}
template <JSScope::LookupMode mode, JSScope::ReturnValues returnValues> JSObject* JSScope::resolveContainingScopeInternal(CallFrame* callFrame, const Identifier& identifier, PropertySlot& slot, Vector<ResolveOperation>* operations, PutToBaseOperation* putToBaseOperation, bool )
template <JSScope::LookupMode mode, JSScope::ReturnValues returnValues> JSObject* JSScope::resolveContainingScopeInternal(CallFrame* callFrame, const Identifier& identifier, PropertySlot& slot, ResolveOperations* operations, PutToBaseOperation* putToBaseOperation, bool )
{
JSScope* scope = callFrame->scope();
ASSERT(scope);
......@@ -301,6 +301,8 @@ template <JSScope::LookupMode mode, JSScope::ReturnValues returnValues> JSObject
operations->append(ResolveOperation::checkForDynamicEntriesBeforeGlobalScope());
if (putToBaseOperation) {
CodeBlock::Locker locker(callFrame->codeBlock()->m_lock);
putToBaseOperation->m_isDynamic = requiresDynamicChecks;
putToBaseOperation->m_kind = PutToBaseOperation::GlobalPropertyPut;
putToBaseOperation->m_structure.set(callFrame->vm(), callFrame->codeBlock()->ownerExecutable(), globalObject->structure());
......@@ -345,6 +347,8 @@ template <JSScope::LookupMode mode, JSScope::ReturnValues returnValues> JSObject
goto fail;
if (putToBaseOperation) {
CodeBlock::Locker locker(callFrame->codeBlock()->m_lock);
putToBaseOperation->m_kind = entry.isReadOnly() ? PutToBaseOperation::Readonly : PutToBaseOperation::VariablePut;
putToBaseOperation->m_structure.set(callFrame->vm(), callFrame->codeBlock()->ownerExecutable(), callFrame->lexicalGlobalObject()->activationStructure());
putToBaseOperation->m_offset = entry.getIndex();
......@@ -421,7 +425,7 @@ template <JSScope::LookupMode mode, JSScope::ReturnValues returnValues> JSObject
return 0;
}
template <JSScope::ReturnValues returnValues> JSObject* JSScope::resolveContainingScope(CallFrame* callFrame, const Identifier& identifier, PropertySlot& slot, Vector<ResolveOperation>* operations, PutToBaseOperation* putToBaseOperation, bool isStrict)
template <JSScope::ReturnValues returnValues> JSObject* JSScope::resolveContainingScope(CallFrame* callFrame, const Identifier& identifier, PropertySlot& slot, ResolveOperations* operations, PutToBaseOperation* putToBaseOperation, bool isStrict)
{
if (operations->size())
return resolveContainingScopeInternal<KnownResolve, returnValues>(callFrame, identifier, slot, operations, putToBaseOperation, isStrict);
......@@ -607,6 +611,8 @@ void JSScope::resolvePut(CallFrame* callFrame, JSValue base, const Identifier& p
if (slot.base() != baseObject)
return;
ASSERT(!baseObject->hasInlineStorage());
CodeBlock::Locker locker(callFrame->codeBlock()->m_lock);
operation->m_structure.set(callFrame->vm(), callFrame->codeBlock()->ownerExecutable(), baseObject->structure());
setPutPropertyAccessOffset(operation, slot.cachedOffset());
return;
......
2013-04-17 Filip Pizlo <fpizlo@apple.com>
fourthTier: all inline caches should be thread-safe enough to allow a concurrent compilation thread to read them safely
https://bugs.webkit.org/show_bug.cgi?id=114762
Reviewed by Mark Hahnenberg.
Implemented a new spinlock that is optimized for compactness by using just a byte.
This will be useful as we start using fine-grained locking in a bunch of places.
At some point I'll make these byte-sized spinlocks into adaptive mutexes, but for
now I think it's fine to do the evil thing and use spinning, particularly since we
only use them for short critical sections.
* WTF.xcodeproj/project.pbxproj:
* wtf/Atomics.h:
(WTF):
(WTF::weakCompareAndSwap):
* wtf/ByteSpinLock.h: Added.
(WTF):
(ByteSpinLock):
(WTF::ByteSpinLock::ByteSpinLock):
(WTF::ByteSpinLock::lock):
(WTF::ByteSpinLock::unlock):
(WTF::ByteSpinLock::isHeld):
* wtf/ThreadingPrimitives.h:
(WTF::pauseBriefly):
(WTF):
2013-04-12 Filip Pizlo <fpizlo@apple.com>
fourthTier: FTL should have OSR exit
......
......@@ -35,10 +35,10 @@
143F61201565F0F900DB514A /* RAMSize.h in Headers */ = {isa = PBXBuildFile; fileRef = 143F611E1565F0F900DB514A /* RAMSize.h */; settings = {ATTRIBUTES = (); }; };
1469419216EAAF6D0024E146 /* RunLoopTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 1469419016EAAF6D0024E146 /* RunLoopTimer.h */; settings = {ATTRIBUTES = (); }; };
1469419316EAAF6D0024E146 /* RunLoopTimerCF.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1469419116EAAF6D0024E146 /* RunLoopTimerCF.cpp */; };
1469419616EAAFF80024E146 /* SchedulePair.h in Headers */ = {isa = PBXBuildFile; fileRef = 1469419416EAAFF80024E146 /* SchedulePair.h */; settings = {ATTRIBUTES = (); }; };
1469419616EAAFF80024E146 /* SchedulePair.h in Headers */ = {isa = PBXBuildFile; fileRef = 1469419416EAAFF80024E146 /* SchedulePair.h */; settings = {ATTRIBUTES = (Private, ); }; };
1469419716EAAFF80024E146 /* SchedulePairMac.mm in Sources */ = {isa = PBXBuildFile; fileRef = 1469419516EAAFF80024E146 /* SchedulePairMac.mm */; };
1469419916EAB0410024E146 /* SchedulePairCF.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1469419816EAB0410024E146 /* SchedulePairCF.cpp */; };
1469419C16EAB10A0024E146 /* AutodrainedPool.h in Headers */ = {isa = PBXBuildFile; fileRef = 1469419A16EAB10A0024E146 /* AutodrainedPool.h */; settings = {ATTRIBUTES = (); }; };
1469419C16EAB10A0024E146 /* AutodrainedPool.h in Headers */ = {isa = PBXBuildFile; fileRef = 1469419A16EAB10A0024E146 /* AutodrainedPool.h */; settings = {ATTRIBUTES = (Private, ); }; };
1469419D16EAB10A0024E146 /* AutodrainedPoolMac.mm in Sources */ = {isa = PBXBuildFile; fileRef = 1469419B16EAB10A0024E146 /* AutodrainedPoolMac.mm */; };
149EF16316BBFE0D000A4331 /* TriState.h in Headers */ = {isa = PBXBuildFile; fileRef = 149EF16216BBFE0D000A4331 /* TriState.h */; settings = {ATTRIBUTES = (); }; };
14F3B0F715E45E4600210069 /* SaturatedArithmetic.h in Headers */ = {isa = PBXBuildFile; fileRef = 14F3B0F615E45E4600210069 /* SaturatedArithmetic.h */; settings = {ATTRIBUTES = (); }; };
......@@ -295,6 +295,7 @@
0FD81AC4154FB22E00983E72 /* FastBitVector.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FastBitVector.h; sourceTree = "<group>"; };
0FDDBFA51666DFA300C55FEF /* StringPrintStream.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StringPrintStream.cpp; sourceTree = "<group>"; };
0FDDBFA61666DFA300C55FEF /* StringPrintStream.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StringPrintStream.h; sourceTree = "<group>"; };
0FEC3EE4171B834700FDAC8D /* ByteSpinLock.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = ByteSpinLock.h; sourceTree = "<group>"; };
143F611D1565F0F900DB514A /* RAMSize.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = RAMSize.cpp; sourceTree = "<group>"; };
143F611E1565F0F900DB514A /* RAMSize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RAMSize.h; sourceTree = "<group>"; };
1469419016EAAF6D0024E146 /* RunLoopTimer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RunLoopTimer.h; sourceTree = "<group>"; };
......@@ -626,6 +627,7 @@
A8A47266151A825A004123FF /* BoundsCheckedPointer.h */,
A8A47267151A825A004123FF /* BumpPointerAllocator.h */,
EB95E1EF161A72410089A2F5 /* ByteOrder.h */,
0FEC3EE4171B834700FDAC8D /* ByteSpinLock.h */,
A8A4726A151A825A004123FF /* CheckedArithmetic.h */,
A8A4726B151A825A004123FF /* CheckedBoolean.h */,
0FC4EDE51696149600F65041 /* CommaPrinter.h */,
......
......@@ -270,6 +270,47 @@ inline void memoryBarrierBeforeUnlock() { compilerFence(); }
#endif
inline bool weakCompareAndSwap(uint8_t* location, uint8_t expected, uint8_t newValue)
{
#if ENABLE(COMPARE_AND_SWAP)
#if CPU(X86) || CPU(X86_64)
unsigned char result;
asm volatile(
"lock; cmpxchgb %3, %2\n\t"
"sete %1"
: "+a"(expected), "=q"(result), "+m"(*location)
: "r"(newValue)
: "memory"
);
return result;
#else
uintptr_t locationValue = bitwise_cast<uintptr_t>(location);
uintptr_t alignedLocationValue = locationValue & ~(sizeof(unsigned) - 1);
uintptr_t locationOffset = locationValue - alignedLocationValue;
ASSERT(locationOffset < sizeof(unsigned));
unsigned* alignedLocation = bitwise_cast<unsigned*>(alignedLocationValue);
// Make sure that this load is always issued and never optimized away.
unsigned oldAlignedValue = *const_cast<volatile unsigned*>(alignedLocation);