Commit c14eb7d0 authored by oliver@apple.com

fourthTier: value profiles and array profiles should be thread-safe enough to be accessible in a concurrent compilation thread
https://bugs.webkit.org/show_bug.cgi?id=114906

Source/JavaScriptCore:

Reviewed by Oliver Hunt.

This introduces thread safety to value profiles, array profiles, and
array allocation profiles.

We already have three separate operations that happen on profiles:
(1) writing, which the JIT, LLInt, and OSR exit do; (2) updating,
which happens during GC, from OSR entry slow-paths, and in the DFG;
and (3) reading, which happens in the DFG. For example, the JIT/LLInt
and OSR exit write to ValueProfile::m_buckets, which gets synthesized
into ValueProfile::m_prediction (and other fields) during update, and
the latter gets read by the DFG. Note that (2) must also happen in
the DFG since only the DFG knows which code blocks it will inline,
and those blocks' profiles may not have otherwise been updated via
any other mechanism.

I refer to these three operations as writing, updating, and reading.

Consequently, both profile updating and profile reading may happen
asynchronously, if the JIT is asynchronous.
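
To make the three operations concrete, here is a deliberately
simplified, self-contained sketch of the write -> update -> read flow.
This is not the JSC code: the bucket and prediction types are plain
integers, and the merge is a bitwise OR standing in for SpeculatedType
merging.

    #include <cstdint>

    // Toy stand-in for ValueProfile. The real buckets hold encoded JSValues
    // and the real prediction is a SpeculatedType; integers keep it short.
    struct ToyValueProfile {
        static const unsigned numberOfBuckets = 8;
        uint64_t m_buckets[numberOfBuckets] = { }; // (1) written by JIT/LLInt/OSR exit
        uint64_t m_prediction = 0;                 // (2) synthesized during update

        // (1) Writing: a single plain store, main thread only, no lock.
        void observe(uint64_t encodedValue) { m_buckets[0] = encodedValue; }

        // (2) Updating: fold whatever is in the buckets into the prediction.
        // In the real code this happens during GC, on OSR entry slow paths,
        // and in the DFG, with the CodeBlock's lock held (see below).
        void computeUpdatedPrediction()
        {
            for (unsigned i = 0; i < numberOfBuckets; ++i) {
                m_prediction |= m_buckets[i];
                m_buckets[i] = 0;
            }
        }

        // (3) Reading: the DFG consumes the prediction, also under the lock.
        uint64_t prediction() const { return m_prediction; }
    };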

The locking protocol for profiles works as follows (a toy model in
code follows the list):

- Writing does not require locking, but is only allowed on the main
  thread. We require that these fields can be stored atomically by
  the profiling code, even without locks. For value profiles, this
  only works on 64-bit platforms, currently. For array profiles,
  which consist of multiple separate fields, this means that an
  asynchronous update of the profile may see slight inconsistencies
  (like a structure that doesn't quite match the array modes bits),
  but these should be harmless: at worst, the DFG will specialize
  too much and we'll have OSR exits.

- Updating a value profile requires holding a lock, but the updating
  code must assume that the fields written by the profiling code in
  the JIT/LLInt may be written to concurrently, without locking.

- Reading a value profile requires holding a lock.
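
A toy model of this discipline, with std::mutex standing in for the
CodeBlock's lock (the real lock is the CodeBlockLock typedef added by
this patch) and a single integer standing in for a profile bucket:

    #include <atomic>
    #include <cstdint>
    #include <mutex>
    #include <thread>

    struct ToyProfile {
        std::atomic<uint64_t> m_bucket { 0 }; // written racily by the "main thread"
        uint64_t m_prediction { 0 };          // only touched with the lock held
    };

    int main()
    {
        std::mutex codeBlockLock; // stand-in for CodeBlock::m_lock
        ToyProfile profile;

        // (1) Writing: the main-thread side takes no lock; each write is a
        // single store.
        std::thread mutatorThread([&] {
            for (uint64_t i = 1; i <= 1000; ++i)
                profile.m_bucket.store(i, std::memory_order_relaxed);
        });

        // (2) + (3) Updating and reading: the compilation thread takes the
        // lock before folding the bucket into the prediction and reading it
        // back. The lock serializes updaters and readers (e.g. the DFG and
        // OSR-entry slow paths) against each other, not against the writer.
        std::thread compilationThread([&] {
            for (int i = 0; i < 1000; ++i) {
                std::lock_guard<std::mutex> locker(codeBlockLock);
                profile.m_prediction |= profile.m_bucket.load(std::memory_order_relaxed);
                uint64_t prediction = profile.m_prediction; // what the DFG would speculate on
                (void)prediction;
            }
        });

        mutatorThread.join();
        compilationThread.join();
        return 0;
    }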

The one major exception to these rules is the ArrayAllocationProfile,
which requires no locking. We do this because it's used so often and
in places where we don't necessarily have access to the owning
CodeBlock, so if we did want it to be locked it would have to have
its own lock. Also, I believe that it is sound to just make this
profile racy and not worry about locking at all. All that was needed
was a few changes to ensure that we explicitly read the raced-over
fields only once, as in the example below.
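
For example, selectIndexingType() (see the ArrayAllocationProfile.h
hunk below) now copies m_lastArray into a local and only looks at that
local, so the null check and the dereference cannot observe two
different values:

    IndexingType selectIndexingType()
    {
        JSArray* lastArray = m_lastArray; // read the raced-over field exactly once
        if (lastArray && UNLIKELY(lastArray->structure()->indexingType() != m_currentIndexingType))
            updateIndexingType();
        return m_currentIndexingType;
    }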

Two additional interesting things in this change:

- To make it easy to see which profile methods require locking, they
  take a const CodeBlockLocker& as an argument. I saw this idiom used
  in LLVM to identify which methods require which locks to be held,
  and I quite like it; a toy example follows this list.

- Lazy operand value profiles, which are created lazily and at any
  time, require the CodeBlockLock to be held when they are being
  created. Writes to them are lockless and main-thread-only, but as
  with other profiles, updates and reads require locking.
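
Here is a self-contained toy (not the JSC classes) showing the
locker-as-argument idiom: methods that may only be called with the
lock held take the locker by const reference, so the requirement is
visible, and largely enforced, at every call site.

    #include <cassert>
    #include <mutex>

    class ToyLocker {
    public:
        explicit ToyLocker(std::mutex& lock) : m_lock(lock) { m_lock.lock(); }
        ~ToyLocker() { m_lock.unlock(); }
    private:
        std::mutex& m_lock;
    };

    class LockedToyProfile {
    public:
        // Requires the owning lock: the unused locker argument documents that
        // and forces callers to have a locker in scope.
        void update(const ToyLocker&, int sample) { m_prediction |= sample; }
        int prediction(const ToyLocker&) const { return m_prediction; }
    private:
        int m_prediction { 0 };
    };

    int main()
    {
        std::mutex lock;
        LockedToyProfile profile;
        {
            ToyLocker locker(lock);
            profile.update(locker, 4);
            assert(profile.prediction(locker) == 4);
        }
        // Calling profile.prediction(...) here, with no locker in scope,
        // would not compile.
        return 0;
    }

In the patch itself the same shape appears as, for example,
ArrayProfile::structureIsPolymorphic(const CodeBlockLocker&).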

* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/ArrayAllocationProfile.cpp:
(JSC::ArrayAllocationProfile::updateIndexingType):
* bytecode/ArrayAllocationProfile.h:
(JSC::ArrayAllocationProfile::selectIndexingType):
* bytecode/ArrayProfile.cpp:
(JSC::ArrayProfile::computeUpdatedPrediction):
(JSC::ArrayProfile::briefDescription):
* bytecode/ArrayProfile.h:
(ArrayProfile):
(JSC::ArrayProfile::expectedStructure):
(JSC::ArrayProfile::structureIsPolymorphic):
(JSC::ArrayProfile::hasDefiniteStructure):
(JSC::ArrayProfile::observedArrayModes):
(JSC::ArrayProfile::mayInterceptIndexedAccesses):
(JSC::ArrayProfile::mayStoreToHole):
(JSC::ArrayProfile::outOfBounds):
(JSC::ArrayProfile::usesOriginalArrayStructures):
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dumpValueProfiling):
(JSC::CodeBlock::dumpArrayProfiling):
(JSC::CodeBlock::updateAllPredictionsAndCountLiveness):
(JSC::CodeBlock::updateAllArrayPredictions):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::valueProfilePredictionForBytecodeOffset):
(JSC::CodeBlock::updateAllPredictionsAndCheckIfShouldOptimizeNow):
(CodeBlock):
* bytecode/CodeBlockLock.h: Added.
(JSC):
* bytecode/GetByIdStatus.cpp:
(JSC::GetByIdStatus::computeFor):
* bytecode/LazyOperandValueProfile.cpp:
(JSC::CompressedLazyOperandValueProfileHolder::computeUpdatedPredictions):
(JSC::CompressedLazyOperandValueProfileHolder::add):
(JSC::LazyOperandValueProfileParser::LazyOperandValueProfileParser):
(JSC::LazyOperandValueProfileParser::~LazyOperandValueProfileParser):
(JSC):
(JSC::LazyOperandValueProfileParser::initialize):
(JSC::LazyOperandValueProfileParser::prediction):
* bytecode/LazyOperandValueProfile.h:
(CompressedLazyOperandValueProfileHolder):
(LazyOperandValueProfileParser):
* bytecode/MethodOfGettingAValueProfile.cpp:
(JSC::MethodOfGettingAValueProfile::getSpecFailBucket):
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeFor):
* bytecode/ResolveGlobalStatus.cpp:
(JSC::ResolveGlobalStatus::computeFor):
* bytecode/ValueProfile.h:
(JSC::ValueProfileBase::briefDescription):
(ValueProfileBase):
(JSC::ValueProfileBase::computeUpdatedPrediction):
* dfg/DFGArrayMode.cpp:
(JSC::DFG::ArrayMode::fromObserved):
* dfg/DFGArrayMode.h:
(ArrayMode):
(JSC::DFG::ArrayMode::withProfile):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
(JSC::DFG::ByteCodeParser::getPredictionWithoutOSRExit):
(JSC::DFG::ByteCodeParser::getArrayMode):
(JSC::DFG::ByteCodeParser::getArrayModeAndEmitChecks):
(JSC::DFG::ByteCodeParser::parseResolveOperations):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGOSRExitPreparation.cpp:
(JSC::DFG::prepareCodeOriginForOSRExit):
* dfg/DFGPredictionInjectionPhase.cpp:
(JSC::DFG::PredictionInjectionPhase::run):
* jit/JITInlines.h:
(JSC::JIT::chooseArrayMode):
* jit/JITStubs.cpp:
(JSC::tryCachePutByID):
(JSC::tryCacheGetByID):
(JSC::DEFINE_STUB_FUNCTION):
(JSC::lazyLinkFor):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::setUpCall):
* profiler/ProfilerBytecodeSequence.cpp:
(JSC::Profiler::BytecodeSequence::BytecodeSequence):
* runtime/JSScope.cpp:
(JSC::JSScope::resolveContainingScopeInternal):
(JSC::JSScope::resolvePut):

Source/WTF:

Reviewed by Oliver Hunt.

Add the ability to abstract away whether or not the CodeBlock requires
locking at all, since some platforms may not support byte spin-locking
and/or may not want it if they turn off the concurrent JIT.
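
Based on the functions listed below (WTF::NoLock::lock, unlock,
isHeld) and on the NoLockLocker typedef that the new CodeBlockLock.h
relies on, wtf/NoLock.h is roughly the following sketch; the
Locker<NoLock> typedef and the isHeld() return value are assumptions
rather than verified details:

    // Sketch of wtf/NoLock.h (include guards omitted): a lock whose
    // operations do nothing, for builds that disable the concurrent JIT
    // and therefore do not need ByteSpinLock.
    #include <wtf/Locker.h>

    namespace WTF {

    class NoLock {
    public:
        void lock() { }
        void unlock() { }
        bool isHeld() const { return false; } // assumed return value
    };

    typedef Locker<NoLock> NoLockLocker; // assumed; CodeBlockLock.h uses this name

    } // namespace WTF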

* WTF.xcodeproj/project.pbxproj:
* wtf/ByteSpinLock.h:
* wtf/NoLock.h: Added.
(WTF):
(NoLock):
(WTF::NoLock::lock):
(WTF::NoLock::unlock):
(WTF::NoLock::isHeld):
* wtf/Platform.h:

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@153123 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 0c1b13e9
2013-04-20 Filip Pizlo <fpizlo@apple.com>

JavaScriptCore.xcodeproj/project.pbxproj:
@@ -73,6 +73,7 @@
0F0B83B914BCF95F00885B4F /* CallReturnOffsetToBytecodeOffset.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F0CD4C215F1A6070032F1C0 /* PutDirectIndexMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0CD4C015F1A6040032F1C0 /* PutDirectIndexMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F0CD4C415F6B6BB0032F1C0 /* SparseArrayValueMap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */; };
0F0D85B21723455400338210 /* CodeBlockLock.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0D85B11723455100338210 /* CodeBlockLock.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F0FC45A14BD15F500B81154 /* LLIntCallLinkInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
0F13912916771C33009CCB07 /* ProfilerBytecodeSequence.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F13912416771C30009CCB07 /* ProfilerBytecodeSequence.cpp */; };
0F13912A16771C36009CCB07 /* ProfilerBytecodeSequence.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F13912516771C30009CCB07 /* ProfilerBytecodeSequence.h */; settings = {ATTRIBUTES = (Private, ); }; };
@@ -1042,6 +1043,7 @@
0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallReturnOffsetToBytecodeOffset.h; sourceTree = "<group>"; };
0F0CD4C015F1A6040032F1C0 /* PutDirectIndexMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PutDirectIndexMode.h; sourceTree = "<group>"; };
0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SparseArrayValueMap.cpp; sourceTree = "<group>"; };
0F0D85B11723455100338210 /* CodeBlockLock.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = CodeBlockLock.h; sourceTree = "<group>"; };
0F0FC45814BD15F100B81154 /* LLIntCallLinkInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LLIntCallLinkInfo.h; sourceTree = "<group>"; };
0F13912416771C30009CCB07 /* ProfilerBytecodeSequence.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = ProfilerBytecodeSequence.cpp; path = profiler/ProfilerBytecodeSequence.cpp; sourceTree = "<group>"; };
0F13912516771C30009CCB07 /* ProfilerBytecodeSequence.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = ProfilerBytecodeSequence.h; path = profiler/ProfilerBytecodeSequence.h; sourceTree = "<group>"; };
@@ -3076,6 +3078,7 @@
969A07910ED1D3AE00F1F681 /* CodeBlock.h */,
0F8F943D1667632D00D61971 /* CodeBlockHash.cpp */,
0F8F943E1667632D00D61971 /* CodeBlockHash.h */,
0F0D85B11723455100338210 /* CodeBlockLock.h */,
0F96EBB116676EF4008BADE3 /* CodeBlockWithJITType.h */,
0F8F9445166764EE00D61971 /* CodeOrigin.cpp */,
0FBD7E671447998F00481315 /* CodeOrigin.h */,
@@ -3173,6 +3176,7 @@
0F8335B81639C1EA001443B5 /* ArrayAllocationProfile.h in Headers */,
BC18C3E60E16F5CD00B34460 /* ArrayConstructor.h in Headers */,
0FB7F39515ED8E4600F167B2 /* ArrayConventions.h in Headers */,
0F0D85B21723455400338210 /* CodeBlockLock.h in Headers */,
0F63945515D07057006A597C /* ArrayProfile.h in Headers */,
BC18C3E70E16F5CD00B34460 /* ArrayPrototype.h in Headers */,
BC18C5240E16FC8A00B34460 /* ArrayPrototype.lut.h in Headers */,

bytecode/ArrayAllocationProfile.cpp:
/*
* Copyright (C) 2012 Apple Inc. All rights reserved.
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -32,9 +32,24 @@ namespace JSC {
void ArrayAllocationProfile::updateIndexingType()
{
if (!m_lastArray)
// This is awkwardly racy but totally sound even when executed concurrently. The
// worst cases go something like this:
//
// - Two threads race to execute this code; one of them succeeds in updating the
// m_currentIndexingType and the other either updates it again, or sees a null
// m_lastArray; if it updates it again then at worst it will cause the profile
// to "forget" some array. That's still sound, since we don't promise that
// this profile is a reflection of any kind of truth.
//
// - A concurrent thread reads m_lastArray, but that array is now dead. While
// it's possible for that array to no longer be reachable, it cannot actually
// be freed, since we require the GC to wait until all concurrent JITing
// finishes.
JSArray* lastArray = m_lastArray;
if (!lastArray)
return;
m_currentIndexingType = leastUpperBoundOfIndexingTypes(m_currentIndexingType, m_lastArray->structure()->indexingType());
m_currentIndexingType = leastUpperBoundOfIndexingTypes(m_currentIndexingType, lastArray->structure()->indexingType());
m_lastArray = 0;
}

bytecode/ArrayAllocationProfile.h:
/*
* Copyright (C) 2012 Apple Inc. All rights reserved.
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -41,7 +41,8 @@ public:
IndexingType selectIndexingType()
{
if (m_lastArray && UNLIKELY(m_lastArray->structure()->indexingType() != m_currentIndexingType))
JSArray* lastArray = m_lastArray;
if (lastArray && UNLIKELY(lastArray->structure()->indexingType() != m_currentIndexingType))
updateIndexingType();
return m_currentIndexingType;
}

bytecode/ArrayProfile.cpp:
/*
* Copyright (C) 2012 Apple Inc. All rights reserved.
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -74,53 +74,35 @@ void dumpArrayModes(PrintStream& out, ArrayModes arrayModes)
out.print("ArrayWithSlowPutArrayStorage");
}
ArrayModes ArrayProfile::updatedObservedArrayModes() const
void ArrayProfile::computeUpdatedPrediction(const CodeBlockLocker& locker, CodeBlock* codeBlock, OperationInProgress operation)
{
if (m_lastSeenStructure)
return m_observedArrayModes | arrayModeFromStructure(m_lastSeenStructure);
return m_observedArrayModes;
}
void ArrayProfile::computeUpdatedPrediction(CodeBlock* codeBlock, OperationInProgress operation)
{
const bool verbose = false;
if (m_lastSeenStructure) {
m_observedArrayModes |= arrayModeFromStructure(m_lastSeenStructure);
m_mayInterceptIndexedAccesses |=
m_lastSeenStructure->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero();
if (!codeBlock->globalObject()->isOriginalArrayStructure(m_lastSeenStructure))
m_usesOriginalArrayStructures = false;
if (!structureIsPolymorphic()) {
if (!structureIsPolymorphic(locker)) {
if (!m_expectedStructure)
m_expectedStructure = m_lastSeenStructure;
else if (m_expectedStructure != m_lastSeenStructure) {
if (verbose)
dataLog(*codeBlock, " bc#", m_bytecodeOffset, ": making structure polymorphic because ", RawPointer(m_expectedStructure), " (", m_expectedStructure->classInfo()->className, ") != ", RawPointer(m_lastSeenStructure), " (", m_lastSeenStructure->classInfo()->className, ")\n");
else if (m_expectedStructure != m_lastSeenStructure)
m_expectedStructure = polymorphicStructure();
}
}
m_lastSeenStructure = 0;
}
if (hasTwoOrMoreBitsSet(m_observedArrayModes)) {
if (verbose)
dataLog(*codeBlock, " bc#", m_bytecodeOffset, ": making structure polymorphic because two or more bits are set in m_observedArrayModes\n");
if (hasTwoOrMoreBitsSet(m_observedArrayModes))
m_expectedStructure = polymorphicStructure();
}
if (operation == Collection
&& expectedStructure()
&& !Heap::isMarked(m_expectedStructure)) {
if (verbose)
dataLog(*codeBlock, " bc#", m_bytecodeOffset, ": making structure during GC\n");
&& expectedStructure(locker)
&& !Heap::isMarked(m_expectedStructure))
m_expectedStructure = polymorphicStructure();
}
}
CString ArrayProfile::briefDescription(CodeBlock* codeBlock)
CString ArrayProfile::briefDescription(const CodeBlockLocker& locker, CodeBlock* codeBlock)
{
computeUpdatedPrediction(codeBlock);
computeUpdatedPrediction(locker, codeBlock);
StringPrintStream out;
@@ -133,7 +115,7 @@ CString ArrayProfile::briefDescription(CodeBlock* codeBlock)
hasPrinted = true;
}
if (structureIsPolymorphic()) {
if (structureIsPolymorphic(locker)) {
if (hasPrinted)
out.print(", ");
out.print("struct = TOP");

bytecode/ArrayProfile.h:
/*
* Copyright (C) 2012 Apple Inc. All rights reserved.
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -26,6 +26,7 @@
#ifndef ArrayProfile_h
#define ArrayProfile_h
#include "CodeBlockLock.h"
#include "JSArray.h"
#include "Structure.h"
#include <wtf/HashMap.h>
@@ -163,32 +164,31 @@ public:
m_lastSeenStructure = structure;
}
void computeUpdatedPrediction(CodeBlock*, OperationInProgress = NoOperation);
void computeUpdatedPrediction(const CodeBlockLocker&, CodeBlock*, OperationInProgress = NoOperation);
Structure* expectedStructure() const
Structure* expectedStructure(const CodeBlockLocker& locker) const
{
if (structureIsPolymorphic())
if (structureIsPolymorphic(locker))
return 0;
return m_expectedStructure;
}
bool structureIsPolymorphic() const
bool structureIsPolymorphic(const CodeBlockLocker&) const
{
return m_expectedStructure == polymorphicStructure();
}
bool hasDefiniteStructure() const
bool hasDefiniteStructure(const CodeBlockLocker& locker) const
{
return !structureIsPolymorphic() && m_expectedStructure;
return !structureIsPolymorphic(locker) && m_expectedStructure;
}
ArrayModes observedArrayModes() const { return m_observedArrayModes; }
ArrayModes updatedObservedArrayModes() const; // Computes the observed array modes without updating the profile.
bool mayInterceptIndexedAccesses() const { return m_mayInterceptIndexedAccesses; }
ArrayModes observedArrayModes(const CodeBlockLocker&) const { return m_observedArrayModes; }
bool mayInterceptIndexedAccesses(const CodeBlockLocker&) const { return m_mayInterceptIndexedAccesses; }
bool mayStoreToHole() const { return m_mayStoreToHole; }
bool outOfBounds() const { return m_outOfBounds; }
bool mayStoreToHole(const CodeBlockLocker&) const { return m_mayStoreToHole; }
bool outOfBounds(const CodeBlockLocker&) const { return m_outOfBounds; }
bool usesOriginalArrayStructures() const { return m_usesOriginalArrayStructures; }
bool usesOriginalArrayStructures(const CodeBlockLocker&) const { return m_usesOriginalArrayStructures; }
CString briefDescription(CodeBlock*);
CString briefDescription(const CodeBlockLocker&, CodeBlock*);
private:
friend class LLIntOffsetsExtractor;

bytecode/CallLinkStatus.cpp:
@@ -97,7 +97,7 @@ CallLinkStatus CallLinkStatus::computeFromLLInt(CodeBlock* profiledBlock, unsign
CallLinkStatus CallLinkStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex)
{
CodeBlock::Locker locker(profiledBlock->m_lock);
CodeBlockLocker locker(profiledBlock->m_lock);
UNUSED_PARAM(profiledBlock);
UNUSED_PARAM(bytecodeIndex);

bytecode/CodeBlock.cpp:
@@ -661,9 +661,11 @@ void CodeBlock::beginDumpProfiling(PrintStream& out, bool& hasPrintedProfiling)
void CodeBlock::dumpValueProfiling(PrintStream& out, const Instruction*& it, bool& hasPrintedProfiling)
{
CodeBlockLocker locker(m_lock);
++it;
#if ENABLE(VALUE_PROFILER)
CString description = it->u.profile->briefDescription();
CString description = it->u.profile->briefDescription(locker);
if (!description.length())
return;
beginDumpProfiling(out, hasPrintedProfiling);
@@ -676,9 +678,11 @@ void CodeBlock::dumpValueProfiling(PrintStream& out, const Instruction*& it, boo
void CodeBlock::dumpArrayProfiling(PrintStream& out, const Instruction*& it, bool& hasPrintedProfiling)
{
CodeBlockLocker locker(m_lock);
++it;
#if ENABLE(VALUE_PROFILER)
CString description = it->u.arrayProfile->briefDescription(this);
CString description = it->u.arrayProfile->briefDescription(locker, this);
if (!description.length())
return;
beginDumpProfiling(out, hasPrintedProfiling);
@@ -3145,6 +3149,8 @@ ArrayProfile* CodeBlock::getOrAddArrayProfile(unsigned bytecodeOffset)
void CodeBlock::updateAllPredictionsAndCountLiveness(
OperationInProgress operation, unsigned& numberOfLiveNonArgumentValueProfiles, unsigned& numberOfSamplesInProfiles)
{
CodeBlockLock locker(m_lock);
numberOfLiveNonArgumentValueProfiles = 0;
numberOfSamplesInProfiles = 0; // If this divided by ValueProfile::numberOfBuckets equals numberOfValueProfiles() then value profiles are full.
for (unsigned i = 0; i < totalNumberOfValueProfiles(); ++i) {
@@ -3154,16 +3160,16 @@ void CodeBlock::updateAllPredictionsAndCountLiveness(
numSamples = ValueProfile::numberOfBuckets; // We don't want profiles that are extremely hot to be given more weight.
numberOfSamplesInProfiles += numSamples;
if (profile->m_bytecodeOffset < 0) {
profile->computeUpdatedPrediction(operation);
profile->computeUpdatedPrediction(locker, operation);
continue;
}
if (profile->numberOfSamples() || profile->m_prediction != SpecNone)
numberOfLiveNonArgumentValueProfiles++;
profile->computeUpdatedPrediction(operation);
profile->computeUpdatedPrediction(locker, operation);
}
#if ENABLE(DFG_JIT)
m_lazyOperandValueProfiles.computeUpdatedPredictions(operation);
m_lazyOperandValueProfiles.computeUpdatedPredictions(locker, operation);
#endif
}
@@ -3175,8 +3181,10 @@ void CodeBlock::updateAllValueProfilePredictions(OperationInProgress operation)
void CodeBlock::updateAllArrayPredictions(OperationInProgress operation)
{
CodeBlockLock locker(m_lock);
for (unsigned i = m_arrayProfiles.size(); i--;)
m_arrayProfiles[i].computeUpdatedPrediction(this, operation);
m_arrayProfiles[i].computeUpdatedPrediction(locker, this, operation);
// Don't count these either, for similar reasons.
for (unsigned i = m_arrayAllocationProfiles.size(); i--;)

bytecode/CodeBlock.h:
@@ -36,6 +36,7 @@
#include "CallLinkInfo.h"
#include "CallReturnOffsetToBytecodeOffset.h"
#include "CodeBlockHash.h"
#include "CodeBlockLock.h"
#include "CodeOrigin.h"
#include "CodeType.h"
#include "CompactJITCodeMap.h"
@@ -69,7 +70,6 @@
#include "UnconditionalFinalizer.h"
#include "ValueProfile.h"
#include "Watchpoint.h"
#include <wtf/ByteSpinLock.h>
#include <wtf/RefCountedArray.h>
#include <wtf/FastAllocBase.h>
#include <wtf/PassOwnPtr.h>
@@ -485,9 +485,9 @@ namespace JSC {
bytecodeOffset].u.opcode)) - 1].u.profile == result);
return result;
}
SpeculatedType valueProfilePredictionForBytecodeOffset(int bytecodeOffset)
SpeculatedType valueProfilePredictionForBytecodeOffset(const CodeBlockLocker& locker, int bytecodeOffset)
{
return valueProfileForBytecodeOffset(bytecodeOffset)->computeUpdatedPrediction();
return valueProfileForBytecodeOffset(bytecodeOffset)->computeUpdatedPrediction(locker);
}
unsigned totalNumberOfValueProfiles()
@@ -902,7 +902,7 @@ namespace JSC {
void updateAllArrayPredictions(OperationInProgress = NoOperation);
void updateAllPredictions(OperationInProgress = NoOperation);
#else
bool shouldOptimizeNow() { return false; }
bool updateAllPredictionsAndCheckIfShouldOptimizeNow() { return false; }
void updateAllValueProfilePredictions(OperationInProgress = NoOperation) { }
void updateAllArrayPredictions(OperationInProgress = NoOperation) { }
void updateAllPredictions(OperationInProgress = NoOperation) { }
@@ -938,9 +938,7 @@ namespace JSC {
// Another exception to the rules is that the GC can do whatever it wants
// without holding any locks, because the GC is guaranteed to wait until any
// concurrent compilation threads finish what they're doing.
typedef ByteSpinLock Lock;
typedef ByteSpinLocker Locker;
Lock m_lock;
CodeBlockLock m_lock;
protected:
#if ENABLE(JIT)

bytecode/CodeBlockLock.h (new file):
/*
* Copyright (C) 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
* PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
* OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef CodeBlockLock_h
#define CodeBlockLock_h
#include <wtf/ByteSpinLock.h>
#include <wtf/NoLock.h>
namespace JSC {
#if ENABLE(CONCURRENT_JIT)
typedef ByteSpinLock CodeBlockLock;
typedef ByteSpinLocker CodeBlockLocker;
#else
typedef NoLock CodeBlockLock;
typedef NoLockLocker CodeBlockLocker;
#endif
} // namespace JSC
#endif // CodeBlockLock_h

bytecode/GetByIdStatus.cpp:
/*
* Copyright (C) 2012 Apple Inc. All rights reserved.
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -112,7 +112,7 @@ void GetByIdStatus::computeForChain(GetByIdStatus& result, CodeBlock* profiledBl
GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, unsigned bytecodeIndex, Identifier& ident)
{
CodeBlock::Locker locker(profiledBlock->m_lock);
CodeBlockLocker locker(profiledBlock->m_lock);
UNUSED_PARAM(profiledBlock);
UNUSED_PARAM(bytecodeIndex);

bytecode/LazyOperandValueProfile.cpp:
/*
* Copyright (C) 2012 Apple Inc. All rights reserved.
* Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -35,17 +35,17 @@ namespace JSC {
CompressedLazyOperandValueProfileHolder::CompressedLazyOperandValueProfileHolder() { }
CompressedLazyOperandValueProfileHolder::~CompressedLazyOperandValueProfileHolder() { }
void CompressedLazyOperandValueProfileHolder::computeUpdatedPredictions(OperationInProgress operation)
void CompressedLazyOperandValueProfileHolder::computeUpdatedPredictions(const CodeBlockLocker& locker, OperationInProgress operation)
{
if (!m_data)
return;
for (unsigned i = 0; i < m_data->size(); ++i)
m_data->at(i).computeUpdatedPrediction(operation);
m_data->at(i).computeUpdatedPrediction(locker, operation);
}
LazyOperandValueProfile* CompressedLazyOperandValueProfileHolder::add(
const LazyOperandValueProfileKey& key)
const CodeBlockLocker&, const LazyOperandValueProfileKey& key)
{
if (!m_data)
m_data = adoptPtr(new LazyOperandValueProfile::List());
@@ -60,20 +60,22 @@ LazyOperandValueProfile* CompressedLazyOperandValueProfileHolder::add(
return &m_data->last();
}
LazyOperandValueProfileParser::LazyOperandValueProfileParser(
CompressedLazyOperandValueProfileHolder& holder)
: m_holder(holder)
LazyOperandValueProfileParser::LazyOperandValueProfileParser() { }
LazyOperandValueProfileParser::~LazyOperandValueProfileParser() { }
void LazyOperandValueProfileParser::initialize(
const CodeBlockLocker&, CompressedLazyOperandValueProfileHolder& holder)
{
if (!m_holder.m_data)
ASSERT(m_map.isEmpty());
if (!holder.m_data)
return;
LazyOperandValueProfile::List& data = *m_holder.m_data;
LazyOperandValueProfile::List& data = *holder.m_data;
for (unsigned i = 0; i < data.size(); ++i)