Commit d2cdd316 authored by oliver@apple.com

fourthTier: add heuristics to reduce the likelihood of a trivially inlineable function being independently compiled by the concurrent JIT
https://bugs.webkit.org/show_bug.cgi?id=116557

Reviewed by Geoffrey Garen.

This introduces a fairly comprehensive mechanism for preventing trivially inlineable
functions from being compiled independently of all of the things into which they end
up being inlined.

The trick is CodeBlock::m_shouldAlwaysBeInlined, or SABI for short (that's what the
debug logging calls it). A SABI function is one that we currently believe should
never be DFG optimized because it should always be inlined into the functions that
call it. SABI follows "innocent until proven guilty": all functions start out SABI
and have SABI set to false if we see proof that that function may be called in some
possibly non-inlineable way. So long as a function is SABI, it will not tier up to
the DFG: cti_optimize will perpetually postpone its optimization. Because SABI has
such a severe effect, we make the burden of proof of guilt quite low. SABI gets
cleared if any of the following happen:

- You get called from native code (either through CallData or CachedCall).

- You get called from an eval, since eval code takes a long time to get DFG
  optimized.

- You get called from global code, since global code often doesn't tier up because
  it's run-once.

- You get called recursively, where recursion is detected by a stack walk of depth
  Options::maximumInliningDepth().

- You get called through an unlinked virtual call.

- You get called from DFG code, since if the caller was already DFG optimized and
  didn't inline you, then obviously you might not get inlined.

- You've tiered up to the baseline JIT and you get called from the interpreter.
  The idea is that this roughly ensures that you stay SABI only if you're called
  no more frequently than any of your callers.

- You get called from a code block that isn't a DFG candidate.

- You aren't an inlining candidate.

Most of the heuristics for SABI are in CodeBlock::noticeIncomingCall().
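
To make the mechanism concrete, here is a small self-contained toy model of the SABI
flow. This is illustrative C++ only, not WebKit code: the ToyCodeBlock class, its
fields, the thresholds, and the boolean caller checks are invented stand-ins for
CodeBlock::m_shouldAlwaysBeInlined, CodeBlock::noticeIncomingCall(), and the gate
added to cti_optimize in the patch below. A block starts out SABI, any disqualifying
incoming call clears the flag, and tier-up keeps getting postponed while the flag
remains set:

#include <cstdio>

// Toy stand-in for CodeBlock; only the pieces needed to model SABI are present.
struct ToyCodeBlock {
    bool shouldAlwaysBeInlined = true; // "innocent until proven guilty"
    bool tieredUp = false;
    int executionCount = 0;
    int optimizationThreshold = 100;

    // Simplified analogue of CodeBlock::noticeIncomingCall(): any evidence that a
    // call might not get inlined clears SABI.
    void noticeIncomingCall(bool callerIsInterpreter, bool callerIsFunctionCode,
                            bool callerIsDFGCandidate)
    {
        if (!shouldAlwaysBeInlined)
            return;
        if (callerIsInterpreter || !callerIsFunctionCode || !callerIsDFGCandidate)
            shouldAlwaysBeInlined = false;
    }

    // Simplified analogue of the gate added to cti_optimize: while SABI, keep
    // postponing optimization instead of starting a DFG compile.
    void maybeTierUp()
    {
        if (tieredUp || ++executionCount < optimizationThreshold)
            return;
        if (shouldAlwaysBeInlined) {
            optimizationThreshold *= 2; // postpone, in the spirit of optimizeAfterWarmUp()
            return;
        }
        tieredUp = true;
        std::puts("tiering up to the optimizing JIT");
    }
};

int main()
{
    ToyCodeBlock stillInlineable; // never sees a disqualifying call, so it stays SABI
    ToyCodeBlock calledFromLLInt; // one call from interpreter code clears SABI
    calledFromLLInt.noticeIncomingCall(true, true, true);
    for (int i = 0; i < 100000; ++i) {
        stillInlineable.maybeTierUp(); // perpetually postponed
        calledFromLLInt.maybeTierUp(); // tiers up once the threshold is reached
    }
    return 0;
}

In the real patch the disqualifying conditions are the ones listed above, and the
postponement is done by cti_optimize calling updateAllPredictions() and
optimizeAfterWarmUp() while m_shouldAlwaysBeInlined is still set.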

This is neutral on SunSpider and V8Spider, and appears to be a slight speed-up on
V8v7, which was previously adversely affected by concurrent compilation; in
particular, it is a speed-up on those V8v7 benchmarks that saw regressions from
concurrent compilation. I also confirmed that, for example on V8/richards, it
dramatically reduces the number of code blocks that get DFG compiled.

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dumpAssumingJITType):
(JSC::CodeBlock::CodeBlock):
(JSC::CodeBlock::linkIncomingCall):
(JSC):
(JSC::CodeBlock::noticeIncomingCall):
* bytecode/CodeBlock.h:
(CodeBlock):
* dfg/DFGCapabilities.h:
(JSC::DFG::mightInlineFunction):
(DFG):
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThread):
* dfg/DFGRepatch.cpp:
(JSC::DFG::dfgLinkFor):
* interpreter/Interpreter.cpp:
(JSC::Interpreter::executeCall):
(JSC::Interpreter::executeConstruct):
(JSC::Interpreter::prepareForRepeatCall):
* jit/JIT.cpp:
(JSC::JIT::privateCompile):
(JSC::JIT::linkFor):
* jit/JIT.h:
(JIT):
* jit/JITStubs.cpp:
(JSC::DEFINE_STUB_FUNCTION):
(JSC::lazyLinkFor):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::setUpCall):

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@153180 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 07f66d4a

bytecode/CodeBlock.cpp:
@@ -112,6 +112,8 @@ void CodeBlock::dumpAssumingJITType(PrintStream& out, JITCode::JITType jitType)
out.print(inferredName(), "#", hash(), ":[", RawPointer(this), "->", RawPointer(ownerExecutable()), ", ", jitType, codeType());
if (codeType() == FunctionCode)
out.print(specializationKind());
if (m_shouldAlwaysBeInlined)
out.print(" (SABI)");
out.print("]");
}
@@ -1569,6 +1571,7 @@ CodeBlock::CodeBlock(CopyParsedBlockTag, CodeBlock& other)
, m_numCalleeRegisters(other.m_numCalleeRegisters)
, m_numVars(other.m_numVars)
, m_isConstructor(other.m_isConstructor)
, m_shouldAlwaysBeInlined(true)
, m_unlinkedCode(*other.m_vm, other.m_ownerExecutable.get(), other.m_unlinkedCode.get())
, m_ownerExecutable(*other.m_vm, other.m_ownerExecutable.get(), other.m_ownerExecutable.get())
, m_vm(other.m_vm)
@@ -1616,6 +1619,7 @@ CodeBlock::CodeBlock(ScriptExecutable* ownerExecutable, UnlinkedCodeBlock* unlin
, m_numCalleeRegisters(unlinkedCodeBlock->m_numCalleeRegisters)
, m_numVars(unlinkedCodeBlock->m_numVars)
, m_isConstructor(unlinkedCodeBlock->isConstructor())
, m_shouldAlwaysBeInlined(true)
, m_unlinkedCode(globalObject->vm(), ownerExecutable, unlinkedCodeBlock)
, m_ownerExecutable(globalObject->vm(), ownerExecutable, ownerExecutable)
, m_vm(unlinkedCodeBlock->vm())
@@ -2587,6 +2591,12 @@ void CodeBlock::unlinkCalls()
}
}
void CodeBlock::linkIncomingCall(ExecState* callerFrame, CallLinkInfo* incoming)
{
noticeIncomingCall(callerFrame);
m_incomingCalls.push(incoming);
}
void CodeBlock::unlinkIncomingCalls()
{
#if ENABLE(LLINT)
@@ -2602,6 +2612,12 @@ void CodeBlock::unlinkIncomingCalls()
#endif // ENABLE(JIT)
#if ENABLE(LLINT)
void CodeBlock::linkIncomingCall(ExecState* callerFrame, LLIntCallLinkInfo* incoming)
{
noticeIncomingCall(callerFrame);
m_incomingLLIntCalls.push(incoming);
}
Instruction* CodeBlock::adjustPCIfAtCallSite(Instruction* potentialReturnPC)
{
ASSERT(potentialReturnPC);
@@ -2964,6 +2980,70 @@ JSGlobalObject* CodeBlock::globalObjectFor(CodeOrigin codeOrigin)
return jsCast<FunctionExecutable*>(codeOrigin.inlineCallFrame->executable.get())->generatedBytecode().globalObject();
}
void CodeBlock::noticeIncomingCall(ExecState* callerFrame)
{
CodeBlock* callerCodeBlock = callerFrame->codeBlock();
if (Options::verboseOSR())
dataLog("Noticing call link from ", *callerCodeBlock, " to ", *this, "\n");
if (!m_shouldAlwaysBeInlined)
return;
if (!hasBaselineJITProfiling())
return;
if (!DFG::mightInlineFunction(this))
return;
if (!canInline(m_capabilityLevelState))
return;
if (callerCodeBlock->jitType() == JITCode::InterpreterThunk) {
// If the caller is still in the interpreter, then we can't expect inlining to
// happen anytime soon. Assume it's profitable to optimize it separately. This
// ensures that a function is SABI only if it is called no more frequently than
// any of its callers.
m_shouldAlwaysBeInlined = false;
if (Options::verboseOSR())
dataLog(" Marking SABI because caller is in LLInt.\n");
return;
}
if (callerCodeBlock->codeType() != FunctionCode) {
// If the caller is either eval or global code, assume that that won't be
// optimized anytime soon. For eval code this is particularly true since we
// delay eval optimization by a *lot*.
m_shouldAlwaysBeInlined = false;
if (Options::verboseOSR())
dataLog(" Marking SABI because caller is not a function.\n");
return;
}
ExecState* frame = callerFrame;
for (unsigned i = Options::maximumInliningDepth(); i--; frame = frame->callerFrame()) {
if (frame->hasHostCallFrameFlag())
break;
if (frame->codeBlock() == this) {
// Recursive calls won't be inlined.
if (Options::verboseOSR())
dataLog(" Marking SABI because recursion was detected.\n");
m_shouldAlwaysBeInlined = false;
return;
}
}
RELEASE_ASSERT(callerCodeBlock->m_capabilityLevelState != DFG::CapabilityLevelNotSet);
if (canCompile(callerCodeBlock->m_capabilityLevelState))
return;
if (Options::verboseOSR())
dataLog(" Marking SABI because the caller is not a DFG candidate.\n");
m_shouldAlwaysBeInlined = false;
}
unsigned CodeBlock::reoptimizationRetryCounter() const
{
ASSERT(m_reoptimizationRetryCounter <= Options::reoptimizationRetryCounterMax());

bytecode/CodeBlock.h:
/*
* Copyright (C) 2008, 2009, 2010, 2011, 2012, 2013 Apple Inc. All rights reserved.
* Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of Apple Computer, Inc. ("Apple") nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY APPLE AND ITS CONTRIBUTORS "AS IS" AND ANY
* EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
* WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
* DISCLAIMED. IN NO EVENT SHALL APPLE OR ITS CONTRIBUTORS BE LIABLE FOR ANY
* DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
* (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
* ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef CodeBlock_h
#define CodeBlock_h
@@ -218,14 +218,9 @@ public:
}
void unlinkCalls();
bool hasIncomingCalls() { return m_incomingCalls.begin() != m_incomingCalls.end(); }
void linkIncomingCall(CallLinkInfo* incoming)
{
m_incomingCalls.push(incoming);
}
void linkIncomingCall(ExecState* callerFrame, CallLinkInfo*);
bool isIncomingCallAlreadyLinked(CallLinkInfo* incoming)
{
return m_incomingCalls.isOnList(incoming);
@@ -233,10 +228,7 @@ public:
#endif // ENABLE(JIT)
#if ENABLE(LLINT)
void linkIncomingCall(LLIntCallLinkInfo* incoming)
{
m_incomingLLIntCalls.push(incoming);
}
void linkIncomingCall(ExecState* callerFrame, LLIntCallLinkInfo*);
#endif // ENABLE(LLINT)
void unlinkIncomingCalls();
@@ -920,6 +912,8 @@ public:
// concurrent compilation threads finish what they're doing.
ConcurrentJITLock m_lock;
bool m_shouldAlwaysBeInlined;
protected:
#if ENABLE(JIT)
virtual CompilationResult jitCompileImpl(ExecState*) = 0;
@@ -936,7 +930,9 @@ protected:
private:
friend class DFGCodeBlocks;
void noticeIncomingCall(ExecState* callerFrame);
double optimizationThresholdScalingFactor();
#if ENABLE(JIT)

dfg/DFGCapabilities.h:
@@ -118,6 +118,11 @@ inline bool mightInlineFunctionFor(CodeBlock* codeBlock, CodeSpecializationKind
return mightInlineFunctionForConstruct(codeBlock);
}
inline bool mightInlineFunction(CodeBlock* codeBlock)
{
return mightInlineFunctionFor(codeBlock, codeBlock->specializationKind());
}
inline bool canInlineFunctionFor(CodeBlock* codeBlock, CodeSpecializationKind kind, bool isClosureCall)
{
if (isClosureCall) {

dfg/DFGPlan.cpp:
@@ -116,7 +116,7 @@ void Plan::compileInThread(LongLivedState& longLivedState)
pathName = "FTL";
break;
}
dataLog("Compiled ", *codeBlock, " with ", pathName, " in ", currentTimeMS() - before, " ms.\n");
dataLog("Optimized ", *codeBlock->alternative(), " with ", pathName, " in ", currentTimeMS() - before, " ms.\n");
}
}

dfg/DFGRepatch.cpp:
@@ -1129,6 +1129,10 @@ void dfgLinkFor(ExecState* exec, CallLinkInfo& callLinkInfo, CodeBlock* calleeCo
{
ASSERT(!callLinkInfo.stub);
// If you're being call-linked from a DFG caller then you obviously didn't get inlined.
if (calleeCodeBlock)
calleeCodeBlock->m_shouldAlwaysBeInlined = false;
CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock();
VM* vm = callerCodeBlock->vm();
@@ -1140,7 +1144,7 @@ void dfgLinkFor(ExecState* exec, CallLinkInfo& callLinkInfo, CodeBlock* calleeCo
repatchBuffer.relink(callLinkInfo.hotPathOther, codePtr);
if (calleeCodeBlock)
calleeCodeBlock->linkIncomingCall(&callLinkInfo);
calleeCodeBlock->linkIncomingCall(exec->callerFrame(), &callLinkInfo);
if (kind == CodeForCall) {
repatchBuffer.relink(callLinkInfo.callReturnLocation, vm->getCTIStub(linkClosureCallThunkGenerator).code());

interpreter/Interpreter.cpp:
@@ -992,6 +992,7 @@ JSValue Interpreter::executeCall(CallFrame* callFrame, JSObject* function, CallT
}
newCodeBlock = &callData.js.functionExecutable->generatedBytecodeForCall();
ASSERT(!!newCodeBlock);
newCodeBlock->m_shouldAlwaysBeInlined = false;
} else
newCodeBlock = 0;
@@ -1070,6 +1071,7 @@ JSObject* Interpreter::executeConstruct(CallFrame* callFrame, JSObject* construc
}
newCodeBlock = &constructData.js.functionExecutable->generatedBytecodeForConstruct();
ASSERT(!!newCodeBlock);
newCodeBlock->m_shouldAlwaysBeInlined = false;
} else
newCodeBlock = 0;
@@ -1137,6 +1139,7 @@ CallFrameClosure Interpreter::prepareForRepeatCall(FunctionExecutable* functionE
return CallFrameClosure();
}
CodeBlock* newCodeBlock = &functionExecutable->generatedBytecodeForCall();
newCodeBlock->m_shouldAlwaysBeInlined = false;
size_t argsCount = argumentCountIncludingThis;

jit/JIT.cpp:
@@ -35,8 +35,7 @@ JSC::MacroAssemblerX86Common::SSE2CheckState JSC::MacroAssemblerX86Common::s_sse
#endif
#include "CodeBlock.h"
#include <wtf/CryptographicallyRandomNumber.h>
#include "DFGNode.h" // for DFG_SUCCESS_STATS
#include "DFGCapabilities.h"
#include "Interpreter.h"
#include "JITInlines.h"
#include "JITStubCall.h"
@@ -47,6 +46,7 @@ JSC::MacroAssemblerX86Common::SSE2CheckState JSC::MacroAssemblerX86Common::s_sse
#include "RepatchBuffer.h"
#include "ResultType.h"
#include "SamplingTool.h"
#include <wtf/CryptographicallyRandomNumber.h>
using namespace std;
@@ -587,6 +587,18 @@ PassRefPtr<JITCode> JIT::privateCompile(CodePtr* functionEntryArityCheck, JITCom
RELEASE_ASSERT_NOT_REACHED();
break;
}
switch (m_codeBlock->codeType()) {
case GlobalCode:
case EvalCode:
m_codeBlock->m_shouldAlwaysBeInlined = false;
break;
case FunctionCode:
// We could have already set it to false because we detected an uninlineable call.
// Don't override that observation.
m_codeBlock->m_shouldAlwaysBeInlined &= canInline(level) && DFG::mightInlineFunction(m_codeBlock);
break;
}
#endif
if (Options::showDisassembly() || m_vm->m_perBytecodeProfiler)
@@ -670,6 +682,7 @@ PassRefPtr<JITCode> JIT::privateCompile(CodePtr* functionEntryArityCheck, JITCom
jump(functionBody);
arityCheck = label();
store8(TrustedImm32(0), &m_codeBlock->m_shouldAlwaysBeInlined);
preserveReturnAddressAfterCall(regT2);
emitPutToCallFrameHeader(regT2, JSStack::ReturnPC);
emitPutImmediateToCallFrameHeader(m_codeBlock, JSStack::CodeBlock);
@@ -805,7 +818,7 @@ PassRefPtr<JITCode> JIT::privateCompile(CodePtr* functionEntryArityCheck, JITCom
return adoptRef(new DirectJITCode(result, JITCode::BaselineJIT));
}
void JIT::linkFor(JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, JIT::CodePtr code, CallLinkInfo* callLinkInfo, VM* vm, CodeSpecializationKind kind)
void JIT::linkFor(ExecState* exec, JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, JIT::CodePtr code, CallLinkInfo* callLinkInfo, VM* vm, CodeSpecializationKind kind)
{
RepatchBuffer repatchBuffer(callerCodeBlock);
@@ -815,7 +828,7 @@ void JIT::linkFor(JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* cal
repatchBuffer.relink(callLinkInfo->hotPathOther, code);
if (calleeCodeBlock)
calleeCodeBlock->linkIncomingCall(callLinkInfo);
calleeCodeBlock->linkIncomingCall(exec, callLinkInfo);
// Patch the slow patch so we do not continue to try to link.
if (kind == CodeForCall) {

jit/JIT.h:
@@ -393,7 +393,7 @@ namespace JSC {
return jit.privateCompilePatchGetArrayLength(returnAddress);
}
static void linkFor(JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, CodePtr, CallLinkInfo*, VM*, CodeSpecializationKind);
static void linkFor(ExecState*, JSFunction* callee, CodeBlock* callerCodeBlock, CodeBlock* calleeCodeBlock, CodePtr, CallLinkInfo*, VM*, CodeSpecializationKind);
static void linkSlowCall(CodeBlock* callerCodeBlock, CallLinkInfo*);
private:

jit/JITStubs.cpp:
@@ -962,6 +962,12 @@ DEFINE_STUB_FUNCTION(void, optimize)
CodeBlock* codeBlock = callFrame->codeBlock();
unsigned bytecodeIndex = stackFrame.args[0].int32();
if (bytecodeIndex) {
// If we're attempting to OSR from a loop, assume that this should be
// separately optimized.
codeBlock->m_shouldAlwaysBeInlined = false;
}
if (Options::verboseOSR()) {
dataLog(
*codeBlock, ": Entered optimize with bytecodeIndex = ", bytecodeIndex,
@@ -978,7 +984,15 @@ DEFINE_STUB_FUNCTION(void, optimize)
if (!codeBlock->checkIfOptimizationThresholdReached()) {
codeBlock->updateAllPredictions();
if (Options::verboseOSR())
dataLog("Choosing not to optimize ", *codeBlock, " yet.\n");
dataLog("Choosing not to optimize ", *codeBlock, " yet, because the threshold hasn't been reached.\n");
return;
}
if (codeBlock->m_shouldAlwaysBeInlined) {
codeBlock->updateAllPredictions();
codeBlock->optimizeAfterWarmUp();
if (Options::verboseOSR())
dataLog("Choosing not to optimize ", *codeBlock, " yet, because m_shouldAlwaysBeInlined == true.\n");
return;
}
@@ -1034,8 +1048,12 @@ DEFINE_STUB_FUNCTION(void, optimize)
// code block. Obviously that's unfortunate and we'd rather not have that
// happen, but it can happen, and if it did then the jettisoning logic will
// have set our threshold appropriately and we have nothing left to do.
if (!codeBlock->hasOptimizedReplacement())
if (!codeBlock->hasOptimizedReplacement()) {
codeBlock->updateAllPredictions();
if (Options::verboseOSR())
dataLog("Code block ", *codeBlock, " was compiled but it doesn't have an optimized replacement.\n");
return;
}
} else if (codeBlock->hasOptimizedReplacement()) {
if (Options::verboseOSR())
dataLog("Considering OSR ", *codeBlock, " -> ", *codeBlock->replacement(), ".\n");
@@ -1066,7 +1084,7 @@ DEFINE_STUB_FUNCTION(void, optimize)
if (Options::verboseOSR()) {
dataLog(
"Delaying optimization for ", *codeBlock,
" (in loop) because of insufficient profiling.\n");
" because of insufficient profiling.\n");
}
return;
}
@@ -1336,12 +1354,12 @@ inline void* lazyLinkFor(CallFrame* callFrame, CodeSpecializationKind kind)
else
codePtr = functionExecutable->generatedJITCodeFor(kind)->addressForCall();
}
ConcurrentJITLocker locker(callFrame->callerFrame()->codeBlock()->m_lock);
if (!callLinkInfo->seenOnce())
callLinkInfo->setSeen();
else
JIT::linkFor(callee, callFrame->callerFrame()->codeBlock(), codeBlock, codePtr, callLinkInfo, &callFrame->vm(), kind);
JIT::linkFor(callFrame->callerFrame(), callee, callFrame->callerFrame()->codeBlock(), codeBlock, codePtr, callLinkInfo, &callFrame->vm(), kind);
return codePtr.executableAddress();
}

llint/LLIntSlowPaths.cpp:
@@ -1434,16 +1434,18 @@ inline SlowPathReturnType setUpCall(ExecState* execCallee, Instruction* pc, Code
if (!LLINT_ALWAYS_ACCESS_SLOW && callLinkInfo) {
ExecState* execCaller = execCallee->callerFrame();
CodeBlock* callerCodeBlock = execCaller->codeBlock();
ConcurrentJITLocker locker(execCaller->codeBlock()->m_lock);
ConcurrentJITLocker locker(callerCodeBlock->m_lock);
if (callLinkInfo->isOnList())
callLinkInfo->remove();
callLinkInfo->callee.set(vm, execCaller->codeBlock()->ownerExecutable(), callee);
callLinkInfo->lastSeenCallee.set(vm, execCaller->codeBlock()->ownerExecutable(), callee);
callLinkInfo->callee.set(vm, callerCodeBlock->ownerExecutable(), callee);
callLinkInfo->lastSeenCallee.set(vm, callerCodeBlock->ownerExecutable(), callee);
callLinkInfo->machineCodeTarget = codePtr;
if (codeBlock)
codeBlock->linkIncomingCall(callLinkInfo);
codeBlock->linkIncomingCall(execCaller, callLinkInfo);
}
LLINT_CALL_RETURN(execCallee, pc, codePtr.executableAddress());