Commit 9089acbe authored by fpizlo@apple.com

Get rid of forward exit on UInt32ToNumber by adding an op_unsigned bytecode instruction

https://bugs.webkit.org/show_bug.cgi?id=125553

Reviewed by Oliver Hunt.
        
UInt32ToNumber was a super complicated node because it had to do a speculation, but it
would do it after we already had computed the urshift. It couldn't just jump back to the
beginning of the urshift because the inputs to the urshift weren't necessarily live
anymore. We couldn't jump forward to the beginning of the next instruction because the
result of the urshift was not yet unsigned-converted.
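
To make the problem concrete, here is a standalone C++ sketch of the underlying
semantics (illustration only, not code from this patch): ">>>" computes a 32-bit
shift, but JavaScript reads the result as unsigned, so a result with the top bit
set does not fit in an int32 and must become a double.

    #include <cassert>
    #include <cstdint>

    int main()
    {
        // JS: -1 >>> 0 produces the bit pattern 0xFFFFFFFF.
        int32_t raw = static_cast<int32_t>(static_cast<uint32_t>(-1) >> 0);
        assert(raw == -1); // Read as a signed int32 (two's complement), this is -1...

        // ...but JS wants the unsigned reading, which exceeds INT32_MAX and so
        // must be represented as a double. That conversion is exactly the job
        // of UInt32ToNumber (and now of op_unsigned).
        double jsResult = static_cast<uint32_t>(raw);
        assert(jsResult == 4294967295.0);
        return 0;
    }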
        
For a while we solved this by forward-exiting in UInt32ToNumber. But that's really
gross and I want to get rid of all forward exits. They cause a lot of bugs.
        
We could also have turned UInt32ToNumber into a backwards exit by forcing the inputs to
the urshift to be live. I figure that this might be a bit too extreme.
        
So, I just created a new place that we can exit to: I split op_urshift into op_urshift
followed by op_unsigned. op_unsigned is an "unsigned cast" along the lines of what
UInt32ToNumber does. This allows me to get rid of all of the nastiness in the DFG for
forward exiting in UInt32ToNumber.
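
As a rough sketch of the cast that op_unsigned/UInt32ToNumber performs
(illustration only; BoxedNumber and unsignedCast are invented names, not JSC
types), the add-2^32-and-rebox path matches the AssemblyHelpers::twoToThe32
adjustment in the slow paths removed below:

    #include <cstdint>

    // Hypothetical boxed-number representation, for illustration only.
    struct BoxedNumber {
        bool isInt32;
        int32_t int32Payload;
        double doublePayload;
    };

    // The "unsigned cast": reinterpret an int32 shift result as a uint32.
    // If the sign bit is clear, the value is already a valid int32 and stays
    // on the fast path; if it is set, the unsigned value exceeds INT32_MAX,
    // so rebox it as a double by adding 2^32.
    BoxedNumber unsignedCast(int32_t shiftResult)
    {
        if (shiftResult >= 0)
            return { true, shiftResult, 0.0 };
        return { false, 0, static_cast<double>(shiftResult) + 4294967296.0 };
    }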
        
This patch enables massive code carnage in the DFG and FTL, and brings us closer to
eliminating one of the DFG's most confusing concepts. On the flip side, it does make the
bytecode slightly more complex (one new instruction). This is a profitable trade. We
want the DFG and FTL to trend towards simplicity, since they are both currently too
complicated.
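
The split, illustrated for "x >>> y" (hand-written bytecode shapes; offsets and
formatting are approximate, not actual dumpBytecode output):

    // Before: one op did both the shift and the unsigned conversion, so a
    // failed unsigned check had to exit forward, past the urshift:
    //
    //     urshift  dst, x, y
    //
    // After: two ops. The check done by op_unsigned's UInt32ToNumber can now
    // exit backward to op_unsigned itself, whose only input (dst, the raw
    // int32 shift result) is still live at that point:
    //
    //     urshift  dst, x, y
    //     unsigned dst, dst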

* bytecode/BytecodeUseDef.h:
(JSC::computeUsesForBytecodeOffset):
(JSC::computeDefsForBytecodeOffset):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dumpBytecode):
* bytecode/Opcode.h:
(JSC::padOpcodeName):
* bytecode/ValueRecovery.cpp:
(JSC::ValueRecovery::dumpInContext):
* bytecode/ValueRecovery.h:
(JSC::ValueRecovery::gpr):
* bytecompiler/NodesCodegen.cpp:
(JSC::BinaryOpNode::emitBytecode):
(JSC::emitReadModifyAssignment):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::toInt32):
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGNodeType.h:
* dfg/DFGOSRExitCompiler32_64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGOSRExitCompiler64.cpp:
(JSC::DFG::OSRExitCompiler::compileExit):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileMovHint):
(JSC::DFG::SpeculativeJIT::compileUInt32ToNumber):
* dfg/DFGSpeculativeJIT.h:
* dfg/DFGSpeculativeJIT32_64.cpp:
* dfg/DFGSpeculativeJIT64.cpp:
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
(JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild):
(JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild1):
(JSC::DFG::StrengthReductionPhase::convertToIdentityOverChild2):
* ftl/FTLFormattedValue.h:
(JSC::FTL::int32Value):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::compileUInt32ToNumber):
* ftl/FTLValueFormat.cpp:
(JSC::FTL::reboxAccordingToFormat):
(WTF::printInternal):
* ftl/FTLValueFormat.h:
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
(JSC::JIT::privateCompileSlowCases):
* jit/JIT.h:
* jit/JITArithmetic.cpp:
(JSC::JIT::emit_op_urshift):
(JSC::JIT::emitSlow_op_urshift):
(JSC::JIT::emit_op_unsigned):
(JSC::JIT::emitSlow_op_unsigned):
* jit/JITArithmetic32_64.cpp:
(JSC::JIT::emitRightShift):
(JSC::JIT::emitRightShiftSlowCase):
(JSC::JIT::emit_op_unsigned):
(JSC::JIT::emitSlow_op_unsigned):
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL):
* runtime/CommonSlowPaths.h:



git-svn-id: http://svn.webkit.org/repository/webkit/trunk@160587 268f45cc-cd09-0410-ab3c-d52691b4dbfc

--- bytecode/BytecodeUseDef.h
+++ bytecode/BytecodeUseDef.h
@@ -156,7 +156,8 @@ void computeUsesForBytecodeOffset(
     case op_new_array_with_size:
     case op_create_this:
     case op_get_pnames:
-    case op_del_by_id: {
+    case op_del_by_id:
+    case op_unsigned: {
         functor(codeBlock, instruction, opcodeID, instruction[2].u.operand);
         return;
     }
@@ -390,7 +391,8 @@ void computeDefsForBytecodeOffset(CodeBlock* codeBlock, unsigned bytecodeOffset,
     case op_create_activation:
     case op_create_arguments:
     case op_del_by_id:
-    case op_del_by_val: {
+    case op_del_by_val:
+    case op_unsigned: {
         functor(codeBlock, instruction, opcodeID, instruction[1].u.operand);
         return;
     }
--- bytecode/CodeBlock.cpp
+++ bytecode/CodeBlock.cpp
@@ -901,6 +901,10 @@ void CodeBlock::dumpBytecode(PrintStream& out, ExecState* exec, const Instructio
             out.printf("%s, %s, %s", registerName(r0).data(), registerName(r1).data(), registerName(r2).data());
             break;
         }
+        case op_unsigned: {
+            printUnaryOp(out, exec, location, it, "unsigned");
+            break;
+        }
         case op_typeof: {
             printUnaryOp(out, exec, location, it, "typeof");
             break;
--- bytecode/Opcode.h
+++ bytecode/Opcode.h
@@ -82,6 +82,7 @@ namespace JSC {
     macro(op_lshift, 4) \
     macro(op_rshift, 4) \
     macro(op_urshift, 4) \
+    macro(op_unsigned, 3) \
     macro(op_bitand, 5) \
     macro(op_bitxor, 5) \
     macro(op_bitor, 5) \
--- bytecode/ValueRecovery.cpp
+++ bytecode/ValueRecovery.cpp
@@ -83,9 +83,6 @@ void ValueRecovery::dumpInContext(PrintStream& out, DumpContext* context) const
     case UnboxedCellInGPR:
         out.print("cell(", gpr(), ")");
         return;
-    case UInt32InGPR:
-        out.print("uint32(", gpr(), ")");
-        return;
     case InFPR:
         out.print(fpr());
         return;
--- bytecode/ValueRecovery.h
+++ bytecode/ValueRecovery.h
@@ -54,7 +54,6 @@ enum ValueRecoveryTechnique {
     InPair,
 #endif
     InFPR,
-    UInt32InGPR,
     // It's in the stack, but at a different location.
     DisplacedInJSStack,
     // It's in the stack, at a different location, and it's unboxed.
@@ -105,14 +104,6 @@ public:
         return result;
     }
-
-    static ValueRecovery uint32InGPR(MacroAssembler::RegisterID gpr)
-    {
-        ValueRecovery result;
-        result.m_technique = UInt32InGPR;
-        result.m_source.gpr = gpr;
-        return result;
-    }
 
 #if USE(JSVALUE32_64)
     static ValueRecovery inPair(MacroAssembler::RegisterID tagGPR, MacroAssembler::RegisterID payloadGPR)
     {
@@ -209,7 +200,7 @@ public:
     MacroAssembler::RegisterID gpr() const
     {
-        ASSERT(m_technique == InGPR || m_technique == UnboxedInt32InGPR || m_technique == UnboxedBooleanInGPR || m_technique == UInt32InGPR || m_technique == UnboxedInt52InGPR || m_technique == UnboxedStrictInt52InGPR || m_technique == UnboxedCellInGPR);
+        ASSERT(m_technique == InGPR || m_technique == UnboxedInt32InGPR || m_technique == UnboxedBooleanInGPR || m_technique == UnboxedInt52InGPR || m_technique == UnboxedStrictInt52InGPR || m_technique == UnboxedCellInGPR);
         return m_source.gpr;
     }
--- bytecompiler/NodesCodegen.cpp
+++ bytecompiler/NodesCodegen.cpp
@@ -1168,7 +1168,10 @@ RegisterID* BinaryOpNode::emitBytecode(BytecodeGenerator& generator, RegisterID*
         RELEASE_ASSERT_NOT_REACHED();
         return generator.emitUnaryOp(op_not, generator.finalDestination(dst, tmp.get()), tmp.get());
     }
-    return generator.emitBinaryOp(opcodeID, generator.finalDestination(dst, src1.get()), src1.get(), src2, OperandTypes(left->resultDescriptor(), right->resultDescriptor()));
+    RegisterID* result = generator.emitBinaryOp(opcodeID, generator.finalDestination(dst, src1.get()), src1.get(), src2, OperandTypes(left->resultDescriptor(), right->resultDescriptor()));
+    if (opcodeID == op_urshift && dst != generator.ignoredResult())
+        return generator.emitUnaryOp(op_unsigned, result, result);
+    return result;
 }
 
 RegisterID* EqualNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
@@ -1335,7 +1338,10 @@ static ALWAYS_INLINE RegisterID* emitReadModifyAssignment(BytecodeGenerator& gen
     // If this is required the node is passed as 'emitExpressionInfoForMe'; do so now.
     if (emitExpressionInfoForMe)
         generator.emitExpressionInfo(emitExpressionInfoForMe->divot(), emitExpressionInfoForMe->divotStart(), emitExpressionInfoForMe->divotEnd());
-    return generator.emitBinaryOp(opcodeID, dst, src1, src2, types);
+    RegisterID* result = generator.emitBinaryOp(opcodeID, dst, src1, src2, types);
+    if (oper == OpURShift)
+        return generator.emitUnaryOp(op_unsigned, result, result);
+    return result;
 }
 
 RegisterID* ReadModifyResolveNode::emitBytecode(BytecodeGenerator& generator, RegisterID* dst)
--- dfg/DFGByteCodeParser.cpp
+++ dfg/DFGByteCodeParser.cpp
@@ -513,9 +513,6 @@ private:
         if (node->hasInt32Result())
             return node;
 
-        if (node->op() == UInt32ToNumber)
-            return node->child1().node();
-
         // Check for numeric constants boxed as JSValues.
         if (canFold(node)) {
             JSValue v = valueOfJSConstant(node);
@@ -2050,55 +2047,32 @@ bool ByteCodeParser::parseBlock(unsigned limit)
         case op_rshift: {
             Node* op1 = getToInt32(currentInstruction[2].u.operand);
             Node* op2 = getToInt32(currentInstruction[3].u.operand);
-            Node* result;
-            // Optimize out shifts by zero.
-            if (isInt32Constant(op2) && !(valueOfInt32Constant(op2) & 0x1f))
-                result = op1;
-            else
-                result = addToGraph(BitRShift, op1, op2);
-            set(VirtualRegister(currentInstruction[1].u.operand), result);
+            set(VirtualRegister(currentInstruction[1].u.operand),
+                addToGraph(BitRShift, op1, op2));
             NEXT_OPCODE(op_rshift);
         }
 
         case op_lshift: {
             Node* op1 = getToInt32(currentInstruction[2].u.operand);
             Node* op2 = getToInt32(currentInstruction[3].u.operand);
-            Node* result;
-            // Optimize out shifts by zero.
-            if (isInt32Constant(op2) && !(valueOfInt32Constant(op2) & 0x1f))
-                result = op1;
-            else
-                result = addToGraph(BitLShift, op1, op2);
-            set(VirtualRegister(currentInstruction[1].u.operand), result);
+            set(VirtualRegister(currentInstruction[1].u.operand),
+                addToGraph(BitLShift, op1, op2));
             NEXT_OPCODE(op_lshift);
         }
 
         case op_urshift: {
             Node* op1 = getToInt32(currentInstruction[2].u.operand);
             Node* op2 = getToInt32(currentInstruction[3].u.operand);
-            Node* result;
-            // The result of a zero-extending right shift is treated as an unsigned value.
-            // This means that if the top bit is set, the result is not in the int32 range,
-            // and as such must be stored as a double. If the shift amount is a constant,
-            // we may be able to optimize.
-            if (isInt32Constant(op2)) {
-                // If we know we are shifting by a non-zero amount, then since the operation
-                // zero fills we know the top bit of the result must be zero, and as such the
-                // result must be within the int32 range. Conversely, if this is a shift by
-                // zero, then the result may be changed by the conversion to unsigned, but it
-                // is not necessary to perform the shift!
-                if (valueOfInt32Constant(op2) & 0x1f)
-                    result = addToGraph(BitURShift, op1, op2);
-                else
-                    result = makeSafe(addToGraph(UInt32ToNumber, op1));
-            } else {
-                // Cannot optimize at this stage; shift & potentially rebox as a double.
-                result = addToGraph(BitURShift, op1, op2);
-                result = makeSafe(addToGraph(UInt32ToNumber, result));
-            }
-            set(VirtualRegister(currentInstruction[1].u.operand), result);
+            set(VirtualRegister(currentInstruction[1].u.operand),
+                addToGraph(BitURShift, op1, op2));
             NEXT_OPCODE(op_urshift);
         }
+
+        case op_unsigned: {
+            set(VirtualRegister(currentInstruction[1].u.operand),
+                makeSafe(addToGraph(UInt32ToNumber, getToInt32(currentInstruction[2].u.operand))));
+            NEXT_OPCODE(op_unsigned);
+        }
 
         // === Increment/Decrement opcodes ===
--- dfg/DFGCapabilities.cpp
+++ dfg/DFGCapabilities.cpp
@@ -91,6 +91,7 @@ CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, Instruc
     case op_rshift:
     case op_lshift:
     case op_urshift:
+    case op_unsigned:
     case op_inc:
     case op_dec:
     case op_add:
--- dfg/DFGClobberize.h
+++ dfg/DFGClobberize.h
@@ -118,6 +118,8 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
     case Int52ToValue:
     case CheckInBounds:
     case ConstantStoragePointer:
+    case UInt32ToNumber:
+    case DoubleAsInt32:
         return;
 
     case MovHintAndCheck:
@@ -168,15 +170,6 @@ void clobberize(Graph& graph, Node* node, ReadFunctor& read, WriteFunctor& write
         read(Watchpoint_fire);
         return;
 
-    // These are forward-exiting nodes that assume that the subsequent instruction
-    // is a MovHint, and they try to roll forward over this MovHint in their
-    // execution. This makes hoisting them impossible without additional magic. We
-    // may add such magic eventually, but just not yet.
-    case UInt32ToNumber:
-    case DoubleAsInt32:
-        write(SideState);
-        return;
-
     case ToThis:
     case CreateThis:
         read(MiscFields);
--- dfg/DFGFixupPhase.cpp
+++ dfg/DFGFixupPhase.cpp
@@ -112,6 +112,8 @@ private:
         case UInt32ToNumber: {
             fixEdge<KnownInt32Use>(node->child1());
+            if (bytecodeCanTruncateInteger(node->arithNodeFlags()))
+                node->convertToIdentity();
             break;
         }
--- dfg/DFGNodeType.h
+++ dfg/DFGNodeType.h
@@ -105,7 +105,7 @@ namespace JSC { namespace DFG {
     /* Bitwise operators call ToInt32 on their operands. */\
     macro(ValueToInt32, NodeResultInt32) \
     /* Used to box the result of URShift nodes (result has range 0..2^32-1). */\
-    macro(UInt32ToNumber, NodeResultNumber | NodeExitsForward) \
+    macro(UInt32ToNumber, NodeResultNumber) \
     \
     /* Used to cast known integers to doubles, so as to separate the double form */\
     /* of the value from the integer form. */\
--- dfg/DFGOSRExitCompiler32_64.cpp
+++ dfg/DFGOSRExitCompiler32_64.cpp
@@ -177,7 +177,6 @@ void OSRExitCompiler::compileExit(const OSRExit& exit, const Operands<ValueRecov
         switch (recovery.technique()) {
         case UnboxedInt32InGPR:
-        case UInt32InGPR:
         case UnboxedBooleanInGPR:
         case UnboxedCellInGPR:
             m_jit.store32(
@@ -317,28 +316,6 @@ void OSRExitCompiler::compileExit(const OSRExit& exit, const Operands<ValueRecov
                 AssemblyHelpers::payloadFor(operand));
             break;
 
-        case UInt32InGPR: {
-            m_jit.load32(
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
-                GPRInfo::regT0);
-            AssemblyHelpers::Jump positive = m_jit.branch32(
-                AssemblyHelpers::GreaterThanOrEqual,
-                GPRInfo::regT0, AssemblyHelpers::TrustedImm32(0));
-            m_jit.convertInt32ToDouble(GPRInfo::regT0, FPRInfo::fpRegT0);
-            m_jit.addDouble(
-                AssemblyHelpers::AbsoluteAddress(&AssemblyHelpers::twoToThe32),
-                FPRInfo::fpRegT0);
-            m_jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand));
-            AssemblyHelpers::Jump done = m_jit.jump();
-            positive.link(&m_jit);
-            m_jit.store32(GPRInfo::regT0, AssemblyHelpers::payloadFor(operand));
-            m_jit.store32(
-                AssemblyHelpers::TrustedImm32(JSValue::Int32Tag),
-                AssemblyHelpers::tagFor(operand));
-            done.link(&m_jit);
-            break;
-        }
-
         case Constant:
             m_jit.store32(
                 AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
--- dfg/DFGOSRExitCompiler64.cpp
+++ dfg/DFGOSRExitCompiler64.cpp
@@ -185,7 +185,6 @@ void OSRExitCompiler::compileExit(const OSRExit& exit, const Operands<ValueRecov
         switch (recovery.technique()) {
         case InGPR:
         case UnboxedInt32InGPR:
-        case UInt32InGPR:
        case UnboxedInt52InGPR:
        case UnboxedStrictInt52InGPR:
        case UnboxedCellInGPR:
@@ -283,13 +282,6 @@ void OSRExitCompiler::compileExit(const OSRExit& exit, const Operands<ValueRecov
             m_jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
             break;
 
-        case UInt32InGPR:
-            m_jit.load64(scratch + index, GPRInfo::regT0);
-            m_jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
-            m_jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
-            m_jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
-            break;
-
         case InFPR:
         case DoubleDisplacedInJSStack:
             m_jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
--- dfg/DFGPredictionPropagationPhase.cpp
+++ dfg/DFGPredictionPropagationPhase.cpp
@@ -195,6 +195,8 @@ private:
         }
 
         case UInt32ToNumber: {
+            // FIXME: Support Int52.
+            // https://bugs.webkit.org/show_bug.cgi?id=125704
             if (nodeCanSpeculateInt32(node->arithNodeFlags()))
                 changed |= mergePrediction(SpecInt32);
             else
--- dfg/DFGSpeculativeJIT.cpp
+++ dfg/DFGSpeculativeJIT.cpp
@@ -1425,9 +1425,6 @@ void SpeculativeJIT::compileMovHint(Node* node)
     Node* child = node->child1().node();
     noticeOSRBirth(child);
 
-    if (child->op() == UInt32ToNumber)
-        noticeOSRBirth(child->child1().node());
-
     m_stream->appendAndLog(VariableEvent::movHint(MinifiedID(child), node->local()));
 }
@@ -2160,18 +2157,15 @@ void SpeculativeJIT::compileUInt32ToNumber(Node* node)
         doubleResult(outputFPR, node);
         return;
     }
+
+    RELEASE_ASSERT(!bytecodeCanTruncateInteger(node->arithNodeFlags()));
 
     SpeculateInt32Operand op1(this, node->child1());
-    GPRTemporary result(this); // For the benefit of OSR exit, force these to be in different registers. In reality the OSR exit compiler could find cases where you have uint32(%r1) followed by int32(%r1) and then use different registers, but that seems like too much effort.
+    GPRTemporary result(this);
 
     m_jit.move(op1.gpr(), result.gpr());
 
-    // Test the operand is positive. This is a very special speculation check - we actually
-    // use roll-forward speculation here, where if this fails, we jump to the baseline
-    // instruction that follows us, rather than the one we're executing right now. We have
-    // to do this because by this point, the original values necessary to compile whatever
-    // operation the UInt32ToNumber originated from might be dead.
-    forwardSpeculationCheck(Overflow, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::LessThan, result.gpr(), TrustedImm32(0)), ValueRecovery::uint32InGPR(result.gpr()));
+    speculationCheck(Overflow, JSValueRegs(), 0, m_jit.branch32(MacroAssembler::LessThan, result.gpr(), TrustedImm32(0)));
 
     int32Result(result.gpr(), node, op1.format());
 }
--- dfg/DFGSpeculativeJIT.h
+++ dfg/DFGSpeculativeJIT.h
@@ -698,8 +698,6 @@ public:
     void compileMovHint(Node*);
     void compileMovHintAndCheck(Node*);
 
-    void nonSpeculativeUInt32ToNumber(Node*);
-
 #if USE(JSVALUE64)
     void cachedGetById(CodeOrigin, GPRReg baseGPR, GPRReg resultGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget = JITCompiler::Jump(), SpillRegistersMode = NeedToSpill);
     void cachedPutById(CodeOrigin, GPRReg base, GPRReg value, Edge valueUse, GPRReg scratchGPR, unsigned identifierNumber, PutKind, JITCompiler::Jump slowPathTarget = JITCompiler::Jump());
--- dfg/DFGSpeculativeJIT32_64.cpp
+++ dfg/DFGSpeculativeJIT32_64.cpp
@@ -168,33 +168,6 @@ bool SpeculativeJIT::fillJSValue(Edge edge, GPRReg& tagGPR, GPRReg& payloadGPR,
     }
 }
 
-void SpeculativeJIT::nonSpeculativeUInt32ToNumber(Node* node)
-{
-    SpeculateInt32Operand op1(this, node->child1());
-    FPRTemporary boxer(this);
-    GPRTemporary resultTag(this, Reuse, op1);
-    GPRTemporary resultPayload(this);
-
-    JITCompiler::Jump positive = m_jit.branch32(MacroAssembler::GreaterThanOrEqual, op1.gpr(), TrustedImm32(0));
-
-    m_jit.convertInt32ToDouble(op1.gpr(), boxer.fpr());
-    m_jit.move(JITCompiler::TrustedImmPtr(&AssemblyHelpers::twoToThe32), resultPayload.gpr()); // reuse resultPayload register here.
-    m_jit.addDouble(JITCompiler::Address(resultPayload.gpr(), 0), boxer.fpr());
-
-    boxDouble(boxer.fpr(), resultTag.gpr(), resultPayload.gpr());
-
-    JITCompiler::Jump done = m_jit.jump();
-
-    positive.link(&m_jit);
-
-    m_jit.move(TrustedImm32(JSValue::Int32Tag), resultTag.gpr());
-    m_jit.move(op1.gpr(), resultPayload.gpr());
-
-    done.link(&m_jit);
-
-    jsValueResult(resultTag.gpr(), resultPayload.gpr(), node);
-}
-
 void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseTagGPROrNone, GPRReg basePayloadGPR, GPRReg resultTagGPR, GPRReg resultPayloadGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode)
 {
     JITGetByIdGenerator gen(
--- dfg/DFGSpeculativeJIT64.cpp
+++ dfg/DFGSpeculativeJIT64.cpp
@@ -186,30 +186,6 @@ GPRReg SpeculativeJIT::fillJSValue(Edge edge)
     }
 }
 
-void SpeculativeJIT::nonSpeculativeUInt32ToNumber(Node* node)
-{
-    SpeculateInt32Operand op1(this, node->child1());
-    FPRTemporary boxer(this);
-    GPRTemporary result(this, Reuse, op1);
-
-    JITCompiler::Jump positive = m_jit.branch32(MacroAssembler::GreaterThanOrEqual, op1.gpr(), TrustedImm32(0));
-
-    m_jit.convertInt32ToDouble(op1.gpr(), boxer.fpr());
-    m_jit.addDouble(JITCompiler::AbsoluteAddress(&AssemblyHelpers::twoToThe32), boxer.fpr());
-
-    boxDouble(boxer.fpr(), result.gpr());
-
-    JITCompiler::Jump done = m_jit.jump();
-
-    positive.link(&m_jit);
-
-    m_jit.or64(GPRInfo::tagTypeNumberRegister, op1.gpr(), result.gpr());
-
-    done.link(&m_jit);
-
-    jsValueResult(result.gpr(), m_currentNode);
-}
-
 void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg resultGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget, SpillRegistersMode spillMode)
 {
     JITGetByIdGenerator gen(
--- dfg/DFGStrengthReductionPhase.cpp
+++ dfg/DFGStrengthReductionPhase.cpp
@@ -70,14 +70,40 @@ private:
     {
         switch (m_node->op()) {
         case BitOr:
+            // Optimize X|0 -> X.
+            if (m_node->child1()->isConstant()) {
+                JSValue op1 = m_graph.valueOfJSConstant(m_node->child1().node());
+                if (op1.isInt32() && !op1.asInt32()) {
+                    convertToIdentityOverChild2();
+                    break;
+                }
+            }
             if (m_node->child2()->isConstant()) {
-                JSValue C2 = m_graph.valueOfJSConstant(m_node->child2().node());
-                if (C2.isInt32() && !C2.asInt32()) {
-                    m_insertionSet.insertNode(
-                        m_nodeIndex, SpecNone, Phantom, m_node->codeOrigin,
-                        m_node->child2());
-                    m_node->children.removeEdge(1);
-                    m_node->convertToIdentity();
-                    m_changed = true;
+                JSValue op2 = m_graph.valueOfJSConstant(m_node->child2().node());
+                if (op2.isInt32() && !op2.asInt32()) {
+                    convertToIdentityOverChild1();
+                    break;
                 }
             }
             break;
+
+        case BitLShift:
+        case BitRShift:
+        case BitURShift:
+            if (m_node->child2()->isConstant()) {
+                JSValue op2 = m_graph.valueOfJSConstant(m_node->child2().node());
+                if (op2.isInt32() && !(op2.asInt32() & 0x1f)) {
+                    convertToIdentityOverChild1();
+                    break;
+                }
+            }
+            break;
+
+        case UInt32ToNumber:
+            if (m_node->child1()->op() == BitURShift
+                && m_node->child1()->child2()->isConstant()) {
+                JSValue shiftAmount = m_graph.valueOfJSConstant(
+                    m_node->child1()->child2().node());
+                if (shiftAmount.isInt32() && (shiftAmount.asInt32() & 0x1f)) {
+                    m_node->convertToIdentity();
+                    m_changed = true;
+                    break;
+                }
+            }
+            break;
@@ -116,6 +142,25 @@ private:
             break;
         }
     }
 
+    void convertToIdentityOverChild(unsigned childIndex)
+    {
+        m_insertionSet.insertNode(
+            m_nodeIndex, SpecNone, Phantom, m_node->codeOrigin, m_node->children);
+        m_node->children.removeEdge(childIndex ^ 1);
+        m_node->convertToIdentity();
+        m_changed = true;
+    }
+
+    void convertToIdentityOverChild1()
+    {
+        convertToIdentityOverChild(0);
+    }
+
+    void convertToIdentityOverChild2()
+    {
+        convertToIdentityOverChild(1);
+    }
+
     void foldTypedArrayPropertyToConstant(JSArrayBufferView* view, JSValue constant)
     {
--- ftl/FTLFormattedValue.h
+++ ftl/FTLFormattedValue.h
@@ -72,7 +72,6 @@ private:
 static inline FormattedValue noValue() { return FormattedValue(); }
 static inline FormattedValue int32Value(LValue value) { return FormattedValue(ValueFormatInt32, value); }
-static inline FormattedValue uInt32Value(LValue value) { return FormattedValue(ValueFormatUInt32, value); }
 static inline FormattedValue booleanValue(LValue value) { return FormattedValue(ValueFormatBoolean, value); }
 static inline FormattedValue jsValueValue(LValue value) { return FormattedValue(ValueFormatJSValue, value); }
 static inline FormattedValue doubleValue(LValue value) { return FormattedValue(ValueFormatDouble, value); }
--- ftl/FTLLowerDFGToLLVM.cpp
+++ ftl/FTLLowerDFGToLLVM.cpp
@@ -1171,9 +1171,7 @@ private:
             return;
         }
 
-        speculateForward(
-            Overflow, noValue(), 0, m_out.lessThan(value, m_out.int32Zero),
-            FormattedValue(ValueFormatUInt32, value));
+        speculate(Overflow, noValue(), 0, m_out.lessThan(value, m_out.int32Zero));
         setInt32(value);