Commit c2748329 authored by mhahnenberg@apple.com

Split MarkedSpace into destructor and destructor-free subspaces

https://bugs.webkit.org/show_bug.cgi?id=77761

Reviewed by Geoffrey Garen.

Source/JavaScriptCore: 

* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::emitAllocateJSFinalObject): Switched over to use destructor-free space.
* heap/Heap.h:
(JSC::Heap::allocatorForObjectWithoutDestructor): Added to give clients (e.g. the JIT) the ability to 
pick which subspace they want to allocate out of.
(JSC::Heap::allocatorForObjectWithDestructor): Ditto.
(Heap):
(JSC::Heap::allocateWithDestructor): Added private function for allocateCell to use.
(JSC):
(JSC::Heap::allocateWithoutDestructor): Ditto.
* heap/MarkedAllocator.cpp: Added the cellsNeedDestruction flag to allocators so that they can allocate 
their MarkedBlocks correctly.
(JSC::MarkedAllocator::allocateBlock):
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::cellsNeedDestruction):
(MarkedAllocator):
(JSC::MarkedAllocator::MarkedAllocator):
(JSC):
(JSC::MarkedAllocator::init): Replaced the custom set functions, which were only used during initialization, with
a single init function that performs the same setup more concisely.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::create):
(JSC::MarkedBlock::recycle):
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::callDestructor): Templatized, along with specializedSweep and sweepHelper, to make 
checking the m_cellsNeedDestruction flag faster and the resulting code cleaner.
(JSC):
(JSC::MarkedBlock::specializedSweep):
(JSC::MarkedBlock::sweep):
(JSC::MarkedBlock::sweepHelper):
* heap/MarkedBlock.h:
(MarkedBlock):
(JSC::MarkedBlock::cellsNeedDestruction):
(JSC):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::canonicalizeCellLivenessData):
(JSC::TakeIfUnmarked::operator()):
* heap/MarkedSpace.h:
(MarkedSpace):
(Subspace):
(JSC::MarkedSpace::allocatorFor): Added function to distinguish between the two broad subspaces of 
allocators.
(JSC):
(JSC::MarkedSpace::destructorAllocatorFor): Ditto.
(JSC::MarkedSpace::allocateWithoutDestructor): Ditto.
(JSC::MarkedSpace::allocateWithDestructor): Ditto.
(JSC::MarkedSpace::forEachBlock):
* jit/JIT.h:
* jit/JITInlineMethods.h: Modified to use the proper allocator for JSFinalObjects and others.
(JSC::JIT::emitAllocateBasicJSObject):
(JSC::JIT::emitAllocateJSFinalObject):
(JSC::JIT::emitAllocateJSFunction):
* runtime/JSArray.cpp:
(JSC):
* runtime/JSArray.h:
(JSArray):
(JSC::JSArray::create):
(JSC):
(JSC::JSArray::tryCreateUninitialized):
* runtime/JSCell.h:
(JSCell):
(JSC):
(NeedsDestructor): Template struct that determines at compile time whether the class in question requires 
destruction, using the compiler type trait __has_trivial_destructor. allocateCell then checks this 
constant to decide whether to allocate in the destructor or destructor-free part of the heap.
(JSC::allocateCell): 
* runtime/JSFunction.cpp:
(JSC):
* runtime/JSFunction.h:
(JSFunction):
* runtime/JSObject.cpp:
(JSC):
* runtime/JSObject.h:
(JSNonFinalObject):
(JSC):
(JSFinalObject):
(JSC::JSFinalObject::create):

Source/WebCore: 

No new tests.

* bindings/js/JSDOMWindowShell.cpp: Removed the old operator new, which was only used in the create
function, so that we can use allocateCell instead.
(WebCore):
* bindings/js/JSDOMWindowShell.h:
(WebCore::JSDOMWindowShell::create):
(JSDOMWindowShell):
* bindings/scripts/CodeGeneratorJS.pm: Added destructors back to the root JS DOM classes (e.g. JSNode)
because their destroy functions need to be called; without them, the NeedsDestructor struct would 
conclude from their empty/trivial destructors that they need no destruction.
Removed ASSERT_HAS_TRIVIAL_DESTRUCTOR from the auto-generated JS DOM wrapper objects because their 
ancestors now have non-trivial destructors.
(GenerateHeader):
(GenerateImplementation):
(GenerateConstructorDefinition):


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@107445 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent e3ab598b
@@ -1565,7 +1565,7 @@ private:
     template<typename T>
     void emitAllocateJSFinalObject(T structure, GPRReg resultGPR, GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
     {
-        MarkedAllocator* allocator = &m_jit.globalData()->heap.allocatorForObject(sizeof(JSFinalObject));
+        MarkedAllocator* allocator = &m_jit.globalData()->heap.allocatorForObjectWithoutDestructor(sizeof(JSFinalObject));
         m_jit.loadPtr(&allocator->m_firstFreeCell, resultGPR);
         slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, resultGPR));
@@ -95,8 +95,8 @@ namespace JSC {
         // true if an allocation or collection is in progress
         inline bool isBusy();
-        MarkedAllocator& allocatorForObject(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
-        void* allocate(size_t);
+        MarkedAllocator& allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
+        MarkedAllocator& allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
         CheckedBoolean tryAllocateStorage(size_t, void**);
         CheckedBoolean tryReallocateStorage(void**, size_t, size_t);
@@ -142,6 +142,10 @@ namespace JSC {
         friend class BumpSpace;
         friend class SlotVisitor;
        friend class CodeBlock;
+        template<typename T> friend void* allocateCell(Heap&);
+
+        void* allocateWithDestructor(size_t);
+        void* allocateWithoutDestructor(size_t);
         size_t waterMark();
         size_t highWaterMark();
@@ -334,10 +338,16 @@ namespace JSC {
         return forEachProtectedCell(functor);
     }
-    inline void* Heap::allocate(size_t bytes)
+    inline void* Heap::allocateWithDestructor(size_t bytes)
     {
         ASSERT(isValidAllocation(bytes));
-        return m_objectSpace.allocate(bytes);
+        return m_objectSpace.allocateWithDestructor(bytes);
+    }
+
+    inline void* Heap::allocateWithoutDestructor(size_t bytes)
+    {
+        ASSERT(isValidAllocation(bytes));
+        return m_objectSpace.allocateWithoutDestructor(bytes);
     }
     inline CheckedBoolean Heap::tryAllocateStorage(size_t bytes, void** outPtr)
@@ -97,11 +97,11 @@ MarkedBlock* MarkedAllocator::allocateBlock(AllocationEffort allocationEffort)
         block = 0;
     }
     if (block)
-        block = MarkedBlock::recycle(block, m_heap, m_cellSize);
+        block = MarkedBlock::recycle(block, m_heap, m_cellSize, m_cellsNeedDestruction);
     else if (allocationEffort == AllocationCanFail)
         return 0;
     else
-        block = MarkedBlock::create(m_heap, m_cellSize);
+        block = MarkedBlock::create(m_heap, m_cellSize, m_cellsNeedDestruction);
    m_markedSpace->didAddBlock(block);
@@ -22,6 +22,7 @@ public:
    void reset();
    void zapFreeList();
    size_t cellSize() { return m_cellSize; }
+    bool cellsNeedDestruction() { return m_cellsNeedDestruction; }
    void* allocate();
    Heap* heap() { return m_heap; }
@@ -29,9 +30,7 @@ public:
    void addBlock(MarkedBlock*);
    void removeBlock(MarkedBlock*);
-    void setHeap(Heap* heap) { m_heap = heap; }
-    void setCellSize(size_t cellSize) { m_cellSize = cellSize; }
-    void setMarkedSpace(MarkedSpace* space) { m_markedSpace = space; }
+    void init(Heap*, MarkedSpace*, size_t cellSize, bool cellsNeedDestruction);
private:
    JS_EXPORT_PRIVATE void* allocateSlowCase();
@@ -43,6 +42,7 @@ private:
    MarkedBlock* m_currentBlock;
    DoublyLinkedList<HeapBlock> m_blockList;
    size_t m_cellSize;
+    bool m_cellsNeedDestruction;
    Heap* m_heap;
    MarkedSpace* m_markedSpace;
};
@@ -51,11 +51,20 @@ inline MarkedAllocator::MarkedAllocator()
    : m_firstFreeCell(0)
    , m_currentBlock(0)
    , m_cellSize(0)
+    , m_cellsNeedDestruction(true)
    , m_heap(0)
    , m_markedSpace(0)
{
}
+
+inline void MarkedAllocator::init(Heap* heap, MarkedSpace* markedSpace, size_t cellSize, bool cellsNeedDestruction)
+{
+    m_heap = heap;
+    m_markedSpace = markedSpace;
+    m_cellSize = cellSize;
+    m_cellsNeedDestruction = cellsNeedDestruction;
+}
+
inline void* MarkedAllocator::allocate()
{
    MarkedBlock::FreeCell* firstFreeCell = m_firstFreeCell;
@@ -32,17 +32,17 @@
namespace JSC {
-MarkedBlock* MarkedBlock::create(Heap* heap, size_t cellSize)
+MarkedBlock* MarkedBlock::create(Heap* heap, size_t cellSize, bool cellsNeedDestruction)
{
    PageAllocationAligned allocation = PageAllocationAligned::allocate(blockSize, blockSize, OSAllocator::JSGCHeapPages);
    if (!static_cast<bool>(allocation))
        CRASH();
-    return new (NotNull, allocation.base()) MarkedBlock(allocation, heap, cellSize);
+    return new (NotNull, allocation.base()) MarkedBlock(allocation, heap, cellSize, cellsNeedDestruction);
}
-MarkedBlock* MarkedBlock::recycle(MarkedBlock* block, Heap* heap, size_t cellSize)
+MarkedBlock* MarkedBlock::recycle(MarkedBlock* block, Heap* heap, size_t cellSize, bool cellsNeedDestruction)
{
-    return new (NotNull, block) MarkedBlock(block->m_allocation, heap, cellSize);
+    return new (NotNull, block) MarkedBlock(block->m_allocation, heap, cellSize, cellsNeedDestruction);
}
void MarkedBlock::destroy(MarkedBlock* block)
@@ -50,10 +50,11 @@ void MarkedBlock::destroy(MarkedBlock* block)
    block->m_allocation.deallocate();
}
-MarkedBlock::MarkedBlock(PageAllocationAligned& allocation, Heap* heap, size_t cellSize)
+MarkedBlock::MarkedBlock(PageAllocationAligned& allocation, Heap* heap, size_t cellSize, bool cellsNeedDestruction)
    : HeapBlock(allocation)
    , m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
    , m_endAtom(atomsPerBlock - m_atomsPerCell + 1)
+    , m_cellsNeedDestruction(cellsNeedDestruction)
    , m_state(New) // All cells start out unmarked.
    , m_heap(heap)
{
@@ -70,16 +71,17 @@ inline void MarkedBlock::callDestructor(JSCell* cell)
#if ENABLE(SIMPLE_HEAP_PROFILING)
    m_heap->m_destroyedTypeCounts.countVPtr(vptr);
#endif
-    if (cell->classInfo() != &JSFinalObject::s_info)
-        cell->methodTable()->destroy(cell);
+    ASSERT(cell->classInfo() != &JSFinalObject::s_info);
+    cell->methodTable()->destroy(cell);
    cell->zap();
}
-template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode>
+template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode, bool destructorCallNeeded>
MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
{
    ASSERT(blockState != Allocated && blockState != FreeListed);
+    ASSERT(destructorCallNeeded || sweepMode != SweepOnly);
    // This produces a free list that is ordered in reverse through the block.
    // This is fine, since the allocation code makes no assumptions about the
@@ -93,7 +95,7 @@ MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
        if (blockState == Zapped && !cell->isZapped())
            continue;
-        if (blockState != New)
+        if (destructorCallNeeded && blockState != New)
            callDestructor(cell);
        if (sweepMode == SweepToFreeList) {
@@ -111,10 +113,21 @@ MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
{
    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+    if (sweepMode == SweepOnly && !m_cellsNeedDestruction)
+        return 0;
+
+    if (m_cellsNeedDestruction)
+        return sweepHelper<true>(sweepMode);
+    return sweepHelper<false>(sweepMode);
+}
+
+template<bool destructorCallNeeded>
+MarkedBlock::FreeCell* MarkedBlock::sweepHelper(SweepMode sweepMode)
+{
    switch (m_state) {
    case New:
        ASSERT(sweepMode == SweepToFreeList);
-        return specializedSweep<New, SweepToFreeList>();
+        return specializedSweep<New, SweepToFreeList, destructorCallNeeded>();
    case FreeListed:
        // Happens when a block transitions to fully allocated.
        ASSERT(sweepMode == SweepToFreeList);
@@ -124,12 +137,12 @@ MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
        return 0;
    case Marked:
        return sweepMode == SweepToFreeList
-            ? specializedSweep<Marked, SweepToFreeList>()
-            : specializedSweep<Marked, SweepOnly>();
+            ? specializedSweep<Marked, SweepToFreeList, destructorCallNeeded>()
+            : specializedSweep<Marked, SweepOnly, destructorCallNeeded>();
    case Zapped:
        return sweepMode == SweepToFreeList
-            ? specializedSweep<Zapped, SweepToFreeList>()
-            : specializedSweep<Zapped, SweepOnly>();
+            ? specializedSweep<Zapped, SweepToFreeList, destructorCallNeeded>()
+            : specializedSweep<Zapped, SweepOnly, destructorCallNeeded>();
    }
    ASSERT_NOT_REACHED();
@@ -89,8 +89,8 @@ namespace JSC {
        void returnValue() { }
    };
-    static MarkedBlock* create(Heap*, size_t cellSize);
-    static MarkedBlock* recycle(MarkedBlock*, Heap*, size_t cellSize);
+    static MarkedBlock* create(Heap*, size_t cellSize, bool cellsNeedDestruction);
+    static MarkedBlock* recycle(MarkedBlock*, Heap*, size_t cellSize, bool cellsNeedDestruction);
    static void destroy(MarkedBlock*);
    static bool isAtomAligned(const void*);
@@ -115,6 +115,7 @@ namespace JSC {
    bool markCountIsZero(); // Faster than markCount().
    size_t cellSize();
+    bool cellsNeedDestruction();
    size_t size();
    size_t capacity();
@@ -159,14 +160,15 @@ namespace JSC {
    static const size_t atomAlignmentMask = atomSize - 1; // atomSize must be a power of two.
    enum BlockState { New, FreeListed, Allocated, Marked, Zapped };
+    template<bool destructorCallNeeded> FreeCell* sweepHelper(SweepMode = SweepOnly);
    typedef char Atom[atomSize];
-    MarkedBlock(PageAllocationAligned&, Heap*, size_t cellSize);
+    MarkedBlock(PageAllocationAligned&, Heap*, size_t cellSize, bool cellsNeedDestruction);
    Atom* atoms();
    size_t atomNumber(const void*);
    void callDestructor(JSCell*);
-    template<BlockState, SweepMode> FreeCell* specializedSweep();
+    template<BlockState, SweepMode, bool destructorCallNeeded> FreeCell* specializedSweep();
#if ENABLE(GGC)
    CardSet<bytesPerCard, blockSize> m_cards;
@@ -179,6 +181,7 @@ namespace JSC {
#else
    WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic> m_marks;
#endif
+    bool m_cellsNeedDestruction;
    BlockState m_state;
    Heap* m_heap;
};
@@ -243,6 +246,11 @@ namespace JSC {
    return m_atomsPerCell * atomSize;
}
+inline bool MarkedBlock::cellsNeedDestruction()
+{
+    return m_cellsNeedDestruction;
+}
+
inline size_t MarkedBlock::size()
{
    return markCount() * cellSize();
@@ -36,15 +36,13 @@ MarkedSpace::MarkedSpace(Heap* heap)
    , m_heap(heap)
{
    for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
-        allocatorFor(cellSize).setCellSize(cellSize);
-        allocatorFor(cellSize).setHeap(heap);
-        allocatorFor(cellSize).setMarkedSpace(this);
+        allocatorFor(cellSize).init(heap, this, cellSize, false);
+        destructorAllocatorFor(cellSize).init(heap, this, cellSize, true);
    }
    for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep) {
-        allocatorFor(cellSize).setCellSize(cellSize);
-        allocatorFor(cellSize).setHeap(heap);
-        allocatorFor(cellSize).setMarkedSpace(this);
+        allocatorFor(cellSize).init(heap, this, cellSize, false);
+        destructorAllocatorFor(cellSize).init(heap, this, cellSize, true);
    }
}
@@ -53,20 +51,28 @@ void MarkedSpace::resetAllocators()
    m_waterMark = 0;
    m_nurseryWaterMark = 0;
-    for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep)
+    for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
        allocatorFor(cellSize).reset();
+        destructorAllocatorFor(cellSize).reset();
+    }
-    for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep)
+    for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep) {
        allocatorFor(cellSize).reset();
+        destructorAllocatorFor(cellSize).reset();
+    }
}
void MarkedSpace::canonicalizeCellLivenessData()
{
-    for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep)
+    for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
        allocatorFor(cellSize).zapFreeList();
+        destructorAllocatorFor(cellSize).zapFreeList();
+    }
-    for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep)
+    for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep) {
        allocatorFor(cellSize).zapFreeList();
+        destructorAllocatorFor(cellSize).zapFreeList();
+    }
}
@@ -107,7 +113,7 @@ inline void TakeIfUnmarked::operator()(MarkedBlock* block)
    if (!block->markCountIsZero())
        return;
-    m_markedSpace->allocatorFor(block->cellSize()).removeBlock(block);
+    m_markedSpace->allocatorFor(block).removeBlock(block);
    m_empties.append(block);
}
@@ -52,7 +52,10 @@ public:
    MarkedSpace(Heap*);
    MarkedAllocator& allocatorFor(size_t);
-    void* allocate(size_t);
+    MarkedAllocator& allocatorFor(MarkedBlock*);
+    MarkedAllocator& destructorAllocatorFor(size_t);
+    void* allocateWithDestructor(size_t);
+    void* allocateWithoutDestructor(size_t);
    void resetAllocators();
@@ -86,8 +89,14 @@ private:
    static const size_t impreciseCutoff = maxCellSize;
    static const size_t impreciseCount = impreciseCutoff / impreciseStep;
-    FixedArray<MarkedAllocator, preciseCount> m_preciseSizeClasses;
-    FixedArray<MarkedAllocator, impreciseCount> m_impreciseSizeClasses;
+    struct Subspace {
+        FixedArray<MarkedAllocator, preciseCount> preciseAllocators;
+        FixedArray<MarkedAllocator, impreciseCount> impreciseAllocators;
+    };
+
+    Subspace m_destructorSpace;
+    Subspace m_normalSpace;
    size_t m_waterMark;
    size_t m_nurseryWaterMark;
    Heap* m_heap;
@@ -124,23 +133,45 @@ inline MarkedAllocator& MarkedSpace::allocatorFor(size_t bytes)
{
    ASSERT(bytes && bytes <= maxCellSize);
    if (bytes <= preciseCutoff)
-        return m_preciseSizeClasses[(bytes - 1) / preciseStep];
-    return m_impreciseSizeClasses[(bytes - 1) / impreciseStep];
+        return m_normalSpace.preciseAllocators[(bytes - 1) / preciseStep];
+    return m_normalSpace.impreciseAllocators[(bytes - 1) / impreciseStep];
}
-inline void* MarkedSpace::allocate(size_t bytes)
+inline MarkedAllocator& MarkedSpace::allocatorFor(MarkedBlock* block)
+{
+    if (block->cellsNeedDestruction())
+        return destructorAllocatorFor(block->cellSize());
+    return allocatorFor(block->cellSize());
+}
+
+inline MarkedAllocator& MarkedSpace::destructorAllocatorFor(size_t bytes)
+{
+    ASSERT(bytes && bytes <= maxCellSize);
+    if (bytes <= preciseCutoff)
+        return m_destructorSpace.preciseAllocators[(bytes - 1) / preciseStep];
+    return m_destructorSpace.impreciseAllocators[(bytes - 1) / impreciseStep];
+}
+
+inline void* MarkedSpace::allocateWithoutDestructor(size_t bytes)
{
    return allocatorFor(bytes).allocate();
}
+
+inline void* MarkedSpace::allocateWithDestructor(size_t bytes)
+{
+    return destructorAllocatorFor(bytes).allocate();
+}
+
template <typename Functor> inline typename Functor::ReturnType MarkedSpace::forEachBlock(Functor& functor)
{
    for (size_t i = 0; i < preciseCount; ++i) {
-        m_preciseSizeClasses[i].forEachBlock(functor);
+        m_normalSpace.preciseAllocators[i].forEachBlock(functor);
+        m_destructorSpace.preciseAllocators[i].forEachBlock(functor);
    }
    for (size_t i = 0; i < impreciseCount; ++i) {
-        m_impreciseSizeClasses[i].forEachBlock(functor);
+        m_normalSpace.impreciseAllocators[i].forEachBlock(functor);
+        m_destructorSpace.impreciseAllocators[i].forEachBlock(functor);
    }
    return functor.returnValue();
@@ -335,7 +335,7 @@ namespace JSC {
        void emitWriteBarrier(RegisterID owner, RegisterID valueTag, RegisterID scratch, RegisterID scratch2, WriteBarrierMode, WriteBarrierUseKind);
        void emitWriteBarrier(JSCell* owner, RegisterID value, RegisterID scratch, WriteBarrierMode, WriteBarrierUseKind);
-        template<typename ClassType, typename StructureType> void emitAllocateBasicJSObject(StructureType, RegisterID result, RegisterID storagePtr);
+        template<typename ClassType, bool destructor, typename StructureType> void emitAllocateBasicJSObject(StructureType, RegisterID result, RegisterID storagePtr);
        template<typename T> void emitAllocateJSFinalObject(T structure, RegisterID result, RegisterID storagePtr);
        void emitAllocateJSFunction(FunctionExecutable*, RegisterID scopeChain, RegisterID result, RegisterID storagePtr);
@@ -402,9 +402,13 @@ ALWAYS_INLINE bool JIT::isOperandConstantImmediateChar(unsigned src)
    return m_codeBlock->isConstantRegisterIndex(src) && getConstantOperand(src).isString() && asString(getConstantOperand(src).asCell())->length() == 1;
}
-template <typename ClassType, typename StructureType> inline void JIT::emitAllocateBasicJSObject(StructureType structure, RegisterID result, RegisterID storagePtr)
+template <typename ClassType, bool destructor, typename StructureType> inline void JIT::emitAllocateBasicJSObject(StructureType structure, RegisterID result, RegisterID storagePtr)
{
-    MarkedAllocator* allocator = &m_globalData->heap.allocatorForObject(sizeof(ClassType));
+    MarkedAllocator* allocator = 0;
+    if (destructor)
+        allocator = &m_globalData->heap.allocatorForObjectWithDestructor(sizeof(ClassType));
+    else
+        allocator = &m_globalData->heap.allocatorForObjectWithoutDestructor(sizeof(ClassType));
    loadPtr(&allocator->m_firstFreeCell, result);
    addSlowCase(branchTestPtr(Zero, result));
@@ -428,12 +432,12 @@ template <typename ClassType, typename StructureType> inline void JIT::emitAlloc
template <typename T> inline void JIT::emitAllocateJSFinalObject(T structure, RegisterID result, RegisterID scratch)
{
-    emitAllocateBasicJSObject<JSFinalObject>(structure, result, scratch);
+    emitAllocateBasicJSObject<JSFinalObject, false, T>(structure, result, scratch);
}
inline void JIT::emitAllocateJSFunction(FunctionExecutable* executable, RegisterID scopeChain, RegisterID result, RegisterID storagePtr)
{
-    emitAllocateBasicJSObject<JSFunction>(TrustedImmPtr(m_codeBlock->globalObject()->namedFunctionStructure()), result, storagePtr);
+    emitAllocateBasicJSObject<JSFunction, true>(TrustedImmPtr(m_codeBlock->globalObject()->namedFunctionStructure()), result, storagePtr);
    // store the function's scope chain
    storePtr(scopeChain, Address(result, JSFunction::offsetOfScopeChain()));
@@ -42,6 +42,7 @@ using namespace WTF;
namespace JSC {
ASSERT_CLASS_FITS_IN_CELL(JSArray);
+ASSERT_HAS_TRIVIAL_DESTRUCTOR(JSArray);
// Overview of JSArray
//
@@ -135,23 +135,14 @@ namespace JSC {
        static void finalize(JSCell*);
-        static JSArray* create(JSGlobalData& globalData, Structure* structure, unsigned initialLength = 0)
-        {
-            JSArray* array = new (NotNull, allocateCell<JSArray>(globalData.heap)) JSArray(globalData, structure);
-            array->finishCreation(globalData, initialLength);
-            return array;
-        }
+        static JSArray* create(JSGlobalData&, Structure*, unsigned initialLength = 0);
        // tryCreateUninitialized is used for fast construction of arrays whose size and
        // contents are known at time of creation. Clients of this interface must:
        // - null-check the result (indicating out of memory, or otherwise unable to allocate vector).
        // - call 'initializeIndex' for all properties in sequence, for 0 <= i < initialLength.
        // - called 'completeInitialization' after all properties have been initialized.
-        static JSArray* tryCreateUninitialized(JSGlobalData& globalData, Structure* structure, unsigned initialLength)
-        {
-            JSArray* array = new (NotNull, allocateCell<JSArray>(globalData.heap)) JSArray(globalData, structure);
-            return array->tryFinishCreationUninitialized(globalData, initialLength);
-        }
+        static JSArray* tryCreateUninitialized(JSGlobalData&, Structure*, unsigned initialLength);
        JS_EXPORT_PRIVATE static bool defineOwnProperty(JSObject*, ExecState*, const Identifier&, PropertyDescriptor&, bool throwException);
@@ -299,6 +290,19 @@ namespace JSC {
        void* m_subclassData; // A JSArray subclass can use this to fill the vector lazily.
    };
+    inline JSArray* JSArray::create(JSGlobalData& globalData, Structure* structure, unsigned initialLength)
+    {
+        JSArray* array = new (NotNull, allocateCell<JSArray>(globalData.heap)) JSArray(globalData, structure);
+        array->finishCreation(globalData, initialLength);
+        return array;
+    }
+
+    inline JSArray* JSArray::tryCreateUninitialized(JSGlobalData& globalData, Structure* structure, unsigned initialLength)
+    {
+        JSArray* array = new (NotNull, allocateCell<JSArray>(globalData.heap)) JSArray(globalData, structure);
+        return array->tryFinishCreationUninitialized(globalData, initialLength);
+    }
+
    JSArray* asArray(JSValue);
    inline JSArray* asArray(JSCell* cell)
@@ -61,6 +61,7 @@ namespace JSC {
    class JSCell {
        friend class JSValue;
        friend class MarkedBlock;
+        template<typename T> friend void* allocateCell(Heap&);
    public:
        enum CreatingEarlyCellTag { CreatingEarlyCell };
@@ -307,14 +308,34 @@ namespace JSC {
        return isCell() ? asCell()->toObject(exec, globalObject) : toObjectSlowCase(exec, globalObject);
    }
-    template <typename T> void* allocateCell(Heap& heap)
+#if COMPILER(CLANG)
+    template<class T>
+    struct NeedsDestructor {
+        static const bool value = !__has_trivial_destructor(T);
+    };
+#else
+    // Write manual specializations for this struct template if you care about non-clang compilers.
+    template<class T>
+    struct NeedsDestructor {
+        static const bool value = true;
+    };
+#endif
+
+    template<typename T>
+    void* allocateCell(Heap& heap)
    {
#if ENABLE(GC_VALIDATION)
        ASSERT(sizeof(T) == T::s_info.cellSize);
        ASSERT(!heap.globalData()->isInitializingObject());
        heap.globalData()->setInitializingObject(true);
#endif
-        JSCell* result = static_cast<JSCell*>(heap.allocate(sizeof(T)));
+        JSCell* result = 0;
+        if (NeedsDestructor<T>::value)
+            result = static_cast<JSCell*>(heap.allocateWithDestructor(sizeof(T)));
+        else {
+            ASSERT(T::s_info.methodTable.destroy == JSCell::destroy);
+            result = static_cast<JSCell*>(heap.allocateWithoutDestructor(sizeof(T)));
+        }
        result->clearStructure();
        return result;
    }
@@ -50,6 +50,7 @@ EncodedJSValue JSC_HOST_CALL callHostFunctionAsConstructor(ExecState* exec)
}
ASSERT_CLASS_FITS_IN_CELL(JSFunction);
+ASSERT_HAS_TRIVIAL_DESTRUCTOR(JSFunction);
const ClassInfo JSFunction::s_info = { "Function", &Base::s_info, 0, 0, CREATE_METHOD_TABLE(JSFunction) };
@@ -108,13 +109,6 @@ void JSFunction::finishCreation(ExecState* exec, FunctionExecutable* executable,