Commit 8b5cfd3b authored Apr 19, 2012 by mhahnenberg@apple.com

We're collecting pathologically due to small allocations

https://bugs.webkit.org/show_bug.cgi?id=84404

Reviewed by Geoffrey Garen.

No change in performance on run-jsc-benchmarks.

* dfg/DFGSpeculativeJIT.h: Replacing m_firstFreeCell with m_freeList.
(JSC::DFG::SpeculativeJIT::emitAllocateBasicJSObject):
* heap/CopiedSpace.cpp: Getting rid of any water mark related stuff, since it's no 
longer useful. 
(JSC::CopiedSpace::CopiedSpace):
(JSC::CopiedSpace::tryAllocateSlowCase): We now only call didAllocate here rather than 
carrying out a somewhat complicated accounting job for our old water mark throughout CopiedSpace.
(JSC::CopiedSpace::tryAllocateOversize):  Call the new didAllocate to notify the Heap of 
newly allocated stuff.
(JSC::CopiedSpace::tryReallocateOversize):
(JSC::CopiedSpace::doneFillingBlock):
(JSC::CopiedSpace::doneCopying):
(JSC::CopiedSpace::destroy):
* heap/CopiedSpace.h:
(CopiedSpace):
* heap/CopiedSpaceInlineMethods.h:
(JSC::CopiedSpace::startedCopying):
* heap/Heap.cpp: Removed water mark related stuff, replaced with new bytesAllocated and 
bytesAllocatedLimit to track how much memory has been allocated since the last collection.
(JSC::Heap::Heap):
(JSC::Heap::reportExtraMemoryCostSlowCase):
(JSC::Heap::collect): We now set the limit on bytes that can be allocated before the next 
collection to the size of the Heap after the current collection, which preserves the old 
2x growth policy (a condensed sketch of this scheme appears just before the diff below).
(JSC::Heap::didAllocate): Notifies the GC activity timer of how many bytes have been allocated 
so far, then adds the newly reported bytes to the running total.
(JSC):
* heap/Heap.h: Removed water mark related stuff.
(JSC::Heap::notifyIsSafeToCollect):
(Heap):
(JSC::Heap::shouldCollect):
(JSC):
* heap/MarkedAllocator.cpp: 
(JSC::MarkedAllocator::tryAllocateHelper): Refactored to use MarkedBlock's new FreeList struct.
(JSC::MarkedAllocator::allocateSlowCase):
(JSC::MarkedAllocator::addBlock):
* heap/MarkedAllocator.h: 
(MarkedAllocator):
(JSC::MarkedAllocator::MarkedAllocator):
(JSC::MarkedAllocator::allocate): 
(JSC::MarkedAllocator::zapFreeList): Refactored to take in a FreeList instead of a FreeCell.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::specializedSweep):
(JSC::MarkedBlock::sweep):
(JSC::MarkedBlock::sweepHelper):
(JSC::MarkedBlock::zapFreeList):
* heap/MarkedBlock.h:
(FreeList): Added a new struct that tracks the current MarkedAllocator's free list, 
including how many bytes of cells it holds, so that when the free list is exhausted the 
correct amount can be reported to the Heap (see the standalone sketch at the end of the diff).
(MarkedBlock):
(JSC::MarkedBlock::FreeList::FreeList):
(JSC):
* heap/MarkedSpace.cpp: Removing all water mark related stuff.
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
* heap/MarkedSpace.h:
(MarkedSpace):
(JSC):
* heap/WeakSet.cpp:
(JSC::WeakSet::findAllocator): Refactored to use the Heap's didAllocate interface. This 
function still needs work, though, now that the Heap knows how many bytes have been allocated 
since the last collection.
* jit/JITInlineMethods.h: Refactored to use MarkedBlock's new FreeList struct.
(JSC::JIT::emitAllocateBasicJSObject): Ditto.
* llint/LowLevelInterpreter.asm: Ditto.
* runtime/GCActivityCallback.cpp: 
(JSC::DefaultGCActivityCallback::didAllocate): 
* runtime/GCActivityCallback.h:
(JSC::GCActivityCallback::didAllocate): Renamed willAllocate to didAllocate to indicate that 
the allocation that is being reported has already taken place.
(DefaultGCActivityCallback):
* runtime/GCActivityCallbackCF.cpp:
(JSC):
(JSC::DefaultGCActivityCallback::didAllocate): Refactored to return early if the amount of 
allocation since the last collection has not exceeded a threshold (initially an arbitrary 
128KB). 


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@114698 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 7dd63568
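
The following is a minimal, self-contained sketch (not the WebKit classes themselves) of the
accounting scheme the ChangeLog above describes: Heap::didAllocate feeds the GC activity
callback and a running byte count, Heap::shouldCollect compares that count against a limit,
and Heap::collect resets the limit to the post-collection heap size. The "Sketch" class
names, the simulated survival rate, and all sizes are invented for illustration; the 128KB
activity-callback threshold is the initial value mentioned above.

    // Minimal sketch of the post-collection byte-accounting scheme.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    class GCActivityCallbackSketch {
    public:
        void didAllocate(size_t bytesAllocatedSoFar)
        {
            // Mirrors DefaultGCActivityCallback::didAllocate: do nothing until a
            // threshold of allocation has accumulated since the last collection.
            static const size_t threshold = 128 * 1024; // 128KB, arbitrary initial value
            if (bytesAllocatedSoFar < threshold)
                return;
            std::printf("activity callback scheduled (%zu bytes since last GC)\n",
                        bytesAllocatedSoFar);
        }
    };

    class HeapSketch {
    public:
        explicit HeapSketch(size_t minBytesPerCycle)
            : m_bytesAllocatedLimit(minBytesPerCycle)
            , m_bytesAllocated(0)
            , m_minBytesPerCycle(minBytesPerCycle)
            , m_liveBytes(0)
        {
        }

        // Like Heap::didAllocate: tell the activity timer how much has been allocated
        // since the last collection, then fold the new bytes into the running total.
        void didAllocate(size_t bytes)
        {
            m_activityCallback.didAllocate(m_bytesAllocated);
            m_bytesAllocated += bytes;
        }

        // Like Heap::shouldCollect: collect once allocation since the last collection
        // exceeds the limit chosen at the end of that collection.
        bool shouldCollect() const { return m_bytesAllocated > m_bytesAllocatedLimit; }

        // Like Heap::collect: the next limit is the heap's size after this collection,
        // so total allocation still roughly doubles the heap before the next GC.
        void collect()
        {
            m_liveBytes = m_liveBytes / 2; // pretend half of the heap died
            m_bytesAllocatedLimit = std::max(m_liveBytes, m_minBytesPerCycle);
            m_bytesAllocated = 0;
        }

        void allocate(size_t bytes)
        {
            m_liveBytes += bytes;
            didAllocate(bytes);
            if (shouldCollect())
                collect();
        }

    private:
        GCActivityCallbackSketch m_activityCallback;
        size_t m_bytesAllocatedLimit;
        size_t m_bytesAllocated;
        size_t m_minBytesPerCycle;
        size_t m_liveBytes;
    };

    int main()
    {
        HeapSketch heap(512 * 1024); // 512KB minimum cycle, illustrative only
        for (int i = 0; i < 64; ++i)
            heap.allocate(64 * 1024); // many small allocations no longer force a GC each time
        return 0;
    }
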

Index: dfg/DFGSpeculativeJIT.h
@@ -1794,7 +1794,7 @@ private:
else
allocator = &m_jit.globalData()->heap.allocatorForObjectWithoutDestructor(sizeof(ClassType));
- m_jit.loadPtr(&allocator->m_firstFreeCell, resultGPR);
+ m_jit.loadPtr(&allocator->m_freeList.head, resultGPR);
slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, resultGPR));
// The object is half-allocated: we have what we know is a fresh object, but
@@ -1806,7 +1806,7 @@ private:
// Now that we have scratchGPR back, remove the object from the free list
m_jit.loadPtr(MacroAssembler::Address(resultGPR), scratchGPR);
- m_jit.storePtr(scratchGPR, &allocator->m_firstFreeCell);
+ m_jit.storePtr(scratchGPR, &allocator->m_freeList.head);
// Initialize the object's classInfo pointer
m_jit.storePtr(MacroAssembler::TrustedImmPtr(&ClassType::s_info), MacroAssembler::Address(resultGPR, JSCell::classInfoOffset()));

Index: heap/CopiedSpace.cpp
@@ -37,7 +37,6 @@ CopiedSpace::CopiedSpace(Heap* heap)
, m_fromSpace(0)
, m_inCopyingPhase(false)
, m_numberOfLoanedBlocks(0)
- , m_waterMark(0)
{
}
@@ -52,12 +51,10 @@ void CopiedSpace::init()
CheckedBoolean CopiedSpace::tryAllocateSlowCase(size_t bytes, void** outPtr)
{
- m_heap->activityCallback()->willAllocate();
if (isOversize(bytes))
return tryAllocateOversize(bytes, outPtr);
- m_waterMark += m_allocator.currentCapacity();
+ m_heap->didAllocate(m_allocator.currentCapacity());
if (!addNewBlock()) {
*outPtr = 0;
@@ -86,7 +83,7 @@ CheckedBoolean CopiedSpace::tryAllocateOversize(size_t bytes, void** outPtr)
*outPtr = allocateFromBlock(block, bytes);
- m_waterMark += block->capacity();
+ m_heap->didAllocate(blockSize);
return true;
}
@@ -138,7 +135,6 @@ CheckedBoolean CopiedSpace::tryReallocateOversize(void** ptr, size_t oldSize, si
if (isOversize(oldSize)) {
CopiedBlock* oldBlock = oversizeBlockFor(oldPtr);
m_oversizeBlocks.remove(oldBlock);
- m_waterMark -= oldBlock->capacity();
oldBlock->m_allocation.deallocate();
}
@@ -164,11 +160,6 @@ void CopiedSpace::doneFillingBlock(CopiedBlock* block)
m_toSpaceFilter.add(reinterpret_cast<Bits>(block));
}
- {
- MutexLocker locker(m_memoryStatsLock);
- m_waterMark += block->capacity();
- }
{
MutexLocker locker(m_loanedBlocksLock);
ASSERT(m_numberOfLoanedBlocks > 0);
@@ -193,7 +184,6 @@ void CopiedSpace::doneCopying()
if (block->m_isPinned) {
block->m_isPinned = false;
m_toSpace->push(block);
- m_waterMark += block->capacity();
continue;
}
@@ -211,10 +201,8 @@ void CopiedSpace::doneCopying()
if (!curr->m_isPinned) {
m_oversizeBlocks.remove(curr);
curr->m_allocation.deallocate();
- } else {
+ } else
curr->m_isPinned = false;
- m_waterMark += curr->capacity();
- }
curr = next;
}
@@ -281,8 +269,6 @@ void CopiedSpace::destroy()
CopiedBlock* block = static_cast<CopiedBlock*>(m_oversizeBlocks.removeHead());
block->m_allocation.deallocate();
}
- m_waterMark = 0;
}
size_t CopiedSpace::size()

Index: heap/CopiedSpace.h
@@ -66,7 +66,6 @@ public:
bool contains(void*, CopiedBlock*&);
- size_t waterMark() { return m_waterMark; }
size_t size();
size_t capacity();
@@ -92,8 +91,6 @@ private:
static bool fitsInBlock(CopiedBlock*, size_t);
static CopiedBlock* oversizeBlockFor(void* ptr);
- size_t calculateWaterMark();
Heap* m_heap;
CopiedAllocator m_allocator;
@@ -117,9 +114,6 @@ private:
ThreadCondition m_loanedBlocksCondition;
size_t m_numberOfLoanedBlocks;
- Mutex m_memoryStatsLock;
- size_t m_waterMark;
static const size_t s_maxAllocationSize = 32 * KB;
static const size_t s_initialBlockNum = 16;
static const size_t s_blockMask = ~(HeapBlock::s_blockSize - 1);

Index: heap/CopiedSpaceInlineMethods.h
@@ -56,8 +56,6 @@ inline void CopiedSpace::startedCopying()
m_toSpaceFilter.reset();
m_allocator.startedCopying();
- m_waterMark = 0;
ASSERT(!m_inCopyingPhase);
ASSERT(!m_numberOfLoanedBlocks);
m_inCopyingPhase = true;

Index: heap/Heap.cpp
@@ -313,7 +313,8 @@ Heap::Heap(JSGlobalData* globalData, HeapSize heapSize)
: m_heapSize(heapSize)
, m_minBytesPerCycle(heapSizeForHint(heapSize))
, m_lastFullGCSize(0)
- , m_highWaterMark(m_minBytesPerCycle)
+ , m_bytesAllocatedLimit(m_minBytesPerCycle)
+ , m_bytesAllocated(0)
, m_operationInProgress(NoOperation)
, m_objectSpace(this)
, m_storageSpace(this)
@@ -465,7 +466,9 @@ void Heap::reportExtraMemoryCostSlowCase(size_t cost)
// if a large value survives one garbage collection, there is not much point to
// collecting more frequently as long as it stays alive.
- addToWaterMark(cost);
+ didAllocate(cost);
+ if (shouldCollect())
+ collect(DoNotSweep);
}
void Heap::protect(JSValue k)
@@ -848,16 +851,17 @@ void Heap::collect(SweepToggle sweepToggle)
shrink();
}
- // To avoid pathological GC churn in large heaps, we set the allocation high
- // water mark to be proportional to the current size of the heap. The exact
- // proportion is a bit arbitrary. A 2X multiplier gives a 1:1 (heap size :
+ // To avoid pathological GC churn in large heaps, we set the new allocation
+ // limit to be the current size of the heap. This heuristic
+ // is a bit arbitrary. Using the current size of the heap after this
+ // collection gives us a 2X multiplier, which is a 1:1 (heap size :
// new bytes allocated) proportion, and seems to work well in benchmarks.
size_t newSize = size();
- size_t proportionalBytes = 2 * newSize;
if (fullGC) {
m_lastFullGCSize = newSize;
- m_highWaterMark = max(proportionalBytes, m_minBytesPerCycle);
+ m_bytesAllocatedLimit = max(newSize, m_minBytesPerCycle);
}
+ m_bytesAllocated = 0;
double lastGCEndTime = WTF::currentTime();
m_lastGCLength = lastGCEndTime - lastGCStartTime;
JAVASCRIPTCORE_GC_END();
@@ -886,6 +890,12 @@ GCActivityCallback* Heap::activityCallback()
return m_activityCallback.get();
}
+ void Heap::didAllocate(size_t bytes)
+ {
+ m_activityCallback->didAllocate(m_bytesAllocated);
+ m_bytesAllocated += bytes;
+ }
bool Heap::isValidAllocation(size_t bytes)
{
if (!isValidThreadState(m_globalData))

Index: heap/Heap.h
@@ -112,8 +112,11 @@ namespace JSC {
void removeFunctionExecutable(FunctionExecutable*);
void notifyIsSafeToCollect() { m_isSafeToCollect = true; }
JS_EXPORT_PRIVATE void collectAllGarbage();
+ enum SweepToggle { DoNotSweep, DoSweep };
+ bool shouldCollect();
+ void collect(SweepToggle);
void reportExtraMemoryCost(size_t cost);
JS_EXPORT_PRIVATE void protect(JSValue);
@@ -144,12 +147,12 @@ namespace JSC {
void getConservativeRegisterRoots(HashSet<JSCell*>& roots);
- void addToWaterMark(size_t);
double lastGCLength() { return m_lastGCLength; }
JS_EXPORT_PRIVATE void discardAllCompiledCode();
+ void didAllocate(size_t);
private:
friend class CodeBlock;
friend class LLIntOffsetsExtractor;
@@ -163,10 +166,6 @@ namespace JSC {
void* allocateWithDestructor(size_t);
void* allocateWithoutDestructor(size_t);
- size_t waterMark();
- size_t highWaterMark();
- bool shouldCollect();
static const size_t minExtraCost = 256;
static const size_t maxExtraCost = 1024 * 1024;
@@ -192,8 +191,6 @@ namespace JSC {
void harvestWeakReferences();
void finalizeUnconditionalFinalizers();
- enum SweepToggle { DoNotSweep, DoSweep };
- void collect(SweepToggle);
void shrink();
void releaseFreeBlocks();
void sweep();
@@ -208,7 +205,9 @@ namespace JSC {
const HeapSize m_heapSize;
const size_t m_minBytesPerCycle;
size_t m_lastFullGCSize;
- size_t m_highWaterMark;
+ size_t m_bytesAllocatedLimit;
+ size_t m_bytesAllocated;
OperationInProgress m_operationInProgress;
MarkedSpace m_objectSpace;
@@ -257,7 +256,7 @@ namespace JSC {
#if ENABLE(GGC)
return m_objectSpace.nurseryWaterMark() >= m_minBytesPerCycle && m_isSafeToCollect;
#else
- return waterMark() >= highWaterMark() && m_isSafeToCollect;
+ return m_bytesAllocated > m_bytesAllocatedLimit && m_isSafeToCollect;
#endif
}
@@ -293,23 +292,6 @@ namespace JSC {
MarkedBlock::blockFor(cell)->setMarked(cell);
}
- inline size_t Heap::waterMark()
- {
- return m_objectSpace.waterMark() + m_storageSpace.waterMark();
- }
- inline size_t Heap::highWaterMark()
- {
- return m_highWaterMark;
- }
- inline void Heap::addToWaterMark(size_t size)
- {
- m_objectSpace.addToWaterMark(size);
- if (waterMark() > highWaterMark())
- collect(DoNotSweep);
- }
#if ENABLE(GGC)
inline uint8_t* Heap::addressOfCardFor(JSCell* cell)
{

Index: heap/MarkedAllocator.cpp
@@ -8,23 +8,22 @@ namespace JSC {
inline void* MarkedAllocator::tryAllocateHelper()
{
- MarkedBlock::FreeCell* firstFreeCell = m_firstFreeCell;
- if (!firstFreeCell) {
+ if (!m_freeList.head) {
for (MarkedBlock*& block = m_currentBlock; block; block = static_cast<MarkedBlock*>(block->next())) {
- firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
- if (firstFreeCell)
+ m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
+ if (m_freeList.head)
break;
- m_markedSpace->didConsumeFreeList(block);
+ block->didConsumeFreeList();
}
- if (!firstFreeCell)
+ if (!m_freeList.head)
return 0;
}
- ASSERT(firstFreeCell);
- m_firstFreeCell = firstFreeCell->next;
- return firstFreeCell;
+ MarkedBlock::FreeCell* head = m_freeList.head;
+ m_freeList.head = head->next;
+ ASSERT(head);
+ return head;
}
inline void* MarkedAllocator::tryAllocate()
@@ -42,7 +41,8 @@ void* MarkedAllocator::allocateSlowCase()
ASSERT(m_heap->m_operationInProgress == NoOperation);
#endif
- m_heap->activityCallback()->willAllocate();
+ ASSERT(!m_freeList.head);
+ m_heap->didAllocate(m_freeList.bytes);
void* result = tryAllocate();
@@ -71,7 +71,7 @@ void* MarkedAllocator::allocateSlowCase()
if (result)
return result;
- ASSERT(m_heap->waterMark() < m_heap->highWaterMark());
+ ASSERT(!m_heap->shouldCollect());
addBlock(allocateBlock(AllocationMustSucceed));
@@ -108,11 +108,11 @@ MarkedBlock* MarkedAllocator::allocateBlock(AllocationEffort allocationEffort)
void MarkedAllocator::addBlock(MarkedBlock* block)
{
ASSERT(!m_currentBlock);
- ASSERT(!m_firstFreeCell);
+ ASSERT(!m_freeList.head);
m_blockList.append(block);
m_currentBlock = block;
- m_firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
+ m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
}
void MarkedAllocator::removeBlock(MarkedBlock* block)

Index: heap/MarkedAllocator.h
@@ -41,7 +41,7 @@ private:
void* tryAllocateHelper();
MarkedBlock* allocateBlock(AllocationEffort);
- MarkedBlock::FreeCell* m_firstFreeCell;
+ MarkedBlock::FreeList m_freeList;
MarkedBlock* m_currentBlock;
DoublyLinkedList<HeapBlock> m_blockList;
size_t m_cellSize;
@@ -51,8 +51,7 @@ private:
};
inline MarkedAllocator::MarkedAllocator()
- : m_firstFreeCell(0)
- , m_currentBlock(0)
+ : m_currentBlock(0)
, m_cellSize(0)
, m_cellsNeedDestruction(true)
, m_heap(0)
@@ -70,13 +69,13 @@ inline void MarkedAllocator::init(Heap* heap, MarkedSpace* markedSpace, size_t c
inline void* MarkedAllocator::allocate()
{
- MarkedBlock::FreeCell* firstFreeCell = m_firstFreeCell;
+ MarkedBlock::FreeCell* head = m_freeList.head;
// This is a light-weight fast path to cover the most common case.
- if (UNLIKELY(!firstFreeCell))
+ if (UNLIKELY(!head))
return allocateSlowCase();
- m_firstFreeCell = firstFreeCell->next;
- return firstFreeCell;
+ m_freeList.head = head->next;
+ return head;
}
inline void MarkedAllocator::reset()
@@ -87,12 +86,12 @@ inline void MarkedAllocator::reset()
inline void MarkedAllocator::zapFreeList()
{
if (!m_currentBlock) {
- ASSERT(!m_firstFreeCell);
+ ASSERT(!m_freeList.head);
return;
}
- m_currentBlock->zapFreeList(m_firstFreeCell);
- m_firstFreeCell = 0;
+ m_currentBlock->zapFreeList(m_freeList);
+ m_freeList.head = 0;
}
template <typename Functor> inline void MarkedAllocator::forEachBlock(Functor& functor)

Index: heap/MarkedBlock.cpp
@@ -77,7 +77,7 @@ inline void MarkedBlock::callDestructor(JSCell* cell)
}
template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode, bool destructorCallNeeded>
- MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
+ MarkedBlock::FreeList MarkedBlock::specializedSweep()
{
ASSERT(blockState != Allocated && blockState != FreeListed);
ASSERT(destructorCallNeeded || sweepMode != SweepOnly);
@@ -86,6 +86,7 @@ MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
// This is fine, since the allocation code makes no assumptions about the
// order of the free list.
FreeCell* head = 0;
+ size_t count = 0;
for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
if (blockState == Marked && m_marks.get(i))
continue;
@@ -101,19 +102,20 @@ MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
FreeCell* freeCell = reinterpret_cast<FreeCell*>(cell);
freeCell->next = head;
head = freeCell;
+ ++count;
}
}
m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
- return head;
+ return FreeList(head, count * cellSize());
}
- MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
+ MarkedBlock::FreeList MarkedBlock::sweep(SweepMode sweepMode)
{
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
if (sweepMode == SweepOnly && !m_cellsNeedDestruction)
- return 0;
+ return FreeList();
if (m_cellsNeedDestruction)
return sweepHelper<true>(sweepMode);
@@ -121,7 +123,7 @@ MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
}
template<bool destructorCallNeeded>
- MarkedBlock::FreeCell* MarkedBlock::sweepHelper(SweepMode sweepMode)
+ MarkedBlock::FreeList MarkedBlock::sweepHelper(SweepMode sweepMode)
{
switch (m_state) {
case New:
@@ -130,10 +132,10 @@ MarkedBlock::FreeCell* MarkedBlock::sweepHelper(SweepMode sweepMode)
case FreeListed:
// Happens when a block transitions to fully allocated.
ASSERT(sweepMode == SweepToFreeList);
- return 0;
+ return FreeList();
case Allocated:
ASSERT_NOT_REACHED();
- return 0;
+ return FreeList();
case Marked:
return sweepMode == SweepToFreeList
? specializedSweep<Marked, SweepToFreeList, destructorCallNeeded>()
@@ -145,12 +147,13 @@ MarkedBlock::FreeCell* MarkedBlock::sweepHelper(SweepMode sweepMode)
}
ASSERT_NOT_REACHED();
- return 0;
+ return FreeList();
}
- void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
+ void MarkedBlock::zapFreeList(const FreeList& freeList)
{
HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+ FreeCell* head = freeList.head;
if (m_state == Marked) {
// If the block is in the Marked state then we know that:
@@ -159,7 +162,7 @@ void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
// fact that their mark bits are unset.
// Hence if the block is Marked we need to leave it Marked.
- ASSERT(!firstFreeCell);
+ ASSERT(!head);
return;
}
@@ -176,7 +179,7 @@ void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
// dead objects will have 0 in their vtables and live objects will have
// non-zero vtables, which is consistent with the block being zapped.
- ASSERT(!firstFreeCell);
+ ASSERT(!head);
return;
}
@@ -188,7 +191,7 @@ void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
// way to tell what's live vs dead. We use zapping for that.
FreeCell* next;
- for (FreeCell* current = firstFreeCell; current; current = next) {
+ for (FreeCell* current = head; current; current = next) {
next = current->next;
reinterpret_cast<JSCell*>(current)->zap();
}

Index: heap/MarkedBlock.h
@@ -87,6 +87,14 @@ namespace JSC {
FreeCell* next;
};
+ struct FreeList {
+ FreeCell* head;
+ size_t bytes;
+ FreeList();
+ FreeList(FreeCell*, size_t);
+ };
struct VoidFunctor {
typedef void ReturnType;
void returnValue() { }
@@ -105,13 +113,13 @@ namespace JSC {
void* allocate();
enum SweepMode { SweepOnly, SweepToFreeList };
- FreeCell* sweep(SweepMode = SweepOnly);
+ FreeList sweep(SweepMode = SweepOnly);
// While allocating from a free list, MarkedBlock temporarily has bogus
// cell liveness data. To restore accurate cell liveness data, call one
// of these functions:
void didConsumeFreeList(); // Call this once you've allocated all the items in the free list.
- void zapFreeList(FreeCell* firstFreeCell); // Call this to undo the free list.
+ void zapFreeList(const FreeList&); // Call this to undo the free list.
void clearMarks();
size_t markCount();
@@ -163,7 +171,7 @@ namespace JSC {
static const size_t atomAlignmentMask = atomSize - 1; // atomSize must be a power of two.
enum BlockState { New, FreeListed, Allocated, Marked, Zapped };
- template<bool destructorCallNeeded> FreeCell* sweepHelper(SweepMode = SweepOnly);
+ template<bool destructorCallNeeded> FreeList sweepHelper(SweepMode = SweepOnly);
typedef char Atom[atomSize];
@@ -171,7 +179,7 @@ namespace JSC {
Atom* atoms();
size_t atomNumber(const void*);
void callDestructor(JSCell*);
- template<BlockState, SweepMode, bool destructorCallNeeded> FreeCell* specializedSweep();
+ template<BlockState, SweepMode, bool destructorCallNeeded> FreeList specializedSweep();
#if ENABLE(GGC)
CardSet<bytesPerCard, blockSize> m_cards;
@@ -189,6 +197,18 @@ namespace JSC {
Heap* m_heap;
};
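
For completeness, here is a minimal sketch of the FreeList idea from the MarkedBlock and
MarkedAllocator changes above: a per-block free list that also remembers how many bytes it
represents, so the allocator can hand that figure to Heap::didAllocate once the list is
exhausted. The AllocatorSketch class, its cell sizes, and the printf standing in for
Heap::didAllocate are illustrative only, not the WebKit implementation.

    // Minimal sketch of a byte-counting free list feeding allocation accounting.
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct FreeCell {
        FreeCell* next;
    };

    struct FreeList {
        FreeCell* head;
        size_t bytes;
        FreeList() : head(0), bytes(0) { }
        FreeList(FreeCell* h, size_t b) : head(h), bytes(b) { }
    };

    class AllocatorSketch {
    public:
        AllocatorSketch(size_t cellSize, size_t cellCount)
            : m_storage(cellSize * cellCount)
        {
            // Thread a free list through the raw storage, as a sweep would.
            FreeCell* head = 0;
            for (size_t i = 0; i < cellCount; ++i) {
                FreeCell* cell = reinterpret_cast<FreeCell*>(&m_storage[i * cellSize]);
                cell->next = head;
                head = cell;
            }
            m_freeList = FreeList(head, cellSize * cellCount);
        }

        void* allocate()
        {
            FreeCell* head = m_freeList.head;
            if (!head) {
                // Slow path: the free list is spent; report its byte total to the heap.
                std::printf("didAllocate(%zu)\n", m_freeList.bytes);
                return 0; // a real allocator would sweep or add a new block here
            }
            m_freeList.head = head->next;
            return head;
        }

    private:
        std::vector<char> m_storage;
        FreeList m_freeList;
    };

    int main()
    {
        AllocatorSketch allocator(64, 16); // 64-byte cells, 16 cells: illustrative only
        while (allocator.allocate()) { }   // exhaust the free list, triggering the report
        return 0;
    }
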