Commit 196dc9ae authored by mhahnenberg@apple.com

Marking should be generational

https://bugs.webkit.org/show_bug.cgi?id=126552

Reviewed by Geoffrey Garen.

Source/JavaScriptCore: 

Re-marking the same objects over and over is wasted effort. This patch implements
the sticky mark bit algorithm (along with our already-present write barriers) to reduce
the garbage collection overhead caused by rescanning objects.

There are now two collection modes, EdenCollection and FullCollection. EdenCollections
visit only new objects and objects that were added to the remembered set by a write barrier.
FullCollections are normal collections that visit all objects regardless of their
generation.

In this patch, EdenCollections do no work in CopiedSpace. This will be fixed in
https://bugs.webkit.org/show_bug.cgi?id=126555.
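
As a rough intuition for the algorithm, here is a small self-contained model of
sticky marking plus a remembered set (illustrative names only, not JSC's classes;
JSC's real implementation lives in the Heap/MarkedBlock changes below):

    #include <unordered_set>
    #include <vector>

    struct Cell {
        bool marked = false;              // sticky: EdenCollections never clear this
        std::vector<Cell*> children;
    };

    enum class CollectionType { Eden, Full };

    struct MiniHeap {
        std::vector<Cell*> allCells;               // every allocated cell
        std::unordered_set<Cell*> rememberedSet;   // old cells that may point at new cells

        // Write barrier: when a marked (old) cell is mutated to point at an
        // unmarked (new) cell, remember the old cell so the next EdenCollection
        // rescans it.
        void writeBarrier(Cell* from, Cell* to)
        {
            if (from->marked && to && !to->marked)
                rememberedSet.insert(from);
        }

        void collect(CollectionType type, const std::vector<Cell*>& roots)
        {
            if (type == CollectionType::Full) {
                for (Cell* cell : allCells)
                    cell->marked = false;          // only FullCollections reset marks
            }
            std::vector<Cell*> worklist(roots.begin(), roots.end());
            worklist.insert(worklist.end(), rememberedSet.begin(), rememberedSet.end());
            rememberedSet.clear();
            while (!worklist.empty()) {
                Cell* cell = worklist.back();
                worklist.pop_back();
                cell->marked = true;
                for (Cell* child : cell->children) {
                    if (!child->marked) {          // already-marked cells are not revisited
                        child->marked = true;
                        worklist.push_back(child);
                    }
                }
            }
            // Sweep (elided): anything still unmarked is unreachable.
        }
    };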

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitAggregate):
* bytecode/CodeBlock.h:
(JSC::CodeBlockSet::mark):
* dfg/DFGOperations.cpp:
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::add):
(JSC::CodeBlockSet::traceMarked):
(JSC::CodeBlockSet::rememberCurrentlyExecutingCodeBlocks):
* heap/CodeBlockSet.h:
* heap/CopiedBlockInlines.h:
(JSC::CopiedBlock::reportLiveBytes):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::didStartFullCollection):
* heap/CopiedSpace.h:
(JSC::CopiedSpace::heap):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::didAbandon):
(JSC::Heap::markRoots):
(JSC::Heap::copyBackingStores):
(JSC::Heap::addToRememberedSet):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::didAllocate):
(JSC::Heap::writeBarrier):
* heap/Heap.h:
(JSC::Heap::isInRememberedSet):
(JSC::Heap::operationInProgress):
(JSC::Heap::shouldCollect):
(JSC::Heap::isCollecting):
(JSC::Heap::isWriteBarrierEnabled):
(JSC::Heap::writeBarrier):
* heap/HeapOperation.h:
* heap/MarkStack.cpp:
(JSC::MarkStackArray::~MarkStackArray):
(JSC::MarkStackArray::clear):
(JSC::MarkStackArray::fillVector):
* heap/MarkStack.h:
* heap/MarkedAllocator.cpp:
(JSC::isListPagedOut):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::tryAllocateHelper):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::reset):
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::MarkedAllocator):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::clearRememberedSet):
(JSC::MarkedBlock::clearMarksWithCollectionType):
(JSC::MarkedBlock::lastChanceToFinalize):
* heap/MarkedBlock.h: Changed atomSize to 16 bytes because we have no objects smaller
than 16 bytes. This also pays for the additional Bitmap used for the remembered set.
(JSC::MarkedBlock::didConsumeEmptyFreeList):
(JSC::MarkedBlock::setRemembered):
(JSC::MarkedBlock::clearRemembered):
(JSC::MarkedBlock::atomicClearRemembered):
(JSC::MarkedBlock::isRemembered):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::visitWeakSets):
(JSC::MarkedSpace::reapWeakSets):
(JSC::VerifyMarked::operator()):
(JSC::MarkedSpace::clearMarks):
* heap/MarkedSpace.h:
(JSC::ClearMarks::operator()):
(JSC::ClearRememberedSet::operator()):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::clearRememberedSet):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::~SlotVisitor):
(JSC::SlotVisitor::clearMarkStack):
* heap/SlotVisitor.h:
(JSC::SlotVisitor::markStack):
(JSC::SlotVisitor::sharedData):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::internalAppend):
(JSC::SlotVisitor::unconditionallyAppend):
(JSC::SlotVisitor::copyLater):
(JSC::SlotVisitor::reportExtraMemoryUsage):
(JSC::SlotVisitor::heap):
* jit/Repatch.cpp:
* runtime/JSGenericTypedArrayViewInlines.h:
(JSC::JSGenericTypedArrayView<Adaptor>::visitChildren):
* runtime/JSPropertyNameIterator.h:
(JSC::StructureRareData::setEnumerationCache):
* runtime/JSString.cpp:
(JSC::JSString::visitChildren):
* runtime/StructureRareDataInlines.h:
(JSC::StructureRareData::setPreviousID):
(JSC::StructureRareData::setObjectToStringValue):
* runtime/WeakMapData.cpp:
(JSC::WeakMapData::visitChildren):

Source/WTF: 

* wtf/Bitmap.h:
(WTF::Bitmap<size, WordType>::count): Added a cast that became necessary when Bitmap
is used with word types smaller than int32_t.
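
The cast is presumably needed to disambiguate: WTF::bitCount() has overloads for
unsigned and uint64_t, and a word type like uint8_t converts equally well to either.
A sketch of the resulting count() (the exact WTF code may differ):

    size_t count(size_t start = 0) const
    {
        size_t result = 0;
        for (size_t i = start / wordSize; i < words; ++i)
            result += WTF::bitCount(static_cast<unsigned>(bits[i]));
        return result;
    }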


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@161540 268f45cc-cd09-0410-ab3c-d52691b4dbfc
Source/JavaScriptCore/bytecode/CodeBlock.cpp:
@@ -1954,15 +1954,15 @@ void CodeBlock::visitAggregate(SlotVisitor& visitor)
     if (CodeBlock* otherBlock = specialOSREntryBlockOrNull())
         otherBlock->visitAggregate(visitor);
 
-    visitor.reportExtraMemoryUsage(sizeof(CodeBlock));
+    visitor.reportExtraMemoryUsage(ownerExecutable(), sizeof(CodeBlock));
     if (m_jitCode)
-        visitor.reportExtraMemoryUsage(m_jitCode->size());
+        visitor.reportExtraMemoryUsage(ownerExecutable(), m_jitCode->size());
     if (m_instructions.size()) {
         // Divide by refCount() because m_instructions points to something that is shared
         // by multiple CodeBlocks, and we only want to count it towards the heap size once.
         // Having each CodeBlock report only its proportional share of the size is one way
         // of accomplishing this.
-        visitor.reportExtraMemoryUsage(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
+        visitor.reportExtraMemoryUsage(ownerExecutable(), m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
     }
 
     visitor.append(&m_unlinkedCode);
Source/JavaScriptCore/bytecode/CodeBlock.h:
@@ -1269,6 +1269,7 @@ inline void CodeBlockSet::mark(void* candidateCodeBlock)
         return;
 
     (*iter)->m_mayBeExecuting = true;
+    m_currentlyExecuting.append(static_cast<CodeBlock*>(candidateCodeBlock));
 }
 
 } // namespace JSC
Source/JavaScriptCore/dfg/DFGOperations.cpp:
@@ -850,6 +850,7 @@ char* JIT_OPERATION operationReallocateButterflyToHavePropertyStorageWithInitial
     NativeCallFrameTracer tracer(&vm, exec);
 
     ASSERT(!object->structure()->outOfLineCapacity());
+    DeferGC deferGC(vm.heap);
     Butterfly* result = object->growOutOfLineStorage(vm, 0, initialOutOfLineCapacity);
     object->setButterflyWithoutChangingStructure(vm, result);
     return reinterpret_cast<char*>(result);
@@ -860,6 +861,7 @@ char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*
     VM& vm = exec->vm();
     NativeCallFrameTracer tracer(&vm, exec);
+    DeferGC deferGC(vm.heap);
     Butterfly* result = object->growOutOfLineStorage(vm, object->structure()->outOfLineCapacity(), newSize);
     object->setButterflyWithoutChangingStructure(vm, result);
     return reinterpret_cast<char*>(result);
Source/JavaScriptCore/heap/CodeBlockSet.cpp:
@@ -45,7 +45,8 @@ CodeBlockSet::~CodeBlockSet()
 void CodeBlockSet::add(PassRefPtr<CodeBlock> codeBlock)
 {
-    bool isNewEntry = m_set.add(codeBlock.leakRef()).isNewEntry;
+    CodeBlock* block = codeBlock.leakRef();
+    bool isNewEntry = m_set.add(block).isNewEntry;
     ASSERT_UNUSED(isNewEntry, isNewEntry);
 }
@@ -101,9 +102,16 @@ void CodeBlockSet::traceMarked(SlotVisitor& visitor)
         CodeBlock* codeBlock = *iter;
         if (!codeBlock->m_mayBeExecuting)
             continue;
-        codeBlock->visitAggregate(visitor);
+        codeBlock->ownerExecutable()->visitChildren(codeBlock->ownerExecutable(), visitor);
     }
 }
 
+void CodeBlockSet::rememberCurrentlyExecutingCodeBlocks(Heap* heap)
+{
+    for (size_t i = 0; i < m_currentlyExecuting.size(); ++i)
+        heap->addToRememberedSet(m_currentlyExecuting[i]->ownerExecutable());
+    m_currentlyExecuting.clear();
+}
+
 } // namespace JSC
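
Why this hook exists: during an EdenCollection, a CodeBlock that is on the stack
is kept alive via m_mayBeExecuting without being fully re-traced, so its owning
executable is pushed into the remembered set to guarantee a rescan. The call site
is in the collapsed Heap.cpp diff; presumably something like:

    // Hypothetical call site inside Heap's collection logic:
    m_codeBlocks.rememberCurrentlyExecutingCodeBlocks(this);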
Source/JavaScriptCore/heap/CodeBlockSet.h:
@@ -30,10 +30,12 @@
 #include <wtf/Noncopyable.h>
 #include <wtf/PassRefPtr.h>
 #include <wtf/RefPtr.h>
+#include <wtf/Vector.h>
 
 namespace JSC {
 
 class CodeBlock;
+class Heap;
 class SlotVisitor;
 
 // CodeBlockSet tracks all CodeBlocks. Every CodeBlock starts out with one
@@ -65,11 +67,16 @@ public:
     // mayBeExecuting.
     void traceMarked(SlotVisitor&);
 
+    // Add all currently executing CodeBlocks to the remembered set to be
+    // re-scanned during the next collection.
+    void rememberCurrentlyExecutingCodeBlocks(Heap*);
+
 private:
     // This is not a set of RefPtr<CodeBlock> because we need to be able to find
     // arbitrary bogus pointers. I could have written a thingy that had peek types
     // and all, but that seemed like overkill.
     HashSet<CodeBlock*> m_set;
+    Vector<CodeBlock*> m_currentlyExecuting;
 };
 
 } // namespace JSC
Source/JavaScriptCore/heap/CopiedBlockInlines.h:
@@ -42,6 +42,9 @@ inline void CopiedBlock::reportLiveBytes(JSCell* owner, CopyToken token, unsigne
 #endif
     m_liveBytes += bytes;
 
+    if (isPinned())
+        return;
+
     if (!shouldEvacuate()) {
         pin();
         return;
Source/JavaScriptCore/heap/CopiedSpace.cpp:
@@ -316,4 +316,17 @@ bool CopiedSpace::isPagedOut(double deadline)
         || isBlockListPagedOut(deadline, &m_oversizeBlocks);
 }
 
+void CopiedSpace::didStartFullCollection()
+{
+    ASSERT(heap()->operationInProgress() == FullCollection);
+
+    ASSERT(m_fromSpace->isEmpty());
+
+    for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
+        block->didSurviveGC();
+
+    for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
+        block->didSurviveGC();
+}
+
 } // namespace JSC
Source/JavaScriptCore/heap/CopiedSpace.h:
@@ -60,6 +60,8 @@ public:
     CopiedAllocator& allocator() { return m_allocator; }
 
+    void didStartFullCollection();
+
     void startedCopying();
     void doneCopying();
     bool isInCopyPhase() { return m_inCopyingPhase; }
@@ -80,6 +82,8 @@ public:
     static CopiedBlock* blockFor(void*);
 
+    Heap* heap() const { return m_heap; }
+
 private:
     static bool isOversize(size_t);
[diff for Source/JavaScriptCore/heap/Heap.cpp collapsed in the original view]
Source/JavaScriptCore/heap/Heap.h:
@@ -94,11 +94,17 @@ namespace JSC {
         static bool testAndSetMarked(const void*);
         static void setMarked(const void*);
 
+        JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*);
+        bool isInRememberedSet(const JSCell* cell) const
+        {
+            ASSERT(cell);
+            ASSERT(!Options::enableConcurrentJIT() || !isCompilationThread());
+            return MarkedBlock::blockFor(cell)->isRemembered(cell);
+        }
         static bool isWriteBarrierEnabled();
-        static void writeBarrier(const JSCell*);
+        JS_EXPORT_PRIVATE static void writeBarrier(const JSCell*);
         static void writeBarrier(const JSCell*, JSValue);
         static void writeBarrier(const JSCell*, JSCell*);
-        static uint8_t* addressOfCardFor(JSCell*);
 
         WriteBarrierBuffer& writeBarrierBuffer() { return m_writeBarrierBuffer; }
         void flushWriteBarrierBuffer(JSCell*);
@@ -120,6 +126,7 @@ namespace JSC {
         // true if collection is in progress
         inline bool isCollecting();
+        inline HeapOperation operationInProgress() { return m_operationInProgress; }
         // true if an allocation or collection is in progress
         inline bool isBusy();
@@ -236,6 +243,7 @@ namespace JSC {
         void markRoots();
         void markProtectedObjects(HeapRootVisitor&);
         void markTempSortVectors(HeapRootVisitor&);
+        template <HeapOperation collectionType>
         void copyBackingStores();
         void harvestWeakReferences();
         void finalizeUnconditionalFinalizers();
@@ -257,10 +265,11 @@ namespace JSC {
         const size_t m_minBytesPerCycle;
         size_t m_sizeAfterLastCollect;
 
-        size_t m_bytesAllocatedLimit;
-        size_t m_bytesAllocated;
-        size_t m_bytesAbandoned;
+        size_t m_bytesAllocatedThisCycle;
+        size_t m_bytesAbandonedThisCycle;
+        size_t m_maxEdenSize;
+        size_t m_maxHeapSize;
+        bool m_shouldDoFullCollection;
 
         size_t m_totalBytesVisited;
         size_t m_totalBytesCopied;
@@ -271,6 +280,8 @@ namespace JSC {
         GCIncomingRefCountedSet<ArrayBuffer> m_arrayBuffers;
         size_t m_extraMemoryUsage;
 
+        HashSet<const JSCell*> m_copyingRememberedSet;
+
         ProtectCountSet m_protectedValues;
         Vector<Vector<ValueStringPair, 0, UnsafeVectorOverflow>*> m_tempSortingVectors;
         OwnPtr<HashSet<MarkedArgumentBuffer*>> m_markListSet;
@@ -322,8 +333,8 @@ namespace JSC {
         if (isDeferred())
            return false;
        if (Options::gcMaxHeapSize())
-            return m_bytesAllocated > Options::gcMaxHeapSize() && m_isSafeToCollect && m_operationInProgress == NoOperation;
-        return m_bytesAllocated > m_bytesAllocatedLimit && m_isSafeToCollect && m_operationInProgress == NoOperation;
+            return m_bytesAllocatedThisCycle > Options::gcMaxHeapSize() && m_isSafeToCollect && m_operationInProgress == NoOperation;
+        return m_bytesAllocatedThisCycle > m_maxEdenSize && m_isSafeToCollect && m_operationInProgress == NoOperation;
     }
 
     bool Heap::isBusy()
@@ -333,7 +344,7 @@ namespace JSC {
     bool Heap::isCollecting()
     {
-        return m_operationInProgress == Collection;
+        return m_operationInProgress == FullCollection || m_operationInProgress == EdenCollection;
     }
 
     inline Heap* Heap::heap(const JSCell* cell)
@@ -370,26 +381,33 @@ namespace JSC {
     inline bool Heap::isWriteBarrierEnabled()
     {
-#if ENABLE(WRITE_BARRIER_PROFILING)
+#if ENABLE(WRITE_BARRIER_PROFILING) || ENABLE(GGC)
         return true;
 #else
         return false;
 #endif
     }
 
-    inline void Heap::writeBarrier(const JSCell*)
-    {
-        WriteBarrierCounters::countWriteBarrier();
-    }
-
-    inline void Heap::writeBarrier(const JSCell*, JSCell*)
+    inline void Heap::writeBarrier(const JSCell* from, JSCell* to)
     {
+#if ENABLE(WRITE_BARRIER_PROFILING)
         WriteBarrierCounters::countWriteBarrier();
+#endif
+        if (!from || !isMarked(from))
+            return;
+        if (!to || isMarked(to))
+            return;
+        Heap::heap(from)->addToRememberedSet(from);
     }
 
-    inline void Heap::writeBarrier(const JSCell*, JSValue)
+    inline void Heap::writeBarrier(const JSCell* from, JSValue to)
     {
+#if ENABLE(WRITE_BARRIER_PROFILING)
         WriteBarrierCounters::countWriteBarrier();
+#endif
+        if (!to.isCell())
+            return;
+        writeBarrier(from, to.asCell());
     }
 
     inline void Heap::reportExtraMemoryCost(size_t cost)
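
The invariant the new barrier maintains: a marked (old) cell may only point at an
unmarked (new) cell if the old cell is in the remembered set. A contrived example
of when it fires (hypothetical variable names):

    // oldCell survived a previous collection, so it is marked; newCell was
    // allocated after that collection, so it is not.
    Heap::writeBarrier(oldCell, newCell);
    // isMarked(oldCell) && !isMarked(newCell), so the barrier calls
    // Heap::heap(oldCell)->addToRememberedSet(oldCell); the next EdenCollection
    // rescans oldCell and discovers newCell through it.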
Source/JavaScriptCore/heap/HeapOperation.h:
@@ -28,7 +28,7 @@
 
 namespace JSC {
 
-enum HeapOperation { NoOperation, Allocation, Collection };
+enum HeapOperation { NoOperation, Allocation, FullCollection, EdenCollection };
 
 } // namespace JSC
Source/JavaScriptCore/heap/MarkStack.cpp:
@@ -57,8 +57,29 @@ MarkStackArray::MarkStackArray(BlockAllocator& blockAllocator)
 MarkStackArray::~MarkStackArray()
 {
-    ASSERT(m_numberOfSegments == 1 && m_segments.size() == 1);
+    ASSERT(m_numberOfSegments == 1);
+    ASSERT(m_segments.size() == 1);
     m_blockAllocator.deallocate(MarkStackSegment::destroy(m_segments.removeHead()));
+    m_numberOfSegments--;
+    ASSERT(!m_numberOfSegments);
+    ASSERT(!m_segments.size());
 }
+
+void MarkStackArray::clear()
+{
+    if (!m_segments.head())
+        return;
+    MarkStackSegment* next;
+    for (MarkStackSegment* current = m_segments.head(); current->next(); current = next) {
+        next = current->next();
+        m_segments.remove(current);
+        m_blockAllocator.deallocate(MarkStackSegment::destroy(current));
+    }
+    m_top = 0;
+    m_numberOfSegments = 1;
+#if !ASSERT_DISABLED
+    m_segments.head()->m_top = 0;
+#endif
+}
 
 void MarkStackArray::expand()
@@ -167,4 +188,28 @@ void MarkStackArray::stealSomeCellsFrom(MarkStackArray& other, size_t idleThread
         append(other.removeLast());
 }
 
+void MarkStackArray::fillVector(Vector<const JSCell*>& vector)
+{
+    ASSERT(vector.size() == size());
+
+    MarkStackSegment* currentSegment = m_segments.head();
+    if (!currentSegment)
+        return;
+
+    unsigned count = 0;
+    for (unsigned i = 0; i < m_top; ++i) {
+        ASSERT(currentSegment->data()[i]);
+        vector[count++] = currentSegment->data()[i];
+    }
+
+    currentSegment = currentSegment->next();
+    while (currentSegment) {
+        for (unsigned i = 0; i < s_segmentCapacity; ++i) {
+            ASSERT(currentSegment->data()[i]);
+            vector[count++] = currentSegment->data()[i];
+        }
+        currentSegment = currentSegment->next();
+    }
+}
+
 } // namespace JSC
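
fillVector copies every cell on the stack into a caller-presized vector without
popping anything; per the ASSERT, callers are expected to do roughly:

    Vector<const JSCell*> cells;
    cells.resize(markStack.size()); // fillVector asserts the sizes match
    markStack.fillVector(cells);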
Source/JavaScriptCore/heap/MarkStack.h:
@@ -52,6 +52,7 @@
 
 #include "HeapBlock.h"
 #include <wtf/StdLibExtras.h>
+#include <wtf/Vector.h>
 
 namespace JSC {
@@ -100,6 +101,9 @@ public:
     size_t size();
     bool isEmpty();
 
+    void fillVector(Vector<const JSCell*>&);
+    void clear();
+
 private:
     template <size_t size> struct CapacityFromSize {
         static const size_t value = (size - sizeof(MarkStackSegment)) / sizeof(const JSCell*);
Source/JavaScriptCore/heap/MarkedAllocator.cpp:
@@ -10,10 +10,10 @@
 namespace JSC {
 
-bool MarkedAllocator::isPagedOut(double deadline)
+static bool isListPagedOut(double deadline, DoublyLinkedList<MarkedBlock>& list)
 {
     unsigned itersSinceLastTimeCheck = 0;
-    MarkedBlock* block = m_blockList.head();
+    MarkedBlock* block = list.head();
     while (block) {
         block = block->next();
         ++itersSinceLastTimeCheck;
@@ -24,7 +24,13 @@ bool MarkedAllocator::isPagedOut(double deadline)
             itersSinceLastTimeCheck = 0;
         }
     }
+    return false;
+}
 
+bool MarkedAllocator::isPagedOut(double deadline)
+{
+    if (isListPagedOut(deadline, m_blockList))
+        return true;
     return false;
 }
@@ -36,15 +42,23 @@ inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
     while (!m_freeList.head) {
         DelayedReleaseScope delayedReleaseScope(*m_markedSpace);
         if (m_currentBlock) {
-            ASSERT(m_currentBlock == m_blocksToSweep);
+            ASSERT(m_currentBlock == m_nextBlockToSweep);
             m_currentBlock->didConsumeFreeList();
-            m_blocksToSweep = m_currentBlock->next();
+            m_nextBlockToSweep = m_currentBlock->next();
         }
 
-        for (MarkedBlock*& block = m_blocksToSweep; block; block = block->next()) {
+        MarkedBlock* next;
+        for (MarkedBlock*& block = m_nextBlockToSweep; block; block = next) {
+            next = block->next();
             MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
             if (!freeList.head) {
                 block->didConsumeEmptyFreeList();
+                m_blockList.remove(block);
+                m_blockList.push(block);
+                if (!m_lastFullBlock)
+                    m_lastFullBlock = block;
                 continue;
             }
@@ -68,6 +82,7 @@ inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
     MarkedBlock::FreeCell* head = m_freeList.head;
     m_freeList.head = head->next;
     ASSERT(head);
+    m_markedSpace->didAllocateInBlock(m_currentBlock);
     return head;
 }
@@ -136,7 +151,7 @@ void MarkedAllocator::addBlock(MarkedBlock* block)
     ASSERT(!m_freeList.head);
 
     m_blockList.append(block);
-    m_blocksToSweep = m_currentBlock = block;
+    m_nextBlockToSweep = m_currentBlock = block;
     m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
     m_markedSpace->didAddBlock(block);
 }
@@ -147,9 +162,27 @@ void MarkedAllocator::removeBlock(MarkedBlock* block)
         m_currentBlock = m_currentBlock->next();
         m_freeList = MarkedBlock::FreeList();
     }
-    if (m_blocksToSweep == block)
-        m_blocksToSweep = m_blocksToSweep->next();
+    if (m_nextBlockToSweep == block)
+        m_nextBlockToSweep = m_nextBlockToSweep->next();
+
+    if (block == m_lastFullBlock)
+        m_lastFullBlock = m_lastFullBlock->prev();
+
     m_blockList.remove(block);
 }
+
+void MarkedAllocator::reset()
+{
+    m_lastActiveBlock = 0;
+    m_currentBlock = 0;
+    m_freeList = MarkedBlock::FreeList();
+
+    if (m_heap->operationInProgress() == FullCollection)
+        m_lastFullBlock = 0;
+
+    if (m_lastFullBlock)
+        m_nextBlockToSweep = m_lastFullBlock->next() ? m_lastFullBlock->next() : m_lastFullBlock;
+    else
+        m_nextBlockToSweep = m_blockList.head();
+}
 
 } // namespace JSC
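
The effect of the new fields, as the hunks above show: blocks that yield an empty
free list are moved to the head of m_blockList, and m_lastFullBlock tracks the end
of that full prefix, so reset() can start the next cycle's sweep just past blocks
that cannot produce free cells (a FullCollection clears m_lastFullBlock, restarting
from the head):

    // m_blockList after some sweeping (sketch):
    //
    //   head -> [full] [full] ... [full] [sweepable] [sweepable] ...
    //                               ^         ^
    //                      m_lastFullBlock    m_nextBlockToSweep after reset()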
Source/JavaScriptCore/heap/MarkedAllocator.h:
@@ -52,7 +52,8 @@ private:
     MarkedBlock::FreeList m_freeList;
     MarkedBlock* m_currentBlock;
     MarkedBlock* m_lastActiveBlock;
-    MarkedBlock* m_blocksToSweep;
+    MarkedBlock* m_nextBlockToSweep;
+    MarkedBlock* m_lastFullBlock;
     DoublyLinkedList<MarkedBlock> m_blockList;
     size_t m_cellSize;
     MarkedBlock::DestructorType m_destructorType;
@@ -68,7 +69,8 @@ inline ptrdiff_t MarkedAllocator::offsetOfFreeListHead()
 inline MarkedAllocator::MarkedAllocator()
     : m_currentBlock(0)
     , m_lastActiveBlock(0)
-    , m_blocksToSweep(0)
+    , m_nextBlockToSweep(0)
+    , m_lastFullBlock(0)
     , m_cellSize(0)
     , m_destructorType(MarkedBlock::None)
     , m_heap(0)
@@ -102,14 +104,6 @@ inline void* MarkedAllocator::allocate(size_t bytes)
     return head;
 }
 
-inline void MarkedAllocator::reset()
-{
-    m_lastActiveBlock = 0;
-    m_currentBlock = 0;
-    m_freeList = MarkedBlock::FreeList();
-    m_blocksToSweep = m_blockList.head();
-}
-
 inline void MarkedAllocator::stopAllocating()
 {
     ASSERT(!m_lastActiveBlock);
Source/JavaScriptCore/heap/MarkedBlock.cpp:
@@ -197,6 +197,45 @@ void MarkedBlock::stopAllocating(const FreeList& freeList)
     m_state = Marked;
 }
 
+void MarkedBlock::clearMarks()
+{
+    if (heap()->operationInProgress() == JSC::EdenCollection)
+        this->clearMarksWithCollectionType<EdenCollection>();
+    else
+        this->clearMarksWithCollectionType<FullCollection>();
+}
+
+void MarkedBlock::clearRememberedSet()
+{
+    m_rememberedSet.clearAll();
+}
+
+template <HeapOperation collectionType>
+void MarkedBlock::clearMarksWithCollectionType()
+{
+    ASSERT(collectionType == FullCollection || collectionType == EdenCollection);
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+    ASSERT(m_state != New && m_state != FreeListed);
+    if (collectionType == FullCollection) {
+        m_marks.clearAll();
+        m_rememberedSet.clearAll();
+    }
+
+    // This will become true at the end of the mark phase. We set it now to
+    // avoid an extra pass to do so later.
+    m_state = Marked;
+}
+
+void MarkedBlock::lastChanceToFinalize()
+{
+    m_weakSet.lastChanceToFinalize();
+    clearNewlyAllocated();
+    clearMarksWithCollectionType<FullCollection>();
+    sweep();
+}
+
 MarkedBlock::FreeList MarkedBlock::resumeAllocating()
 {
     HEAP_LOG_BLOCK_STATE_TRANSITION(this);
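
In summary, the template encodes the sticky-mark-bit policy per block:

    // clearMarksWithCollectionType<FullCollection>: clears m_marks and
    //     m_rememberedSet; every object must be proven live again.
    // clearMarksWithCollectionType<EdenCollection>: clears neither bitmap;
    //     marks are sticky, so cells marked in a prior cycle stay live
    //     without being rescanned.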
Source/JavaScriptCore/heap/MarkedBlock.h:
@@ -25,6 +25,7 @@
 
 #include "BlockAllocator.h"
 #include "HeapBlock.h"
+#include "HeapOperation.h"
 #include "WeakSet.h"
 #include <wtf/Bitmap.h>
 #include <wtf/DataLog.h>
@@ -72,7 +73,7 @@ namespace JSC {