Commit 3ddd7ac6 authored by mhahnenberg@apple.com

Marking should be generational

https://bugs.webkit.org/show_bug.cgi?id=126552

Reviewed by Geoffrey Garen.

Source/JavaScriptCore: 

Re-marking the same objects over and over is a waste of effort. This patch implements 
the sticky mark bit algorithm (along with our already-present write barriers) to reduce 
overhead during garbage collection caused by rescanning objects.

There are now two collection modes, EdenCollection and FullCollection. EdenCollections
only visit new objects or objects that were added to the remembered set by a write barrier.
FullCollections are normal collections that visit all objects regardless of their 
generation.

In this patch EdenCollections do not do anything in CopiedSpace. This will be fixed in 
https://bugs.webkit.org/show_bug.cgi?id=126555.
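The scheme described above can be sketched independently of the JSC classes. The names below (MiniHeap, Cell, writeBarrier, edenCollect) are illustrative models, not the actual JSC API: mark bits are "sticky" across eden collections, and the write barrier records any already-marked (old) object that gains a reference to a new object so the next eden collection re-scans it.

```cpp
#include <cassert>
#include <unordered_set>
#include <vector>

// Illustrative model of sticky-mark-bit generational marking (not JSC code).
struct Cell {
    bool marked = false;          // sticky across eden collections
    std::vector<Cell*> children;  // outgoing references
};

struct MiniHeap {
    std::vector<Cell*> roots;
    std::unordered_set<Cell*> rememberedSet;

    // Write barrier: when an old (marked) cell is mutated to point at an
    // unmarked cell, remember it so an eden collection revisits its children.
    void writeBarrier(Cell* owner, Cell* child)
    {
        owner->children.push_back(child);
        if (owner->marked && !child->marked)
            rememberedSet.insert(owner);
    }

    void mark(Cell* cell)
    {
        if (cell->marked)
            return; // sticky bit: old objects are not re-traversed
        cell->marked = true;
        for (Cell* child : cell->children)
            mark(child);
    }

    // EdenCollection: visit only remembered (old, mutated) cells and new
    // objects reachable from roots; marked roots are skipped cheaply.
    void edenCollect()
    {
        for (Cell* owner : rememberedSet)
            for (Cell* child : owner->children)
                mark(child);
        rememberedSet.clear();
        for (Cell* root : roots)
            mark(root);
    }
};
```

An old object that survived a previous cycle keeps its mark bit; only the write barrier forces it back into consideration, which is what makes re-marking cheap.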

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitAggregate):
* bytecode/CodeBlock.h:
(JSC::CodeBlockSet::mark):
* dfg/DFGOperations.cpp:
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::add):
(JSC::CodeBlockSet::traceMarked):
(JSC::CodeBlockSet::rememberCurrentlyExecutingCodeBlocks):
* heap/CodeBlockSet.h:
* heap/CopiedBlockInlines.h:
(JSC::CopiedBlock::reportLiveBytes):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::didStartFullCollection):
* heap/CopiedSpace.h:
(JSC::CopiedSpace::heap):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::didAbandon):
(JSC::Heap::markRoots):
(JSC::Heap::copyBackingStores):
(JSC::Heap::addToRememberedSet):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::didAllocate):
(JSC::Heap::writeBarrier):
* heap/Heap.h:
(JSC::Heap::isInRememberedSet):
(JSC::Heap::operationInProgress):
(JSC::Heap::shouldCollect):
(JSC::Heap::isCollecting):
(JSC::Heap::isWriteBarrierEnabled):
(JSC::Heap::writeBarrier):
* heap/HeapOperation.h:
* heap/MarkStack.cpp:
(JSC::MarkStackArray::~MarkStackArray):
(JSC::MarkStackArray::clear):
(JSC::MarkStackArray::fillVector):
* heap/MarkStack.h:
* heap/MarkedAllocator.cpp:
(JSC::isListPagedOut):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::tryAllocateHelper):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::reset):
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::MarkedAllocator):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::clearRememberedSet):
(JSC::MarkedBlock::clearMarksWithCollectionType):
(JSC::MarkedBlock::lastChanceToFinalize):
* heap/MarkedBlock.h: Changed atomSize to 16 bytes because we have no objects smaller
than 16 bytes. This also pays for the additional Bitmap for the remembered set.
(JSC::MarkedBlock::didConsumeEmptyFreeList):
(JSC::MarkedBlock::setRemembered):
(JSC::MarkedBlock::clearRemembered):
(JSC::MarkedBlock::atomicClearRemembered):
(JSC::MarkedBlock::isRemembered):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::visitWeakSets):
(JSC::MarkedSpace::reapWeakSets):
(JSC::VerifyMarked::operator()):
(JSC::MarkedSpace::clearMarks):
* heap/MarkedSpace.h:
(JSC::ClearMarks::operator()):
(JSC::ClearRememberedSet::operator()):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::clearRememberedSet):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::~SlotVisitor):
(JSC::SlotVisitor::clearMarkStack):
* heap/SlotVisitor.h:
(JSC::SlotVisitor::markStack):
(JSC::SlotVisitor::sharedData):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::internalAppend):
(JSC::SlotVisitor::unconditionallyAppend):
(JSC::SlotVisitor::copyLater):
(JSC::SlotVisitor::reportExtraMemoryUsage):
(JSC::SlotVisitor::heap):
* jit/Repatch.cpp:
* runtime/JSGenericTypedArrayViewInlines.h:
(JSC::JSGenericTypedArrayView<Adaptor>::visitChildren):
* runtime/JSPropertyNameIterator.h:
(JSC::StructureRareData::setEnumerationCache):
* runtime/JSString.cpp:
(JSC::JSString::visitChildren):
* runtime/StructureRareDataInlines.h:
(JSC::StructureRareData::setPreviousID):
(JSC::StructureRareData::setObjectToStringValue):
* runtime/WeakMapData.cpp:
(JSC::WeakMapData::visitChildren):

Source/WTF: 

* wtf/Bitmap.h:
(WTF::WordType>::count): Added a cast that became necessary when Bitmap
is used with smaller types than int32_t.
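The cast issue can be illustrated with a hypothetical bit-count over narrow words (this is a sketch, not the actual WTF::Bitmap code): when the word type is narrower than int, integer promotion applies during arithmetic, so an explicit cast back to an unsigned 32-bit type is needed to match a 32-bit popcount helper.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Kernighan popcount over a 32-bit value; clears the lowest set bit each loop.
static unsigned popcount32(uint32_t v)
{
    unsigned n = 0;
    while (v) {
        v &= v - 1;
        ++n;
    }
    return n;
}

// Hypothetical Bitmap::count analogue with 8-bit storage words. bits[i]
// (uint8_t) would be promoted to int by the usual arithmetic conversions;
// the static_cast mirrors the kind of cast the ChangeLog describes.
template <size_t numWords>
unsigned countBits(const uint8_t (&bits)[numWords])
{
    unsigned result = 0;
    for (size_t i = 0; i < numWords; ++i)
        result += popcount32(static_cast<uint32_t>(bits[i]));
    return result;
}
```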


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@161615 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent e3271440
2014-01-09  Joseph Pecoraro  <pecoraro@apple.com>

        Unreviewed Windows build fix for r161563.
...
bytecode/CodeBlock.cpp:
@@ -1954,15 +1954,15 @@ void CodeBlock::visitAggregate(SlotVisitor& visitor)
     if (CodeBlock* otherBlock = specialOSREntryBlockOrNull())
         otherBlock->visitAggregate(visitor);
-    visitor.reportExtraMemoryUsage(sizeof(CodeBlock));
+    visitor.reportExtraMemoryUsage(ownerExecutable(), sizeof(CodeBlock));
     if (m_jitCode)
-        visitor.reportExtraMemoryUsage(m_jitCode->size());
+        visitor.reportExtraMemoryUsage(ownerExecutable(), m_jitCode->size());
     if (m_instructions.size()) {
         // Divide by refCount() because m_instructions points to something that is shared
         // by multiple CodeBlocks, and we only want to count it towards the heap size once.
         // Having each CodeBlock report only its proportional share of the size is one way
         // of accomplishing this.
-        visitor.reportExtraMemoryUsage(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
+        visitor.reportExtraMemoryUsage(ownerExecutable(), m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
     }
     visitor.append(&m_unlinkedCode);
...
bytecode/CodeBlock.h:
@@ -1269,6 +1269,9 @@ inline void CodeBlockSet::mark(void* candidateCodeBlock)
         return;
     (*iter)->m_mayBeExecuting = true;
+#if ENABLE(GGC)
+    m_currentlyExecuting.append(static_cast<CodeBlock*>(candidateCodeBlock));
+#endif
 }
 } // namespace JSC
...
dfg/DFGOperations.cpp:
@@ -850,6 +850,7 @@ char* JIT_OPERATION operationReallocateButterflyToHavePropertyStorageWithInitial
     NativeCallFrameTracer tracer(&vm, exec);
     ASSERT(!object->structure()->outOfLineCapacity());
+    DeferGC deferGC(vm.heap);
     Butterfly* result = object->growOutOfLineStorage(vm, 0, initialOutOfLineCapacity);
     object->setButterflyWithoutChangingStructure(vm, result);
     return reinterpret_cast<char*>(result);
@@ -860,6 +861,7 @@ char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*
     VM& vm = exec->vm();
     NativeCallFrameTracer tracer(&vm, exec);
+    DeferGC deferGC(vm.heap);
     Butterfly* result = object->growOutOfLineStorage(vm, object->structure()->outOfLineCapacity(), newSize);
     object->setButterflyWithoutChangingStructure(vm, result);
     return reinterpret_cast<char*>(result);
...
heap/CodeBlockSet.cpp:
@@ -45,7 +45,8 @@ CodeBlockSet::~CodeBlockSet()
 void CodeBlockSet::add(PassRefPtr<CodeBlock> codeBlock)
 {
-    bool isNewEntry = m_set.add(codeBlock.leakRef()).isNewEntry;
+    CodeBlock* block = codeBlock.leakRef();
+    bool isNewEntry = m_set.add(block).isNewEntry;
     ASSERT_UNUSED(isNewEntry, isNewEntry);
 }
@@ -101,9 +102,20 @@ void CodeBlockSet::traceMarked(SlotVisitor& visitor)
         CodeBlock* codeBlock = *iter;
         if (!codeBlock->m_mayBeExecuting)
             continue;
-        codeBlock->visitAggregate(visitor);
+        codeBlock->ownerExecutable()->methodTable()->visitChildren(codeBlock->ownerExecutable(), visitor);
     }
 }
+
+void CodeBlockSet::rememberCurrentlyExecutingCodeBlocks(Heap* heap)
+{
+#if ENABLE(GGC)
+    for (size_t i = 0; i < m_currentlyExecuting.size(); ++i)
+        heap->addToRememberedSet(m_currentlyExecuting[i]->ownerExecutable());
+    m_currentlyExecuting.clear();
+#else
+    UNUSED_PARAM(heap);
+#endif // ENABLE(GGC)
+}
 } // namespace JSC
heap/CodeBlockSet.h:
@@ -30,10 +30,12 @@
 #include <wtf/Noncopyable.h>
 #include <wtf/PassRefPtr.h>
 #include <wtf/RefPtr.h>
+#include <wtf/Vector.h>
 namespace JSC {
 class CodeBlock;
+class Heap;
 class SlotVisitor;
 // CodeBlockSet tracks all CodeBlocks. Every CodeBlock starts out with one
@@ -65,11 +67,16 @@ public:
     // mayBeExecuting.
     void traceMarked(SlotVisitor&);
+    // Add all currently executing CodeBlocks to the remembered set to be
+    // re-scanned during the next collection.
+    void rememberCurrentlyExecutingCodeBlocks(Heap*);
 private:
     // This is not a set of RefPtr<CodeBlock> because we need to be able to find
     // arbitrary bogus pointers. I could have written a thingy that had peek types
     // and all, but that seemed like overkill.
     HashSet<CodeBlock* > m_set;
+    Vector<CodeBlock*> m_currentlyExecuting;
 };
 } // namespace JSC
...
heap/CopiedBlockInlines.h:
@@ -42,6 +42,9 @@ inline void CopiedBlock::reportLiveBytes(JSCell* owner, CopyToken token, unsigne
 #endif
     m_liveBytes += bytes;
+    if (isPinned())
+        return;
     if (!shouldEvacuate()) {
         pin();
         return;
...
heap/CopiedSpace.cpp:
@@ -316,4 +316,17 @@ bool CopiedSpace::isPagedOut(double deadline)
         || isBlockListPagedOut(deadline, &m_oversizeBlocks);
 }
+
+void CopiedSpace::didStartFullCollection()
+{
+    ASSERT(heap()->operationInProgress() == FullCollection);
+    ASSERT(m_fromSpace->isEmpty());
+
+    for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
+        block->didSurviveGC();
+
+    for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
+        block->didSurviveGC();
+}
 } // namespace JSC
heap/CopiedSpace.h:
@@ -60,6 +60,8 @@ public:
     CopiedAllocator& allocator() { return m_allocator; }
+    void didStartFullCollection();
     void startedCopying();
     void doneCopying();
     bool isInCopyPhase() { return m_inCopyingPhase; }
@@ -80,6 +82,8 @@ public:
     static CopiedBlock* blockFor(void*);
+    Heap* heap() const { return m_heap; }
 private:
     static bool isOversize(size_t);
...
heap/Heap.cpp:
@@ -253,9 +253,11 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_ramSize(ramSize())
     , m_minBytesPerCycle(minHeapSize(m_heapType, m_ramSize))
     , m_sizeAfterLastCollect(0)
-    , m_bytesAllocatedLimit(m_minBytesPerCycle)
-    , m_bytesAllocated(0)
-    , m_bytesAbandoned(0)
+    , m_bytesAllocatedThisCycle(0)
+    , m_bytesAbandonedThisCycle(0)
+    , m_maxEdenSize(m_minBytesPerCycle)
+    , m_maxHeapSize(m_minBytesPerCycle)
+    , m_shouldDoFullCollection(false)
     , m_totalBytesVisited(0)
     , m_totalBytesCopied(0)
     , m_operationInProgress(NoOperation)
@@ -269,7 +271,7 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_copyVisitor(m_sharedData)
     , m_handleSet(vm)
     , m_isSafeToCollect(false)
-    , m_writeBarrierBuffer(128)
+    , m_writeBarrierBuffer(256)
     , m_vm(vm)
     , m_lastGCLength(0)
     , m_lastCodeDiscardTime(WTF::monotonicallyIncreasingTime())
@@ -332,8 +334,8 @@ void Heap::reportAbandonedObjectGraph()
 void Heap::didAbandon(size_t bytes)
 {
     if (m_activityCallback)
-        m_activityCallback->didAllocate(m_bytesAllocated + m_bytesAbandoned);
-    m_bytesAbandoned += bytes;
+        m_activityCallback->didAllocate(m_bytesAllocatedThisCycle + m_bytesAbandonedThisCycle);
+    m_bytesAbandonedThisCycle += bytes;
 }
 void Heap::protect(JSValue k)
@@ -487,6 +489,9 @@ void Heap::markRoots()
     visitor.setup();
     HeapRootVisitor heapRootVisitor(visitor);
+    Vector<const JSCell*> rememberedSet(m_slotVisitor.markStack().size());
+    m_slotVisitor.markStack().fillVector(rememberedSet);
     {
         ParallelModeEnabler enabler(visitor);
@@ -590,6 +595,14 @@ void Heap::markRoots()
         }
     }
+    {
+        GCPHASE(ClearRememberedSet);
+        for (unsigned i = 0; i < rememberedSet.size(); ++i) {
+            const JSCell* cell = rememberedSet[i];
+            MarkedBlock::blockFor(cell)->clearRemembered(cell);
+        }
+    }
     GCCOUNTER(VisitedValueCount, visitor.visitCount());
     m_sharedData.didFinishMarking();
@@ -601,8 +614,14 @@ void Heap::markRoots()
     MARK_LOG_MESSAGE2("\nNumber of live Objects after full GC %lu, took %.6f secs\n", visitCount, WTF::monotonicallyIncreasingTime() - gcStartTime);
 #endif
-    m_totalBytesVisited = visitor.bytesVisited();
-    m_totalBytesCopied = visitor.bytesCopied();
+    if (m_operationInProgress == EdenCollection) {
+        m_totalBytesVisited += visitor.bytesVisited();
+        m_totalBytesCopied += visitor.bytesCopied();
+    } else {
+        ASSERT(m_operationInProgress == FullCollection);
+        m_totalBytesVisited = visitor.bytesVisited();
+        m_totalBytesCopied = visitor.bytesCopied();
+    }
 #if ENABLE(PARALLEL_GC)
     m_totalBytesVisited += m_sharedData.childBytesVisited();
     m_totalBytesCopied += m_sharedData.childBytesCopied();
@@ -615,8 +634,12 @@ void Heap::markRoots()
     m_sharedData.reset();
 }
+template <HeapOperation collectionType>
 void Heap::copyBackingStores()
 {
+    if (collectionType == EdenCollection)
+        return;
     m_storageSpace.startedCopying();
     if (m_storageSpace.shouldDoCopyPhase()) {
         m_sharedData.didStartCopying();
@@ -627,7 +650,7 @@ void Heap::copyBackingStores()
         // before signaling that the phase is complete.
         m_storageSpace.doneCopying();
         m_sharedData.didFinishCopying();
     } else
         m_storageSpace.doneCopying();
 }
@@ -723,11 +746,22 @@ void Heap::deleteUnmarkedCompiledCode()
     m_jitStubRoutines.deleteUnmarkedJettisonedStubRoutines();
 }
+
+void Heap::addToRememberedSet(const JSCell* cell)
+{
+    ASSERT(cell);
+    ASSERT(!Options::enableConcurrentJIT() || !isCompilationThread());
+    if (isInRememberedSet(cell))
+        return;
+    MarkedBlock::blockFor(cell)->setRemembered(cell);
+    m_slotVisitor.unconditionallyAppend(const_cast<JSCell*>(cell));
+}
 void Heap::collectAllGarbage()
 {
     if (!m_isSafeToCollect)
         return;
+    m_shouldDoFullCollection = true;
     collect();
     SamplingRegion samplingRegion("Garbage Collection: Sweeping");
@@ -764,9 +798,28 @@ void Heap::collect()
         RecursiveAllocationScope scope(*this);
         m_vm->prepareToDiscardCode();
     }
-    m_operationInProgress = Collection;
-    m_extraMemoryUsage = 0;
+    bool isFullCollection = m_shouldDoFullCollection;
+    if (isFullCollection) {
+        m_operationInProgress = FullCollection;
+        m_slotVisitor.clearMarkStack();
+        m_shouldDoFullCollection = false;
+        if (Options::logGC())
+            dataLog("FullCollection, ");
+    } else {
+#if ENABLE(GGC)
+        m_operationInProgress = EdenCollection;
+        if (Options::logGC())
+            dataLog("EdenCollection, ");
+#else
+        m_operationInProgress = FullCollection;
+        m_slotVisitor.clearMarkStack();
+        if (Options::logGC())
+            dataLog("FullCollection, ");
+#endif
+    }
+    if (m_operationInProgress == FullCollection)
+        m_extraMemoryUsage = 0;
     if (m_activityCallback)
         m_activityCallback->willCollect();
@@ -780,6 +833,16 @@ void Heap::collect()
     {
         GCPHASE(StopAllocation);
         m_objectSpace.stopAllocating();
+        if (m_operationInProgress == FullCollection)
+            m_storageSpace.didStartFullCollection();
+    }
+
+    {
+        GCPHASE(FlushWriteBarrierBuffer);
+        if (m_operationInProgress == EdenCollection)
+            m_writeBarrierBuffer.flush(*this);
+        else
+            m_writeBarrierBuffer.reset();
     }
     markRoots();
@@ -796,13 +859,16 @@ void Heap::collect()
         m_arrayBuffers.sweep();
     }
-    {
+    if (m_operationInProgress == FullCollection) {
         m_blockSnapshot.resize(m_objectSpace.blocks().set().size());
         MarkedBlockSnapshotFunctor functor(m_blockSnapshot);
         m_objectSpace.forEachBlock(functor);
     }
-    copyBackingStores();
+    if (m_operationInProgress == FullCollection)
+        copyBackingStores<FullCollection>();
+    else
+        copyBackingStores<EdenCollection>();
     {
         GCPHASE(FinalizeUnconditionalFinalizers);
@@ -819,8 +885,15 @@ void Heap::collect()
         m_vm->clearSourceProviderCaches();
     }
-    m_sweeper->startSweeping(m_blockSnapshot);
-    m_bytesAbandoned = 0;
+    if (m_operationInProgress == FullCollection)
+        m_sweeper->startSweeping(m_blockSnapshot);
+
+    {
+        GCPHASE(AddCurrentlyExecutingCodeBlocksToRememberedSet);
+        m_codeBlocks.rememberCurrentlyExecutingCodeBlocks(this);
+    }
+
+    m_bytesAbandonedThisCycle = 0;
     {
         GCPHASE(ResetAllocators);
@@ -831,21 +904,32 @@ void Heap::collect()
     if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
         HeapStatistics::exitWithFailure();
-    m_sizeAfterLastCollect = currentHeapSize;
-
-    // To avoid pathological GC churn in very small and very large heaps, we set
-    // the new allocation limit based on the current size of the heap, with a
-    // fixed minimum.
-    size_t maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
-    m_bytesAllocatedLimit = maxHeapSize - currentHeapSize;
-    m_bytesAllocated = 0;
+    if (m_operationInProgress == FullCollection) {
+        // To avoid pathological GC churn in very small and very large heaps, we set
+        // the new allocation limit based on the current size of the heap, with a
+        // fixed minimum.
+        m_maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
+        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+    } else {
+        ASSERT(currentHeapSize >= m_sizeAfterLastCollect);
+        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+        double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
+        double minEdenToOldGenerationRatio = 1.0 / 3.0;
+        if (edenToOldGenerationRatio < minEdenToOldGenerationRatio)
+            m_shouldDoFullCollection = true;
+        m_maxHeapSize += currentHeapSize - m_sizeAfterLastCollect;
+        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+    }
+    m_sizeAfterLastCollect = currentHeapSize;
+    m_bytesAllocatedThisCycle = 0;
     double lastGCEndTime = WTF::monotonicallyIncreasingTime();
     m_lastGCLength = lastGCEndTime - lastGCStartTime;
     if (Options::recordGCPauseTimes())
         HeapStatistics::recordGCPauseTime(lastGCStartTime, lastGCEndTime);
-    RELEASE_ASSERT(m_operationInProgress == Collection);
+    RELEASE_ASSERT(m_operationInProgress == EdenCollection || m_operationInProgress == FullCollection);
     m_operationInProgress = NoOperation;
     JAVASCRIPTCORE_GC_END();
@@ -863,10 +947,6 @@ void Heap::collect()
     double after = currentTimeMS();