Commit b94f6ba6 authored by ggaren@apple.com's avatar ggaren@apple.com

Allocate new objects unmarked

https://bugs.webkit.org/show_bug.cgi?id=68764

Source/JavaScriptCore: 

Reviewed by Oliver Hunt.
        
This is a pre-requisite to using the mark bit to determine object age.

~2% v8 speedup, mostly due to a 12% v8-splay speedup.

* heap/MarkedBlock.h:
(JSC::MarkedBlock::isLive):
(JSC::MarkedBlock::isLiveCell): These two functions are the reason for
this patch. They can now determine object liveness without relying on
newly allocated objects having their mark bits set. Each MarkedBlock
now has a state variable that tells us how to determine whether its
cells are live. (This new state variable supersedes the old one about
destructor state. The rest of this patch is just refactoring to support
the invariants of this new state variable without introducing a
performance regression.)
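
As a reading aid, here is a minimal sketch of that liveness test, built
around the five block states that appear in the MarkedBlock.cpp hunk at
the bottom of this patch (New, FreeListed, Allocated, Marked, Zapped).
The cell type, bitmap, sizes, and helpers are simplified stand-ins, not
the real MarkedBlock layout; the members marked "defined below" are
filled in by the follow-up sketches after the next items.

    #include <bitset>
    #include <cassert>
    #include <cstddef>

    // Stand-in cell: the real JSCell's first word is its vptr, which zapping clears.
    struct JSCell {
        const void* vptrWord;
        bool isZapped() const { return !vptrWord; }
    };

    class MarkedBlock {
    public:
        // The block's state decides how liveness is read, so newly allocated
        // objects no longer need their mark bits set at allocation time.
        enum BlockState { New, FreeListed, Allocated, Marked, Zapped };

        bool isLive(const JSCell*) const;
        void didConsumeFreeList();                              // defined below
        void clearMarks();                                      // defined below
        bool markCountIsZero() const;                           // defined below
        template<typename Functor> void forEachCell(Functor&);  // defined below

    private:
        static const size_t atomSize = 8;       // placeholder sizes
        static const size_t atomsPerBlock = 1024;
        static size_t firstAtom() { return 0; }
        size_t atomNumber(const JSCell* cell) const
        {
            // Placeholder: the real code derives this from the cell's offset in the block.
            return (reinterpret_cast<const char*>(cell) - reinterpret_cast<const char*>(this)) / atomSize;
        }
        JSCell* cellAt(size_t atom); // hypothetical accessor, standing in for &atoms()[atom]

        BlockState m_state;
        std::bitset<atomsPerBlock> m_marks;
        size_t m_atomsPerCell;
        size_t m_endAtom;
    };

    inline bool MarkedBlock::isLive(const JSCell* cell) const
    {
        switch (m_state) {
        case Allocated:
            return true; // Free list fully consumed: every cell is a live object.
        case Marked:
            return m_marks.test(atomNumber(cell)); // Mark bits are exact in this state.
        case Zapped:
            // Swept but not re-marked: dead cells were zapped by the sweep, so
            // anything unzapped -- including fresh allocations -- is live.
            return !cell->isZapped();
        case New:
        case FreeListed:
            // Callers canonicalize cell liveness data first, so these states
            // are never interrogated for liveness.
            assert(false);
            return false;
        }
        return false;
    }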

(JSC::MarkedBlock::didConsumeFreeList): New function for updating internal
state when a block becomes fully allocated.

(JSC::MarkedBlock::clearMarks): Folded a state change to 'Marked' into
this function because, logically, clearing all mark bits is the first
step in saying "mark bits now exactly reflect object liveness".
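
Continuing the sketch above, the two state transitions these functions
perform; the FreeListed precondition is an inference from the state
names, not quoted code:

    void MarkedBlock::didConsumeFreeList()
    {
        // The allocator used up the last free cell, so every cell in the
        // block now holds a live object.
        assert(m_state == FreeListed);
        m_state = Allocated;
    }

    void MarkedBlock::clearMarks()
    {
        // Clearing the bits and entering the Marked state are one step: the
        // moment the bits are cleared, "mark bits exactly reflect object
        // liveness" becomes this block's invariant.
        m_marks.reset();
        m_state = Marked;
    }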

(JSC::MarkedBlock::markCountIsZero): Renamed from isEmpty() to clarify
that this function only tells you about the mark bits, so it's only
meaningful if you've put the mark bits into a meaningful state before
calling it.

(JSC::MarkedBlock::forEachCell): Changed to use isLive() helper function
instead of testing mark bits, since mark bits are not always the right
way to find out if an object is live anymore. (New objects are live, but
not marked.)
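
The corresponding pieces of the sketch; cellAt() is the hypothetical
stand-in declared above for the real &atoms()[i] access that appears in
the MarkedBlock.cpp hunk below:

    bool MarkedBlock::markCountIsZero() const
    {
        return m_marks.none(); // meaningless unless the marks were first put in order
    }

    template<typename Functor>
    void MarkedBlock::forEachCell(Functor& functor)
    {
        // Visit cells via isLive(), not the raw mark bits: a new object is
        // live but unmarked, so a mark-bit test would skip it.
        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
            JSCell* cell = cellAt(i);
            if (!isLive(cell))
                continue;
            functor(cell);
        }
    }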

* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::recycle):
(JSC::MarkedBlock::MarkedBlock): Folded all initialization -- even
initialization when recycling an old block -- into the MarkedBlock
constructor, for simplicity.

(JSC::MarkedBlock::callDestructor): Inlined for speed. Always check for
a zapped cell before running a destructor, and always zap after
running a destructor. This does not seem to be expensive, and the
alternative just creates a too-confusing matrix of possible cell states
((zombie undestructed cell + zombie destructed cell + zapped destructed
cell) * 5! permutations for progressing through block states = "Oh my!").
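
For readability, here is the new callDestructor body reassembled from
the interleaved MarkedBlock.cpp hunk below; only the trailing comment
is added:

    inline void MarkedBlock::callDestructor(JSCell* cell, void* jsFinalObjectVPtr)
    {
        // A previous eager sweep may already have run cell's destructor.
        if (cell->isZapped())
            return;

        void* vptr = cell->vptr();
    #if ENABLE(SIMPLE_HEAP_PROFILING)
        m_heap->m_destroyedTypeCounts.countVPtr(vptr);
    #endif
        if (vptr == jsFinalObjectVPtr)
            reinterpret_cast<JSFinalObject*>(cell)->JSFinalObject::~JSFinalObject();
        else
            cell->~JSCell();

        cell->zap(); // Zap after destructing, so a destructor can never run twice.
    }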

(JSC::MarkedBlock::specializedSweep):
(JSC::MarkedBlock::sweep): Maintained and expanded a pre-existing
optimization to use template specialization to constant fold lots of
branches and elide certain operations entirely during a sweep. Merged
four or five functions that were logically about sweeping into this one
function pair, so there's only one way to do things now, it's
automatically correct, and it's always fast.
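
Likewise reassembled from the hunk below: the per-cell loop that the
two template parameters specialize. Because blockState and sweepMode
are compile-time constants, each instantiation folds the tests away and
drops the destructor and free-list work that cannot apply (comments
added):

    template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode>
    MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
    {
        ASSERT(blockState != Allocated && blockState != FreeListed);

        FreeCell* head = 0;
        void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
            if (blockState == Marked && m_marks.get(i))
                continue; // marked means live; leave it alone
            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
            if (blockState == Zapped && !cell->isZapped())
                continue; // unzapped in a Zapped block means live
            if (blockState != New)
                callDestructor(cell, jsFinalObjectVPtr); // New blocks have nothing to destruct
            if (sweepMode == SweepToFreeList) {
                FreeCell* freeCell = reinterpret_cast<FreeCell*>(cell);
                freeCell->next = head;
                head = freeCell;
            }
        }

        m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
        return head;
    }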

(JSC::MarkedBlock::zapFreeList): Renamed this function to be more explicit
about exactly what it does, and to honor the new block state system.

* heap/AllocationSpace.cpp:
(JSC::AllocationSpace::allocateBlock): Updated for rename.

(JSC::AllocationSpace::freeBlocks): Updated for changed interface.

(JSC::TakeIfUnmarked::TakeIfUnmarked):
(JSC::TakeIfUnmarked::operator()):
(JSC::TakeIfUnmarked::returnValue): Just like isEmpty() above, renamed
to clarify that this functor only tests the mark bits, so it's only
valid if you've put the mark bits into a meaningful state before
calling it.
        
(JSC::AllocationSpace::shrink): Updated for rename.

* heap/AllocationSpace.h:
(JSC::AllocationSpace::canonicalizeCellLivenessData): Renamed to be a
little more specific about what we're making canonical.

(JSC::AllocationSpace::forEachCell): Updated for rename.

(JSC::AllocationSpace::forEachBlock): No need to canonicalize cell
liveness data before iterating blocks -- clients that want iterated
blocks to have valid cell liveness data should make this call for
themselves. (And not all clients want it.)

* heap/ConservativeRoots.cpp:
(JSC::ConservativeRoots::genericAddPointer): Updated for rename. Removed
obsolete comment.

* heap/Heap.cpp:
(JSC::CountFunctor::ClearMarks::operator()): Removed call to notify...()
because clearMarks() now does that implicitly.

(JSC::Heap::destroy): Make sure to canonicalize before tear-down, since
tear-down tests cell liveness when running destructors.

(JSC::Heap::markRoots):
(JSC::Heap::collect): Moved weak reference harvesting out of markRoots()
and into collect, since it strictly depends on root marking, and does
not contribute to root marking.

(JSC::Heap::canonicalizeCellLivenessData): Renamed to be a little more
specific about what we're making canonical.

* heap/Heap.h:
(JSC::Heap::forEachProtectedCell): No need to canonicalize cell liveness
data before iterating protected cells, since we know they're all live,
and don't need to test for it.

* heap/Local.h:
(JSC::Local::set): Can't make the same ASSERT we used to because we just don't
have the mark bits for it anymore. Perhaps we can bring this ASSERT back
in a weaker form in the future.

* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::addBlock):
(JSC::MarkedSpace::removeBlock): Updated for interface change.
(JSC::MarkedSpace::canonicalizeCellLivenessData): Renamed to be a little more
specific about what we're making canonical.

* heap/MarkedSpace.h:
(JSC::MarkedSpace::allocate):
(JSC::MarkedSpace::SizeClass::SizeClass):
(JSC::MarkedSpace::SizeClass::resetAllocator):
(JSC::MarkedSpace::SizeClass::zapFreeList): Simplified this allocator
functionality a bit. We now track only one block -- "currentBlock" --
and rely on its internal state to know whether it has more cells to
allocate.
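
A hedged sketch of the fast path this describes: apart from
currentBlock (named here) and MarkedBlock::FreeCell (visible in the
diff below), the member names and the return-0-to-slow-path convention
are assumptions for illustration, not the real MarkedSpace interface:

    struct SizeClass {
        MarkedBlock::FreeCell* firstFreeCell; // head of currentBlock's free list
        MarkedBlock* currentBlock;            // the single block we track directly

        void resetAllocator() { firstFreeCell = 0; currentBlock = 0; }
    };

    void* allocate(SizeClass& sizeClass)
    {
        // Fast path: pop the current block's free list.
        if (MarkedBlock::FreeCell* cell = sizeClass.firstFreeCell) {
            sizeClass.firstFreeCell = cell->next;
            return cell;
        }
        // The current block is out of cells. Record that it is now fully
        // allocated, then let the caller's slow path sweep or create a block.
        if (sizeClass.currentBlock)
            sizeClass.currentBlock->didConsumeFreeList();
        return 0;
    }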

* heap/Weak.h:
(JSC::Weak::set): Can't make the same ASSERT we used to because we just don't
have the mark bits for it anymore. Perhaps we can bring this ASSERT back
in a weaker form in the future.

* runtime/JSCell.h:
(JSC::JSCell::vptr):
(JSC::JSCell::zap):
(JSC::JSCell::isZapped):
(JSC::isZapped): Made zapping a property of JSCell, for a little abstraction.
In the future, exactly how a JSCell zaps itself will change, as the
internal representation of JSCell changes.
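
A hedged sketch of that abstraction: the old code's cell->setVPtr(0)
(visible in the MarkedBlock.cpp hunk below) suggests zapping clears the
cell's vptr word. The sketch models that word as an explicit field; the
real JSCell uses the compiler's vtable pointer, which is exactly the
representation detail expected to change:

    class JSCell {
    public:
        void* vptr() const { return m_vptr; }
        void zap() { m_vptr = 0; }                // "destructor has already run"
        bool isZapped() const { return !m_vptr; }
    private:
        void* m_vptr; // stand-in for the compiler-generated vtable pointer slot
    };

    inline bool isZapped(const JSCell* cell) { return cell->isZapped(); }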

LayoutTests: 

Reviewed by Oliver Hunt.
        
Made this flaky test less flaky. (Just enough to make my patch not fail.)

* fast/dom/gc-10.html: Count objects immediately after GC to get an
exact count. Call 'reload' a few times to improve test coverage. Preload
properties in case they're lazily instantiated, which would change
object count numbers. Also, use the 'var' keyword like a good little
JavaScripter.


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@95912 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 546aea6d

LayoutTests/fast/dom/gc-10.html
@@ -12,22 +12,24 @@ function print(message, color)
     document.getElementById("console").appendChild(paragraph);
 }
-var before,after;
 var threshold = 5;
 function test()
 {
     if (window.GCController)
     {
+        global = window.frames.myframe.location.reload; // Eagerly construct these properties so they don't influence test outcome.
         GCController.collect();
+        var before = GCController.getJSObjectCount();
         window.frames.myframe.location.reload(true);
+        window.frames.myframe.location.reload(true);
-        before = GCController.getJSObjectCount();
         window.frames.myframe.location.reload(true);
         GCController.collect();
-        after = GCController.getJSObjectCount();
+        var after = GCController.getJSObjectCount();
         // Unfortunately we cannot do a strict check here because there is still very minor (3) JS object increase,
         // likely due to temporary JS objects being created during further execution of this test function.
         // However, the iframe document leaking everything has an addition of ~25 objects every
...

Source/JavaScriptCore/heap/AllocationSpace.cpp
@@ -98,7 +98,7 @@ MarkedBlock* AllocationSpace::allocateBlock(size_t cellSize, AllocationSpace::Al
         block = 0;
     }
     if (block)
-        block->initForCellSize(cellSize);
+        block = MarkedBlock::recycle(block, cellSize);
     else if (allocationEffort == AllocationCanFail)
         return 0;
     else
@@ -116,18 +116,18 @@ void AllocationSpace::freeBlocks(MarkedBlock* head)
         next = block->next();
         m_blocks.remove(block);
-        block->reset();
+        block->sweep();
         MutexLocker locker(m_heap->m_freeBlockLock);
         m_heap->m_freeBlocks.append(block);
         m_heap->m_numberOfFreeBlocks++;
     }
 }
-class TakeIfEmpty {
+class TakeIfUnmarked {
 public:
     typedef MarkedBlock* ReturnType;
-    TakeIfEmpty(MarkedSpace*);
+    TakeIfUnmarked(MarkedSpace*);
     void operator()(MarkedBlock*);
     ReturnType returnValue();
@@ -136,21 +136,21 @@ private:
     DoublyLinkedList<MarkedBlock> m_empties;
 };
-inline TakeIfEmpty::TakeIfEmpty(MarkedSpace* newSpace)
+inline TakeIfUnmarked::TakeIfUnmarked(MarkedSpace* newSpace)
     : m_markedSpace(newSpace)
 {
 }
-inline void TakeIfEmpty::operator()(MarkedBlock* block)
+inline void TakeIfUnmarked::operator()(MarkedBlock* block)
 {
-    if (!block->isEmpty())
+    if (!block->markCountIsZero())
         return;
     m_markedSpace->removeBlock(block);
     m_empties.append(block);
 }
-inline TakeIfEmpty::ReturnType TakeIfEmpty::returnValue()
+inline TakeIfUnmarked::ReturnType TakeIfUnmarked::returnValue()
 {
     return m_empties.head();
 }
@@ -158,8 +158,8 @@ inline TakeIfEmpty::ReturnType TakeIfEmpty::returnValue()
 void AllocationSpace::shrink()
 {
     // We record a temporary list of empties to avoid modifying m_blocks while iterating it.
-    TakeIfEmpty takeIfEmpty(&m_markedSpace);
-    freeBlocks(forEachBlock(takeIfEmpty));
+    TakeIfUnmarked takeIfUnmarked(&m_markedSpace);
+    freeBlocks(forEachBlock(takeIfUnmarked));
 }
 }

Source/JavaScriptCore/heap/AllocationSpace.h
@@ -56,7 +56,7 @@ public:
     template<typename Functor> typename Functor::ReturnType forEachBlock(Functor&);
     template<typename Functor> typename Functor::ReturnType forEachBlock();
-    void canonicalizeBlocks() { m_markedSpace.canonicalizeBlocks(); }
+    void canonicalizeCellLivenessData() { m_markedSpace.canonicalizeCellLivenessData(); }
     void resetAllocator() { m_markedSpace.resetAllocator(); }
     void* allocate(size_t);
@@ -78,7 +78,8 @@ private:
 template<typename Functor> inline typename Functor::ReturnType AllocationSpace::forEachCell(Functor& functor)
 {
-    canonicalizeBlocks();
+    canonicalizeCellLivenessData();
     BlockIterator end = m_blocks.set().end();
     for (BlockIterator it = m_blocks.set().begin(); it != end; ++it)
         (*it)->forEachCell(functor);
@@ -93,7 +94,6 @@ template<typename Functor> inline typename Functor::ReturnType AllocationSpace::
 template<typename Functor> inline typename Functor::ReturnType AllocationSpace::forEachBlock(Functor& functor)
 {
-    canonicalizeBlocks();
     BlockIterator end = m_blocks.set().end();
     for (BlockIterator it = m_blocks.set().begin(); it != end; ++it)
         functor(*it);

Source/JavaScriptCore/heap/ConservativeRoots.cpp
@@ -26,7 +26,9 @@
 #include "config.h"
 #include "ConservativeRoots.h"
+#include "JSCell.h"
 #include "JettisonedCodeBlocks.h"
+#include "Structure.h"
 namespace JSC {
@@ -82,10 +84,7 @@ inline void ConservativeRoots::genericAddPointer(void* p, TinyBloomFilter filter
     if (!m_blocks->set().contains(candidate))
         return;
-    // The conservative set inverts the typical meaning of mark bits: We only
-    // visit marked pointers, and our visit clears the mark bit. This efficiently
-    // sifts out pointers to dead objects and duplicate pointers.
-    if (!candidate->testAndClearMarked(p))
+    if (!candidate->isLiveCell(p))
         return;
     if (m_size == m_capacity)

Source/JavaScriptCore/heap/Heap.cpp
@@ -113,7 +113,6 @@ struct ClearMarks : MarkedBlock::VoidFunctor {
 inline void ClearMarks::operator()(MarkedBlock* block)
 {
     block->clearMarks();
-    block->notifyMayHaveFreshFreeCells();
 }
 struct Sweep : MarkedBlock::VoidFunctor {
@@ -268,10 +267,11 @@ void Heap::destroy()
     delete m_markListSet;
     m_markListSet = 0;
+    canonicalizeCellLivenessData();
     clearMarks();
     m_handleHeap.finalizeWeakHandles();
     m_globalData->smallStrings.finalizeSmallStrings();
     shrink();
     ASSERT(!size());
@@ -514,10 +514,6 @@ void Heap::markRoots()
         // If the set of opaque roots has grown, more weak handles may have become reachable.
     } while (lastOpaqueRootCount != visitor.opaqueRootCount());
-    // Need to call this here because weak handle processing could add weak
-    // reference harvesters.
-    harvestWeakReferences();
     visitor.reset();
     m_operationInProgress = NoOperation;
@@ -589,9 +585,10 @@ void Heap::collect(SweepToggle sweepToggle)
     ASSERT(m_isSafeToCollect);
     JAVASCRIPTCORE_GC_BEGIN();
-    canonicalizeBlocks();
+    canonicalizeCellLivenessData();
     markRoots();
+    harvestWeakReferences();
     m_handleHeap.finalizeWeakHandles();
     m_globalData->smallStrings.finalizeSmallStrings();
@@ -615,9 +612,9 @@ void Heap::collect(SweepToggle sweepToggle)
         (*m_activityCallback)();
 }
-void Heap::canonicalizeBlocks()
+void Heap::canonicalizeCellLivenessData()
 {
-    m_objectSpace.canonicalizeBlocks();
+    m_objectSpace.canonicalizeCellLivenessData();
 }
 void Heap::resetAllocator()

Source/JavaScriptCore/heap/Heap.h
@@ -69,7 +69,6 @@ namespace JSC {
         static bool isMarked(const void*);
         static bool testAndSetMarked(const void*);
-        static bool testAndClearMarked(const void*);
         static void setMarked(const void*);
         static void writeBarrier(const JSCell*, JSValue);
@@ -135,9 +134,13 @@ namespace JSC {
         bool isValidAllocation(size_t);
         void reportExtraMemoryCostSlowCase(size_t);
-        void canonicalizeBlocks();
-        void resetAllocator();
+        // Call this function before any operation that needs to know which cells
+        // in the heap are live. (For example, call this function before
+        // conservative marking, eager sweeping, or iterating the cells in a MarkedBlock.)
+        void canonicalizeCellLivenessData();
+        void resetAllocator();
         void freeBlocks(MarkedBlock*);
         void clearMarks();
@@ -223,11 +226,6 @@ namespace JSC {
         return MarkedBlock::blockFor(cell)->testAndSetMarked(cell);
     }
-    inline bool Heap::testAndClearMarked(const void* cell)
-    {
-        return MarkedBlock::blockFor(cell)->testAndClearMarked(cell);
-    }
     inline void Heap::setMarked(const void* cell)
     {
         MarkedBlock::blockFor(cell)->setMarked(cell);
@@ -274,7 +272,6 @@ namespace JSC {
     template<typename Functor> inline typename Functor::ReturnType Heap::forEachProtectedCell(Functor& functor)
    {
-        canonicalizeBlocks();
         ProtectCountSet::iterator end = m_protectedValues.end();
         for (ProtectCountSet::iterator it = m_protectedValues.begin(); it != end; ++it)
             functor(it->first);

Source/JavaScriptCore/heap/Local.h
@@ -94,7 +94,6 @@ template <typename T> inline Local<T>& Local<T>::operator=(Handle<T> other)
 template <typename T> inline void Local<T>::set(ExternalType externalType)
 {
     ASSERT(slot());
-    ASSERT(!HandleTypes<T>::toJSValue(externalType) || !HandleTypes<T>::toJSValue(externalType).isCell() || Heap::isMarked(HandleTypes<T>::toJSValue(externalType).asCell()));
     *slot() = externalType;
 }

Source/JavaScriptCore/heap/MarkedBlock.cpp
@@ -40,196 +40,117 @@ MarkedBlock* MarkedBlock::create(Heap* heap, size_t cellSize)
     return new (allocation.base()) MarkedBlock(allocation, heap, cellSize);
 }
+MarkedBlock* MarkedBlock::recycle(MarkedBlock* block, size_t cellSize)
+{
+    return new (block) MarkedBlock(block->m_allocation, block->m_heap, cellSize);
+}
 void MarkedBlock::destroy(MarkedBlock* block)
 {
     block->m_allocation.deallocate();
 }
 MarkedBlock::MarkedBlock(const PageAllocationAligned& allocation, Heap* heap, size_t cellSize)
-    : m_inNewSpace(false)
+    : m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
+    , m_endAtom(atomsPerBlock - m_atomsPerCell + 1)
+    , m_state(New) // All cells start out unmarked.
     , m_allocation(allocation)
     , m_heap(heap)
 {
-    initForCellSize(cellSize);
-}
-void MarkedBlock::initForCellSize(size_t cellSize)
-{
-    m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
-    m_endAtom = atomsPerBlock - m_atomsPerCell + 1;
-    setDestructorState(SomeFreeCellsStillHaveObjects);
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
 }
-template<MarkedBlock::DestructorState specializedDestructorState>
-void MarkedBlock::callDestructor(JSCell* cell, void* jsFinalObjectVPtr)
+inline void MarkedBlock::callDestructor(JSCell* cell, void* jsFinalObjectVPtr)
 {
-    if (specializedDestructorState == FreeCellsDontHaveObjects)
+    // A previous eager sweep may already have run cell's destructor.
+    if (cell->isZapped())
         return;
     void* vptr = cell->vptr();
-    if (specializedDestructorState == AllFreeCellsHaveObjects || vptr) {
 #if ENABLE(SIMPLE_HEAP_PROFILING)
-        m_heap->m_destroyedTypeCounts.countVPtr(vptr);
+    m_heap->m_destroyedTypeCounts.countVPtr(vptr);
 #endif
-        if (vptr == jsFinalObjectVPtr) {
-            JSFinalObject* object = reinterpret_cast<JSFinalObject*>(cell);
-            object->JSFinalObject::~JSFinalObject();
-        } else
-            cell->~JSCell();
-    }
-}
+    if (vptr == jsFinalObjectVPtr)
+        reinterpret_cast<JSFinalObject*>(cell)->JSFinalObject::~JSFinalObject();
+    else
+        cell->~JSCell();
-template<MarkedBlock::DestructorState specializedDestructorState>
-void MarkedBlock::specializedReset()
-{
-    void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell)
-        callDestructor<specializedDestructorState>(reinterpret_cast<JSCell*>(&atoms()[i]), jsFinalObjectVPtr);
+    cell->zap();
 }
-void MarkedBlock::reset()
+template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode>
+MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
 {
-    switch (destructorState()) {
-    case FreeCellsDontHaveObjects:
-    case SomeFreeCellsStillHaveObjects:
-        specializedReset<SomeFreeCellsStillHaveObjects>();
-        break;
-    default:
-        ASSERT(destructorState() == AllFreeCellsHaveObjects);
-        specializedReset<AllFreeCellsHaveObjects>();
-        break;
-    }
-}
-template<MarkedBlock::DestructorState specializedDestructorState>
-void MarkedBlock::specializedSweep()
-{
-    if (specializedDestructorState != FreeCellsDontHaveObjects) {
-        void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
-        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-            if (m_marks.get(i))
-                continue;
-            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
-            callDestructor<specializedDestructorState>(cell, jsFinalObjectVPtr);
-            cell->setVPtr(0);
-        }
-        setDestructorState(FreeCellsDontHaveObjects);
-    }
-}
+    ASSERT(blockState != Allocated && blockState != FreeListed);
-void MarkedBlock::sweep()
-{
-    HEAP_DEBUG_BLOCK(this);
-    switch (destructorState()) {
-    case FreeCellsDontHaveObjects:
-        break;
-    case SomeFreeCellsStillHaveObjects:
-        specializedSweep<SomeFreeCellsStillHaveObjects>();
-        break;
-    default:
-        ASSERT(destructorState() == AllFreeCellsHaveObjects);
-        specializedSweep<AllFreeCellsHaveObjects>();
-        break;
-    }
-}
-template<MarkedBlock::DestructorState specializedDestructorState>
-ALWAYS_INLINE MarkedBlock::FreeCell* MarkedBlock::produceFreeList()
-{
-    // This returns a free list that is ordered in reverse through the block.
+    // This produces a free list that is ordered in reverse through the block.
     // This is fine, since the allocation code makes no assumptions about the
     // order of the free list.
+    FreeCell* head = 0;
     void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
-    FreeCell* result = 0;
     for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        if (!m_marks.testAndSet(i)) {
-            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
-            if (specializedDestructorState != FreeCellsDontHaveObjects)
-                callDestructor<specializedDestructorState>(cell, jsFinalObjectVPtr);
+        if (blockState == Marked && m_marks.get(i))
+            continue;
+        JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
+        if (blockState == Zapped && !cell->isZapped())
+            continue;
+        if (blockState != New)
+            callDestructor(cell, jsFinalObjectVPtr);
+        if (sweepMode == SweepToFreeList) {
             FreeCell* freeCell = reinterpret_cast<FreeCell*>(cell);
-            freeCell->next = result;
-            result = freeCell;
+            freeCell->next = head;
+            head = freeCell;
         }
     }
-    // This is sneaky: if we're producing a free list then we intend to
-    // fill up the free cells in the block with objects, which means that
-    // if we have a new GC then all of the free stuff in this block will
-    // comprise objects rather than empty cells.
-    setDestructorState(AllFreeCellsHaveObjects);
-    return result;
+    m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
+    return head;
 }
-MarkedBlock::FreeCell* MarkedBlock::lazySweep()
+MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
 {
-    // This returns a free list that is ordered in reverse through the block.
-    // This is fine, since the allocation code makes no assumptions about the
-    // order of the free list.
-    HEAP_DEBUG_BLOCK(this);
-    switch (destructorState()) {
-    case FreeCellsDontHaveObjects:
-        return produceFreeList<FreeCellsDontHaveObjects>();
-    case SomeFreeCellsStillHaveObjects:
-        return produceFreeList<SomeFreeCellsStillHaveObjects>();
-    default:
-        ASSERT(destructorState() == AllFreeCellsHaveObjects);
-        return produceFreeList<AllFreeCellsHaveObjects>();
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+    switch (m_state) {
+    case New:
+        ASSERT(sweepMode == SweepToFreeList);
+        return specializedSweep<New, SweepToFreeList>();
+    case FreeListed:
+        // Happens when a block transitions to fully allocated.
+        ASSERT(sweepMode == SweepToFreeList);
+        return 0;
+    case Allocated:
+        ASSERT_NOT_REACHED();
+        return 0;
+    case Marked:
+        return sweepMode == SweepToFreeList
+            ? specializedSweep<Marked, SweepToFreeList>()
+            : specializedSweep<Marked, SweepOnly>();
+    case Zapped:
+        return sweepMode == SweepToFreeList
+            ? specializedSweep<Zapped, SweepToFreeList>()
+            : specializedSweep<Zapped, SweepOnly>();
     }