Commit 76e50b10 authored by mhahnenberg@apple.com's avatar mhahnenberg@apple.com

Remove the Zapped BlockState

https://bugs.webkit.org/show_bug.cgi?id=96708

Reviewed by Geoffrey Garen.

The Zapped block state is rather confusing. It indicates that a block is in one of two different states, and we
have no way to tell which:

1) I have run all destructors of things that are zapped, and I have not allocated any more objects. This block
   is ready for reclaiming if you so choose.
2) I have run all the destructors of things that are zapped, but I have allocated more stuff since then, so it
   is not safe to reclaim this block.

This state adds a lot of complexity to our state transition model for MarkedBlocks. We should get rid of it.
We can replace this state by making sure mark bits represent all of the liveness information we need when running
our conservative stack scan. Instead of zapping the free list when canonicalizing cell liveness data prior to
a conservative scan, we can instead mark all objects in the block except for those in the free list. This should
incur no performance penalty since we're doing it on a very small O(1) number of blocks at the beginning of the collection.

For the time being we still need to use zapping to determine whether we have run an object's destructor or not.

* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::tryAllocateHelper): Renaming stuff.
* heap/MarkedAllocator.h: Renamed zapFreeList to canonicalizeCellLivenessData to match.
(MarkedAllocator):
(JSC::MarkedAllocator::canonicalizeCellLivenessData): Same as old zapFreeList, but just call canonicalize instead.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::specializedSweep): Remove the check for Zapped block stuff. Also change the block state to Marked
instead of Zapped if we're not producing a FreeList since that's the only other state that really makes any sense.
(JSC::MarkedBlock::sweepHelper): Remove Zapped related code.
(SetAllMarksFunctor): Functor to set all the mark bits in the block since there's not a simple function to call on
the Bitmap itself.
(JSC::SetAllMarksFunctor::operator()):
(JSC):
(JSC::MarkedBlock::canonicalizeCellLivenessData): Remove all the stuff for Zapped. For FreeListed, set all the mark bits
and then clear the ones for the objects in the FreeList. This ensures that only the things that were in the FreeList
are considered to be dead by the conservative scan, just like if we were to have zapped the FreeList like before.
* heap/MarkedBlock.h:
(MarkedBlock):
(JSC::MarkedBlock::clearMarked): Add function to clear individual mark bits, since we need that functionality now.
(JSC):
(JSC::MarkedBlock::isLive): Remove code for Zapped stuff. Marked handles all interesting cases now.
(JSC::MarkedBlock::forEachCell): Add new iterator function that iterates over all cells in the block, regardless of
whether they're live or dead.
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::canonicalizeCellLivenessData): Change to call the renamed canonicalize function.


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@128563 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 2338b740
2012-09-14 Mark Hahnenberg <mhahnenberg@apple.com>
heap/MarkedAllocator.cpp
@@ -49,7 +49,7 @@ inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
         }
         if (bytes > block->cellSize()) {
-            block->zapFreeList(freeList);
+            block->canonicalizeCellLivenessData(freeList);
             continue;
         }
heap/MarkedAllocator.h
@@ -21,7 +21,7 @@ class MarkedAllocator {
 public:
     MarkedAllocator();
     void reset();
-    void zapFreeList();
+    void canonicalizeCellLivenessData();
     size_t cellSize() { return m_cellSize; }
     bool cellsNeedDestruction() { return m_cellsNeedDestruction; }
     bool onlyContainsStructures() { return m_onlyContainsStructures; }

@@ -92,14 +92,14 @@ inline void MarkedAllocator::reset()
     m_blocksToSweep = m_blockList.head();
 }

-inline void MarkedAllocator::zapFreeList()
+inline void MarkedAllocator::canonicalizeCellLivenessData()
 {
     if (!m_currentBlock) {
         ASSERT(!m_freeList.head);
         return;
     }

-    m_currentBlock->zapFreeList(m_freeList);
+    m_currentBlock->canonicalizeCellLivenessData(m_freeList);
     m_currentBlock = 0;
     m_freeList = MarkedBlock::FreeList();
 }
heap/MarkedBlock.cpp
@@ -81,8 +81,6 @@ MarkedBlock::FreeList MarkedBlock::specializedSweep()
             continue;
         JSCell* cell = reinterpret_cast_ptr<JSCell*>(&atoms()[i]);
-        if (blockState == Zapped && !cell->isZapped())
-            continue;
         if (destructorCallNeeded && blockState != New)
             callDestructor(cell);

@@ -95,7 +93,7 @@ MarkedBlock::FreeList MarkedBlock::specializedSweep()
         }
     }

-    m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
+    m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Marked);
     return FreeList(head, count * cellSize());
 }

@@ -132,18 +130,21 @@ MarkedBlock::FreeList MarkedBlock::sweepHelper(SweepMode sweepMode)
         return sweepMode == SweepToFreeList
             ? specializedSweep<Marked, SweepToFreeList, destructorCallNeeded>()
             : specializedSweep<Marked, SweepOnly, destructorCallNeeded>();
-    case Zapped:
-        ASSERT(!m_onlyContainsStructures || heap()->isSafeToSweepStructures());
-        return sweepMode == SweepToFreeList
-            ? specializedSweep<Zapped, SweepToFreeList, destructorCallNeeded>()
-            : specializedSweep<Zapped, SweepOnly, destructorCallNeeded>();
     }

     ASSERT_NOT_REACHED();
     return FreeList();
 }

-void MarkedBlock::zapFreeList(const FreeList& freeList)
+class SetAllMarksFunctor : public MarkedBlock::VoidFunctor {
+public:
+    void operator()(JSCell* cell)
+    {
+        MarkedBlock::blockFor(cell)->setMarked(cell);
+    }
+};
+
+void MarkedBlock::canonicalizeCellLivenessData(const FreeList& freeList)
 {
     HEAP_LOG_BLOCK_STATE_TRANSITION(this);

     FreeCell* head = freeList.head;

@@ -156,40 +157,26 @@ void MarkedBlock::zapFreeList(const FreeList& freeList)
         // Hence if the block is Marked we need to leave it Marked.
         ASSERT(!head);
         return;
     }
-
-    if (m_state == Zapped) {
-        // If the block is in the Zapped state then we know that someone already
-        // zapped it for us. This could not have happened during a GC, but might
-        // be the result of someone having done a GC scan to perform some operation
-        // over all live objects (or all live blocks). It also means that somebody
-        // had allocated in this block since the last GC, swept all dead objects
-        // onto the free list, left the block in the FreeListed state, then the heap
-        // scan happened, and canonicalized the block, leading to all dead objects
-        // being zapped. Therefore, it is safe for us to simply do nothing, since
-        // dead objects will have 0 in their vtables and live objects will have
-        // non-zero vtables, which is consistent with the block being zapped.
-        ASSERT(!head);
-        return;
-    }
-
     ASSERT(m_state == FreeListed);

     // Roll back to a coherent state for Heap introspection. Cells newly
     // allocated from our free list are not currently marked, so we need another
-    // way to tell what's live vs dead. We use zapping for that.
+    // way to tell what's live vs dead.
+
+    SetAllMarksFunctor functor;
+    forEachCell(functor);

     FreeCell* next;
     for (FreeCell* current = head; current; current = next) {
         next = current->next;
-        reinterpret_cast<JSCell*>(current)->zap();
+        clearMarked(current);
     }

-    m_state = Zapped;
+    m_state = Marked;
 }

 } // namespace JSC
heap/MarkedBlock.h
@@ -136,7 +136,7 @@ namespace JSC {
         // cell liveness data. To restore accurate cell liveness data, call one
        // of these functions:
         void didConsumeFreeList(); // Call this once you've allocated all the items in the free list.
-        void zapFreeList(const FreeList&); // Call this to undo the free list.
+        void canonicalizeCellLivenessData(const FreeList&);
         void clearMarks();
         size_t markCount();

@@ -154,7 +154,8 @@ namespace JSC {
         bool isLive(const JSCell*);
         bool isLiveCell(const void*);
         void setMarked(const void*);
+        void clearMarked(const void*);
         bool needsSweeping();

 #if ENABLE(GGC)

@@ -185,13 +186,14 @@ namespace JSC {
         template <int size> inline void gatherDirtyCellsWithSize(DirtyCellVector&);
 #endif
+        template <typename Functor> void forEachCell(Functor&);
         template <typename Functor> void forEachLiveCell(Functor&);
         template <typename Functor> void forEachDeadCell(Functor&);

     private:
         static const size_t atomAlignmentMask = atomSize - 1; // atomSize must be a power of two.
-        enum BlockState { New, FreeListed, Allocated, Marked, Zapped };
+        enum BlockState { New, FreeListed, Allocated, Marked };
         template<bool destructorCallNeeded> FreeList sweepHelper(SweepMode = SweepOnly);

         typedef char Atom[atomSize];

@@ -364,21 +366,18 @@ namespace JSC {
         m_marks.set(atomNumber(p));
     }

+    inline void MarkedBlock::clearMarked(const void* p)
+    {
+        ASSERT(m_marks.get(atomNumber(p)));
+        m_marks.clear(atomNumber(p));
+    }
+
     inline bool MarkedBlock::isLive(const JSCell* cell)
     {
         switch (m_state) {
         case Allocated:
             return true;
-        case Zapped:
-            if (isZapped(cell)) {
-                // Object dead in previous collection, not allocated since previous collection: mark bit should not be set.
-                ASSERT(!m_marks.get(atomNumber(cell)));
-                return false;
-            }
-            // Newly allocated objects: mark bit not set.
-            // Objects that survived prior collection: mark bit set.
-            return true;
         case Marked:
             return m_marks.get(atomNumber(cell));

@@ -407,6 +406,14 @@ namespace JSC {
         return isLive(static_cast<const JSCell*>(p));
     }

+    template <typename Functor> inline void MarkedBlock::forEachCell(Functor& functor)
+    {
+        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+            JSCell* cell = reinterpret_cast_ptr<JSCell*>(&atoms()[i]);
+            functor(cell);
+        }
+    }
+
     template <typename Functor> inline void MarkedBlock::forEachLiveCell(Functor& functor)
     {
         for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
heap/MarkedSpace.cpp
@@ -146,17 +146,17 @@ void MarkedSpace::reapWeakSets()
 void MarkedSpace::canonicalizeCellLivenessData()
 {
     for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
-        allocatorFor(cellSize).zapFreeList();
-        destructorAllocatorFor(cellSize).zapFreeList();
+        allocatorFor(cellSize).canonicalizeCellLivenessData();
+        destructorAllocatorFor(cellSize).canonicalizeCellLivenessData();
     }

     for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep) {
-        allocatorFor(cellSize).zapFreeList();
-        destructorAllocatorFor(cellSize).zapFreeList();
+        allocatorFor(cellSize).canonicalizeCellLivenessData();
+        destructorAllocatorFor(cellSize).canonicalizeCellLivenessData();
     }

-    m_largeAllocator.zapFreeList();
-    m_structureAllocator.zapFreeList();
+    m_largeAllocator.canonicalizeCellLivenessData();
+    m_structureAllocator.canonicalizeCellLivenessData();
 }

 bool MarkedSpace::isPagedOut(double deadline)