Commit bee96a38 authored by mhahnenberg@apple.com

MarkedBlocks shouldn't be put in Allocated state if they didn't produce a FreeList

https://bugs.webkit.org/show_bug.cgi?id=121236

Reviewed by Geoffrey Garen.

Right now, after a collection all MarkedBlocks are in the Marked state. When lazy sweeping
happens, if a block returns an empty free list after being swept, we call didConsumeFreeList(),
which moves the block into the Allocated state. This happens both to the block that was
just being allocated out of (i.e. m_currentBlock) and to any blocks that are completely full.
We should distinguish between these two cases: m_currentBlock should transition to
Allocated (because we were just allocating out of it), and any subsequent block that returns an
empty free list should transition back to the Marked state. This will make the block state more
consistent with the actual state the block is in, and it will also allow us to speed up moving
all blocks to the Marked state during generational collection.
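
As a rough sketch of the new rule (illustrative names only: BlockState and
stateAfterEmptyFreeList are not real JSC types; the real transitions live in MarkedBlock
and MarkedAllocator::tryAllocateHelper in the diffs below):

    enum class BlockState { Marked, FreeListed, Allocated };

    // What should happen to a block whose sweep produced an empty free list?
    BlockState stateAfterEmptyFreeList(bool wasCurrentAllocationBlock)
    {
        // Before this patch, both cases went through didConsumeFreeList() -> Allocated.
        if (wasCurrentAllocationBlock)
            return BlockState::Allocated; // didConsumeFreeList(): we really allocated out of it
        return BlockState::Marked;        // didConsumeEmptyFreeList(): the block was simply full
    }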

Added a new RAII-style HeapIterationScope class that notifies the Heap when it is about to be
iterated and when iteration has finished. Any clients that need accurate liveness data when
iterating over the Heap now need to use a HeapIterationScope so that the state of the Heap can
be properly restored after they are done iterating. No new GC-allocated objects can be created
until this object goes out of scope.
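
In practice a client wraps its iteration in a scope, as the Debugger.cpp change below does
(here "functor" stands in for any live-cell functor):

    {
        HeapIterationScope iterationScope(vm->heap); // Heap::willStartIterating(): allocation stops,
                                                     // cell liveness data is canonicalized
        vm->heap.objectSpace().forEachLiveCell(iterationScope, functor);
    } // ~HeapIterationScope() calls Heap::didFinishIterating(); allocation resumes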

* JavaScriptCore.xcodeproj/project.pbxproj:
* debugger/Debugger.cpp: 
(JSC::Debugger::recompileAllJSFunctions): Added HeapIterationScope for the Recompiler iteration.
* heap/Heap.cpp:
(JSC::Heap::willStartIterating): Callback used by HeapIterationScope to indicate that iteration of 
the Heap is about to begin. This will cause cell liveness data to be canonicalized by calling stopAllocating.
(JSC::Heap::didFinishIterating): Same, but indicates that iteration has finished.
(JSC::Heap::globalObjectCount): Used HeapIterationScope.
(JSC::Heap::objectTypeCounts): Ditto.
(JSC::Heap::markDeadObjects): Ditto.
(JSC::Heap::zombifyDeadObjects): Ditto.
* heap/Heap.h:
* heap/HeapIterationScope.h: Added. New RAII-style object for indicating to the Heap that it's about
to be iterated or that iteration has finished.
(JSC::HeapIterationScope::HeapIterationScope):
(JSC::HeapIterationScope::~HeapIterationScope):
* heap/HeapStatistics.cpp:
(JSC::HeapStatistics::showObjectStatistics): Used new HeapIterationScope.
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::tryAllocateHelper): We now treat the case where we have just finished 
allocating out of the current block differently from the case where we sweep a block and it 
returns an empty free list. This was the primary point of this patch.
(JSC::MarkedAllocator::allocateSlowCase): ASSERT that nobody is currently iterating the Heap 
when allocating.
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::reset): All allocators are reset after every collection. We need to make
sure that m_lastActiveBlock gets cleared, which might not happen otherwise because we don't call
takeLastActiveBlock on blocks in the large allocators.
(JSC::MarkedAllocator::stopAllocating): We shouldn't already have a last active block,
so ASSERT as much.
(JSC::MarkedAllocator::resumeAllocating): Does the opposite of stopAllocating. If we don't have
an m_lastActiveBlock, then stopAllocating did nothing that needs undoing. If we do, we call
resumeAllocating on that block, which returns the FreeList as it was prior to stopping allocation.
We then set the current block to the last active block and clear the last active block. (The
stop/resume pairing is sketched just after this file list.)
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::resumeAllocating): Any block resuming allocation should be in
the Marked state, so ASSERT as much. We always allocate an m_newlyAllocated Bitmap when a block
becomes FreeListed, so if we didn't allocate one then we know the block was Marked when allocation
was stopped, and we can return early with an empty FreeList. If we do have a non-null
m_newlyAllocated Bitmap, then the block needs to be swept in order to rebuild its FreeList.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::didConsumeEmptyFreeList): This is called if we ever sweep a block and get back
an empty free list. Instead of transitioning to the Allocated state, we now go straight back to the
Marked state. This makes sense because the block wasn't actually allocated out of, so it shouldn't
be in the Allocated state. Also added some ASSERTs to make sure that we're in the state we expect:
all of our mark bits should be set and we should not have an m_newlyAllocated Bitmap.
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::forEachAllocator): Added a new functor-style iteration method so that we can
easily iterate over each allocator (e.g., to stop and resume all of them) without
duplicating code.
(JSC::StopAllocatingFunctor::operator()): New functors for use with forEachAllocator.
(JSC::MarkedSpace::stopAllocating): Ditto.
(JSC::ResumeAllocatingFunctor::operator()): Ditto.
(JSC::MarkedSpace::resumeAllocating): Ditto.
(JSC::MarkedSpace::willStartIterating): Callback that notifies MarkedSpace that it is being iterated.
Does some ASSERTs, sets a flag, canonicalizes cell liveness data by calling stopAllocating.
(JSC::MarkedSpace::didFinishIterating): Ditto, but to signal that iteration has completed.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::iterationInProgress): Returns true if a HeapIterationScope is currently active.
(JSC::MarkedSpace::forEachLiveCell): Accepts a HeapIterationScope to enforce the rule that you have to 
create one prior to iterating over the Heap.
(JSC::MarkedSpace::forEachDeadCell): Ditto.
* runtime/JSGlobalObject.cpp:
(JSC::JSGlobalObject::haveABadTime): Changed to use new HeapIterationScope.
* runtime/VM.cpp:
(JSC::VM::releaseExecutableMemory): Ditto.
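
A simplified sketch of the stop/resume pairing described in the MarkedAllocator entries above
(condensed from the MarkedAllocator.h diff below, with comments added for exposition):

    inline void MarkedAllocator::stopAllocating()
    {
        ASSERT(!m_lastActiveBlock); // stop/resume must come in pairs
        if (!m_currentBlock)
            return; // we weren't mid-allocation, so resume will be a no-op
        m_currentBlock->stopAllocating(m_freeList); // block records its unconsumed cells, -> Marked
        m_lastActiveBlock = m_currentBlock; // remember where to pick up again
        m_currentBlock = 0;
        m_freeList = MarkedBlock::FreeList();
    }

    inline void MarkedAllocator::resumeAllocating()
    {
        if (!m_lastActiveBlock)
            return; // nothing was stopped
        m_freeList = m_lastActiveBlock->resumeAllocating(); // re-sweeps if an m_newlyAllocated Bitmap exists
        m_currentBlock = m_lastActiveBlock;
        m_lastActiveBlock = 0;
    }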


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@155891 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 27bcb581
JavaScriptCore.xcodeproj/project.pbxproj
@@ -649,6 +649,7 @@
 2600B5A6152BAAA70091EE5F /* JSStringJoiner.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 2600B5A4152BAAA70091EE5F /* JSStringJoiner.cpp */; };
 2600B5A7152BAAA70091EE5F /* JSStringJoiner.h in Headers */ = {isa = PBXBuildFile; fileRef = 2600B5A5152BAAA70091EE5F /* JSStringJoiner.h */; };
 2A48D1911772365B00C65A5F /* APICallbackFunction.h in Headers */ = {isa = PBXBuildFile; fileRef = C211B574176A224D000E2A23 /* APICallbackFunction.h */; };
+2AD8932B17E3868F00668276 /* HeapIterationScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 2AD8932917E3868F00668276 /* HeapIterationScope.h */; };
 371D842D17C98B6E00ECF994 /* libz.dylib in Frameworks */ = {isa = PBXBuildFile; fileRef = 371D842C17C98B6E00ECF994 /* libz.dylib */; };
 41359CF30FDD89AD00206180 /* DateConversion.h in Headers */ = {isa = PBXBuildFile; fileRef = D21202290AD4310C00ED79B6 /* DateConversion.h */; };
 4443AE3316E188D90076F110 /* Foundation.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 51F0EB6105C86C6B00E6DF1B /* Foundation.framework */; };
@@ -1812,6 +1813,7 @@
 1CAA8B4B0D32C39A0041BCFF /* JavaScriptCore.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JavaScriptCore.h; sourceTree = "<group>"; };
 2600B5A4152BAAA70091EE5F /* JSStringJoiner.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSStringJoiner.cpp; sourceTree = "<group>"; };
 2600B5A5152BAAA70091EE5F /* JSStringJoiner.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSStringJoiner.h; sourceTree = "<group>"; };
+2AD8932917E3868F00668276 /* HeapIterationScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapIterationScope.h; sourceTree = "<group>"; };
 371D842C17C98B6E00ECF994 /* libz.dylib */ = {isa = PBXFileReference; lastKnownFileType = "compiled.mach-o.dylib"; name = libz.dylib; path = usr/lib/libz.dylib; sourceTree = SDKROOT; };
 449097EE0F8F81B50076A327 /* FeatureDefines.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = FeatureDefines.xcconfig; sourceTree = "<group>"; };
 451539B812DC994500EF7AC4 /* Yarr.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Yarr.h; path = yarr/Yarr.h; sourceTree = "<group>"; };
@@ -2850,6 +2852,7 @@
 14150132154BB13F005D8C98 /* WeakSetInlines.h */,
 0FC8150814043BCA00CFA603 /* WriteBarrierSupport.cpp */,
 0FC8150914043BD200CFA603 /* WriteBarrierSupport.h */,
+2AD8932917E3868F00668276 /* HeapIterationScope.h */,
 );
 path = heap;
 sourceTree = "<group>";
@@ -4028,6 +4031,7 @@
 86EC9DD11328DF82002B2AD7 /* DFGRegisterBank.h in Headers */,
 0F766D4415B2A3C0008F363E /* DFGRegisterSet.h in Headers */,
 86BB09C1138E381B0056702F /* DFGRepatch.h in Headers */,
+2AD8932B17E3868F00668276 /* HeapIterationScope.h in Headers */,
 A77A424317A0BBFD00A8DB81 /* DFGSafeToExecute.h in Headers */,
 A741017F179DAF80002EB8BA /* DFGSaneStringGetByValSlowPathGenerator.h in Headers */,
 86ECA3FA132DF25A002B2AD7 /* DFGScoreBoard.h in Headers */,
debugger/Debugger.cpp
@@ -23,6 +23,7 @@
 #include "Debugger.h"
 #include "Error.h"
+#include "HeapIterationScope.h"
 #include "Interpreter.h"
 #include "JSFunction.h"
 #include "JSGlobalObject.h"
@@ -122,7 +123,8 @@ void Debugger::recompileAllJSFunctions(VM* vm)
     vm->prepareToDiscardCode();

     Recompiler recompiler(this);
-    vm->heap.objectSpace().forEachLiveCell(recompiler);
+    HeapIterationScope iterationScope(vm->heap);
+    vm->heap.objectSpace().forEachLiveCell(iterationScope, recompiler);
 }

 JSValue evaluateInGlobalCallFrame(const String& script, JSValue& exception, JSGlobalObject* globalObject)
heap/Heap.cpp
@@ -29,6 +29,7 @@
 #include "DFGWorklist.h"
 #include "GCActivityCallback.h"
 #include "GCIncomingRefCountedSetInlines.h"
+#include "HeapIterationScope.h"
 #include "HeapRootVisitor.h"
 #include "HeapStatistics.h"
 #include "IncrementalSweeper.h"
@@ -415,9 +416,14 @@ inline JSStack& Heap::stack()
     return m_vm->interpreter->stack();
 }

-void Heap::canonicalizeCellLivenessData()
+void Heap::willStartIterating()
 {
-    m_objectSpace.canonicalizeCellLivenessData();
+    m_objectSpace.willStartIterating();
 }

+void Heap::didFinishIterating()
+{
+    m_objectSpace.didFinishIterating();
+}
+
 void Heap::getConservativeRegisterRoots(HashSet<JSCell*>& roots)
@@ -662,7 +668,8 @@ size_t Heap::protectedGlobalObjectCount()

 size_t Heap::globalObjectCount()
 {
-    return m_objectSpace.forEachLiveCell<CountIfGlobalObject>();
+    HeapIterationScope iterationScope(*this);
+    return m_objectSpace.forEachLiveCell<CountIfGlobalObject>(iterationScope);
 }

 size_t Heap::protectedObjectCount()
@@ -677,7 +684,8 @@ PassOwnPtr<TypeCountSet> Heap::protectedObjectTypeCounts()

 PassOwnPtr<TypeCountSet> Heap::objectTypeCounts()
 {
-    return m_objectSpace.forEachLiveCell<RecordType>();
+    HeapIterationScope iterationScope(*this);
+    return m_objectSpace.forEachLiveCell<RecordType>(iterationScope);
 }

 void Heap::deleteAllCompiledCode()
@@ -763,8 +771,8 @@ void Heap::collect(SweepToggle sweepToggle)
     }

     {
-        GCPHASE(Canonicalize);
-        m_objectSpace.canonicalizeCellLivenessData();
+        GCPHASE(StopAllocation);
+        m_objectSpace.stopAllocating();
     }

     markRoots();
@@ -875,7 +883,8 @@ bool Heap::collectIfNecessaryOrDefer()

 void Heap::markDeadObjects()
 {
-    m_objectSpace.forEachDeadCell<MarkObject>();
+    HeapIterationScope iterationScope(*this);
+    m_objectSpace.forEachDeadCell<MarkObject>(iterationScope);
 }

 void Heap::setActivityCallback(PassOwnPtr<GCActivityCallback> activityCallback)
@@ -954,7 +963,8 @@ void Heap::zombifyDeadObjects()
 {
     // Sweep now because destructors will crash once we're zombified.
     m_objectSpace.sweep();
-    m_objectSpace.forEachDeadCell<Zombify>();
+    HeapIterationScope iterationScope(*this);
+    m_objectSpace.forEachDeadCell<Zombify>(iterationScope);
 }

 void Heap::incrementDeferralDepth()
heap/Heap.h
@@ -166,7 +166,8 @@ namespace JSC {
     HandleSet* handleSet() { return &m_handleSet; }
     HandleStack* handleStack() { return &m_handleStack; }

-    void canonicalizeCellLivenessData();
+    void willStartIterating();
+    void didFinishIterating();

     void getConservativeRegisterRoots(HashSet<JSCell*>& roots);

     double lastGCLength() { return m_lastGCLength; }
heap/HeapIterationScope.h (new file)
/*
 * Copyright (C) 2013 Apple Inc. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
 * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
 * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
 * THE POSSIBILITY OF SUCH DAMAGE.
 */

#ifndef HeapIterationScope_h
#define HeapIterationScope_h

#include "Heap.h"
#include <wtf/Noncopyable.h>

namespace JSC {

class HeapIterationScope {
    WTF_MAKE_NONCOPYABLE(HeapIterationScope);
public:
    HeapIterationScope(Heap&);
    ~HeapIterationScope();

private:
    Heap& m_heap;
};

inline HeapIterationScope::HeapIterationScope(Heap& heap)
    : m_heap(heap)
{
    m_heap.willStartIterating();
}

inline HeapIterationScope::~HeapIterationScope()
{
    m_heap.didFinishIterating();
}

} // namespace JSC

#endif // HeapIterationScope_h
heap/HeapStatistics.cpp
@@ -27,6 +27,7 @@
 #include "HeapStatistics.h"

 #include "Heap.h"
+#include "HeapIterationScope.h"
 #include "JSObject.h"
 #include "Operations.h"
 #include "Options.h"
@@ -235,7 +236,10 @@ void HeapStatistics::showObjectStatistics(Heap* heap)
     dataLogF("pause time: %lfms\n\n", heap->m_lastGCLength);

     StorageStatistics storageStatistics;
-    heap->m_objectSpace.forEachLiveCell(storageStatistics);
+    {
+        HeapIterationScope iterationScope(*heap);
+        heap->m_objectSpace.forEachLiveCell(iterationScope, storageStatistics);
+    }
     dataLogF("wasted .property storage: %ldkB (%ld%%)\n",
         static_cast<long>(
             (storageStatistics.storageCapacity() - storageStatistics.storageSize()) / KB),
heap/MarkedAllocator.cpp
@@ -30,15 +30,21 @@ bool MarkedAllocator::isPagedOut(double deadline)
 inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
 {
     if (!m_freeList.head) {
+        if (m_currentBlock) {
+            ASSERT(m_currentBlock == m_blocksToSweep);
+            m_currentBlock->didConsumeFreeList();
+            m_blocksToSweep = m_currentBlock->next();
+        }
+
         for (MarkedBlock*& block = m_blocksToSweep; block; block = block->next()) {
             MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
             if (!freeList.head) {
-                block->didConsumeFreeList();
+                block->didConsumeEmptyFreeList();
                 continue;
             }

             if (bytes > block->cellSize()) {
-                block->canonicalizeCellLivenessData(freeList);
+                block->stopAllocating(freeList);
                 continue;
             }
@@ -76,6 +82,7 @@ void* MarkedAllocator::allocateSlowCase(size_t bytes)
     ASSERT(m_heap->m_operationInProgress == NoOperation);
 #endif

+    ASSERT(!m_markedSpace->isIterating());
     ASSERT(!m_freeList.head);
     m_heap->didAllocate(m_freeList.bytes);
heap/MarkedAllocator.h
@@ -22,15 +22,16 @@ public:
     MarkedAllocator();
     void reset();
-    void canonicalizeCellLivenessData();
+    void stopAllocating();
+    void resumeAllocating();
     size_t cellSize() { return m_cellSize; }
     MarkedBlock::DestructorType destructorType() { return m_destructorType; }
     void* allocate(size_t);
     Heap* heap() { return m_heap; }

-    MarkedBlock* takeCanonicalizedBlock()
+    MarkedBlock* takeLastActiveBlock()
     {
-        MarkedBlock* block = m_canonicalizedBlock;
-        m_canonicalizedBlock = 0;
+        MarkedBlock* block = m_lastActiveBlock;
+        m_lastActiveBlock = 0;
         return block;
     }
@@ -50,7 +51,7 @@ private:
     MarkedBlock::FreeList m_freeList;
     MarkedBlock* m_currentBlock;
-    MarkedBlock* m_canonicalizedBlock;
+    MarkedBlock* m_lastActiveBlock;
     MarkedBlock* m_blocksToSweep;
     DoublyLinkedList<MarkedBlock> m_blockList;
     size_t m_cellSize;
@@ -66,7 +67,7 @@ inline ptrdiff_t MarkedAllocator::offsetOfFreeListHead()
 inline MarkedAllocator::MarkedAllocator()
     : m_currentBlock(0)
-    , m_canonicalizedBlock(0)
+    , m_lastActiveBlock(0)
     , m_blocksToSweep(0)
     , m_cellSize(0)
     , m_destructorType(MarkedBlock::None)
@@ -103,24 +104,36 @@ inline void* MarkedAllocator::allocate(size_t bytes)
 inline void MarkedAllocator::reset()
 {
+    m_lastActiveBlock = 0;
     m_currentBlock = 0;
     m_freeList = MarkedBlock::FreeList();
     m_blocksToSweep = m_blockList.head();
 }

-inline void MarkedAllocator::canonicalizeCellLivenessData()
+inline void MarkedAllocator::stopAllocating()
 {
+    ASSERT(!m_lastActiveBlock);
     if (!m_currentBlock) {
         ASSERT(!m_freeList.head);
         return;
     }

-    m_currentBlock->canonicalizeCellLivenessData(m_freeList);
-    m_canonicalizedBlock = m_currentBlock;
+    m_currentBlock->stopAllocating(m_freeList);
+    m_lastActiveBlock = m_currentBlock;
     m_currentBlock = 0;
     m_freeList = MarkedBlock::FreeList();
 }

+inline void MarkedAllocator::resumeAllocating()
+{
+    if (!m_lastActiveBlock)
+        return;
+
+    m_freeList = m_lastActiveBlock->resumeAllocating();
+    m_currentBlock = m_lastActiveBlock;
+    m_lastActiveBlock = 0;
+}
+
 template <typename Functor> inline void MarkedAllocator::forEachBlock(Functor& functor)
 {
     MarkedBlock* next;
heap/MarkedBlock.cpp
@@ -161,7 +161,7 @@ private:
     MarkedBlock* m_block;
 };

-void MarkedBlock::canonicalizeCellLivenessData(const FreeList& freeList)
+void MarkedBlock::stopAllocating(const FreeList& freeList)
 {
     HEAP_LOG_BLOCK_STATE_TRANSITION(this);
     FreeCell* head = freeList.head;
@@ -199,4 +199,20 @@ void MarkedBlock::canonicalizeCellLivenessData(const FreeList& freeList)
     m_state = Marked;
 }

+MarkedBlock::FreeList MarkedBlock::resumeAllocating()
+{
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+    ASSERT(m_state == Marked);
+
+    if (!m_newlyAllocated) {
+        // We didn't have to create a "newly allocated" bitmap. That means we were already Marked
+        // when we last stopped allocation, so return an empty free list and stay in the Marked state.
+        return FreeList();
+    }
+
+    // Re-create our free list from before stopping allocation.
+    return sweep(SweepToFreeList);
+}
+
 } // namespace JSC
heap/MarkedBlock.h
@@ -132,7 +132,9 @@ namespace JSC {
     // cell liveness data. To restore accurate cell liveness data, call one
     // of these functions:
     void didConsumeFreeList(); // Call this once you've allocated all the items in the free list.
-    void canonicalizeCellLivenessData(const FreeList&);
+    void stopAllocating(const FreeList&);
+    FreeList resumeAllocating(); // Call this if you canonicalized a block for some non-collection related purpose.
+    void didConsumeEmptyFreeList(); // Call this if you sweep a block, but the returned FreeList is empty.

     // Returns true if the "newly allocated" bitmap was non-null
     // and was successfully cleared and false otherwise.
@@ -277,6 +279,19 @@ namespace JSC {
     m_state = Allocated;
 }

+inline void MarkedBlock::didConsumeEmptyFreeList()
+{
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+    ASSERT(!m_newlyAllocated);
+#ifndef NDEBUG
+    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell)
+        ASSERT(m_marks.get(i));
+#endif
+    ASSERT(m_state == FreeListed);
+    m_state = Marked;
+}
+
 inline void MarkedBlock::clearMarks()
 {
     HEAP_LOG_BLOCK_STATE_TRANSITION(this);
heap/MarkedSpace.cpp
@@ -80,6 +80,7 @@ struct ReapWeakSet : MarkedBlock::VoidFunctor {
 MarkedSpace::MarkedSpace(Heap* heap)
     : m_heap(heap)
     , m_capacity(0)
+    , m_isIterating(false)
 {
     for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
         allocatorFor(cellSize).init(heap, this, cellSize, MarkedBlock::None);
@@ -110,7 +111,7 @@ struct LastChanceToFinalize : MarkedBlock::VoidFunctor {
 void MarkedSpace::lastChanceToFinalize()
 {
-    canonicalizeCellLivenessData();
+    stopAllocating();
     forEachBlock<LastChanceToFinalize>();
 }
@@ -150,23 +151,51 @@ void MarkedSpace::reapWeakSets()
     forEachBlock<ReapWeakSet>();
 }

-void MarkedSpace::canonicalizeCellLivenessData()
+template <typename Functor>
+void MarkedSpace::forEachAllocator()
+{
+    Functor functor;
+    forEachAllocator(functor);
+}
+
+template <typename Functor>
+void MarkedSpace::forEachAllocator(Functor& functor)
 {
     for (size_t cellSize = preciseStep; cellSize <= preciseCutoff; cellSize += preciseStep) {
-        allocatorFor(cellSize).canonicalizeCellLivenessData();
-        normalDestructorAllocatorFor(cellSize).canonicalizeCellLivenessData();
-        immortalStructureDestructorAllocatorFor(cellSize).canonicalizeCellLivenessData();
+        functor(allocatorFor(cellSize));
+        functor(normalDestructorAllocatorFor(cellSize));
+        functor(immortalStructureDestructorAllocatorFor(cellSize));
     }

     for (size_t cellSize = impreciseStep; cellSize <= impreciseCutoff; cellSize += impreciseStep) {
-        allocatorFor(cellSize).canonicalizeCellLivenessData();
-        normalDestructorAllocatorFor(cellSize).canonicalizeCellLivenessData();
-        immortalStructureDestructorAllocatorFor(cellSize).canonicalizeCellLivenessData();
+        functor(allocatorFor(cellSize));
+        functor(normalDestructorAllocatorFor(cellSize));
+        functor(immortalStructureDestructorAllocatorFor(cellSize));
     }

-    m_normalSpace.largeAllocator.canonicalizeCellLivenessData();
-    m_normalDestructorSpace.largeAllocator.canonicalizeCellLivenessData();
-    m_immortalStructureDestructorSpace.largeAllocator.canonicalizeCellLivenessData();
+    functor(m_normalSpace.largeAllocator);
+    functor(m_normalDestructorSpace.largeAllocator);
+    functor(m_immortalStructureDestructorSpace.largeAllocator);
 }

+struct StopAllocatingFunctor {
+    void operator()(MarkedAllocator& allocator) { allocator.stopAllocating(); }
+};
+
+void MarkedSpace::stopAllocating()
+{
+    ASSERT(!isIterating());
+    forEachAllocator<StopAllocatingFunctor>();
+}
+
+struct ResumeAllocatingFunctor {
+    void operator()(MarkedAllocator& allocator) { allocator.resumeAllocating(); }
+};
+
+void MarkedSpace::resumeAllocating()
+{
+    ASSERT(isIterating());
+    forEachAllocator<ResumeAllocatingFunctor>();
+}
+
 bool MarkedSpace::isPagedOut(double deadline)
@@ -245,15 +274,15 @@ struct VerifyNewlyAllocated : MarkedBlock::VoidFunctor {
 void MarkedSpace::clearNewlyAllocated()
 {
     for (size_t i = 0; i < preciseCount; ++i) {
-        clearNewlyAllocatedInBlock(m_normalSpace.preciseAllocators[i].takeCanonicalizedBlock());
-        clearNewlyAllocatedInBlock(m_normalDestructorSpace.preciseAllocators[i].takeCanonicalizedBlock());
-        clearNewlyAllocatedInBlock(m_immortalStructureDestructorSpace.preciseAllocators[i].takeCanonicalizedBlock());
+        clearNewlyAllocatedInBlock(m_normalSpace.preciseAllocators[i].takeLastActiveBlock());
+        clearNewlyAllocatedInBlock(m_normalDestructorSpace.preciseAllocators[i].takeLastActiveBlock());
+        clearNewlyAllocatedInBlock(m_immortalStructureDestructorSpace.preciseAllocators[i].takeLastActiveBlock());
     }

     for (size_t i = 0; i < impreciseCount; ++i) {
-        clearNewlyAllocatedInBlock(m_normalSpace.impreciseAllocators[i].takeCanonicalizedBlock());
-        clearNewlyAllocatedInBlock(m_normalDestructorSpace.impreciseAllocators[i].takeCanonicalizedBlock());
-        clearNewlyAllocatedInBlock(m_immortalStructureDestructorSpace.impreciseAllocators[i].takeCanonicalizedBlock());
+        clearNewlyAllocatedInBlock(m_normalSpace.impreciseAllocators[i].takeLastActiveBlock());
+        clearNewlyAllocatedInBlock(m_normalDestructorSpace.impreciseAllocators[i].takeLastActiveBlock());
+        clearNewlyAllocatedInBlock(m_immortalStructureDestructorSpace.impreciseAllocators[i].takeLastActiveBlock());
     }

     // We have to iterate all of the blocks in the large allocators because they are
@@ -270,4 +299,18 @@ void MarkedSpace::clearNewlyAllocated()
 #endif
 }

+void MarkedSpace::willStartIterating()
+{
+    ASSERT(!isIterating());
+    stopAllocating();
+    m_isIterating = true;
+}
+
+void MarkedSpace::didFinishIterating()
+{
+    ASSERT(isIterating());
+    resumeAllocating();
+    m_isIterating = false;
+}
+
 } // namespace JSC
heap/MarkedSpace.h
@@ -37,6 +37,7 @@
 namespace JSC {

 class Heap;
+class HeapIterationScope;
 class JSCell;
 class LiveObjectIterator;
 class LLIntOffsetsExtractor;
@@ -80,15 +81,20 @@ public:
     void reapWeakSets();

     MarkedBlockSet& blocks() { return m_blocks; }

-    void canonicalizeCellLivenessData();
+    void willStartIterating();
+    bool isIterating() { return m_isIterating; }
+    void didFinishIterating();
+
+    void stopAllocating();
+    void resumeAllocating(); // If we just stopped allocation but we didn't do a collection, we need to resume allocation.

     typedef HashSet<MarkedBlock*>::iterator BlockIterator;

-    template<typename Functor> typename Functor::ReturnType forEachLiveCell(Functor&);
-    template<typename Functor> typename Functor::ReturnType forEachLiveCell();
-    template<typename Functor> typename Functor::ReturnType forEachDeadCell(Functor&);
-    template<typename Functor> typename Functor::ReturnType forEachDeadCell();
+    template<typename Functor> typename Functor::ReturnType forEachLiveCell(HeapIterationScope&, Functor&);
+    template<typename Functor> typename Functor::ReturnType forEachLiveCell(HeapIterationScope&);
+    template<typename Functor> typename Functor::ReturnType forEachDeadCell(HeapIterationScope&, Functor&);
+    template<typename Functor> typename Functor::ReturnType forEachDeadCell(HeapIterationScope&);
     template<typename Functor> typename Functor::ReturnType forEachBlock(Functor&);
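
For reference, a hedged sketch of the functor protocol these templates assume (CountLiveCells is
a made-up example, not a JSC class; the ReturnType/returnValue() shape matches how
CountIfGlobalObject and RecordType are used in the Heap.cpp diff above):

    struct CountLiveCells {
        typedef size_t ReturnType;

        CountLiveCells() : m_count(0) { }
        void operator()(JSCell*) { ++m_count; } // invoked once per live cell
        ReturnType returnValue() { return m_count; } // forEachLiveCell returns this

        size_t m_count;
    };

    // Usage, assuming a Heap& heap:
    //     HeapIterationScope iterationScope(heap);
    //     size_t liveCells = heap.objectSpace().forEachLiveCell<CountLiveCells>(iterationScope);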