Commit 6159e5f9 authored 2012-09-10 by ggaren@apple.com

Added large allocation support to MarkedSpace

https://bugs.webkit.org/show_bug.cgi?id=96214

Originally reviewed by Oliver Hunt, then I added a design revision
suggested by Phil Pizlo.

I expanded the imprecise size classes to cover up to 32KB, then added
an mmap-based allocator for everything bigger. There's a lot of tuning
we could do in these size classes, but currently they're almost
completely unused, so I haven't done any tuning.
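
As a rough illustration of where requests now land, here is a standalone
sketch (not JSC code). It assumes a 64KB MarkedBlock::blockSize, which the
32KB figure above implies, and mirrors the cutoffs in the
MarkedSpace::allocatorFor hunk further down:

    #include <cstddef>
    #include <cstdio>

    // Hypothetical model of the new routing; the real logic is
    // MarkedSpace::allocatorFor. preciseCutoff is 512 bytes and
    // impreciseCutoff is MarkedBlock::blockSize / 2 (32KB for 64KB blocks).
    static const size_t kBlockSize = 64 * 1024;            // assumed block size
    static const size_t kPreciseCutoff = 512;
    static const size_t kImpreciseCutoff = kBlockSize / 2;

    const char* allocatorKindFor(size_t bytes)
    {
        if (bytes <= kPreciseCutoff)
            return "precise size class";
        if (bytes <= kImpreciseCutoff)
            return "imprecise size class";
        return "large allocator";                          // one oversize block per object
    }

    int main()
    {
        printf("%6zu bytes -> %s\n", (size_t)64, allocatorKindFor(64));
        printf("%6zu bytes -> %s\n", (size_t)4096, allocatorKindFor(4096));
        printf("%6zu bytes -> %s\n", (size_t)100000, allocatorKindFor(100000));
        return 0;
    }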

Subtle point: the large allocator is a degenerate case of our free list
logic. Its list only ever contains zero or one items.
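
To see why, here is a toy model (not JSC code; the page and block sizes are
illustrative assumptions): an oversize block is rounded up to hold exactly one
cell, so a sweep can put at most one cell on the free list, and the next
allocation drains it again.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Assumed sizes for illustration only.
    static const size_t kPageSize = 4 * 1024;
    static const size_t kBlockSize = 64 * 1024;

    static size_t roundUpToPage(size_t bytes)
    {
        return (bytes + kPageSize - 1) & ~(kPageSize - 1);
    }

    // Every cell in a freshly swept, empty block goes on the free list.
    static size_t freeCellsAfterSweep(size_t cellSize, size_t blockCapacity)
    {
        return blockCapacity / cellSize;
    }

    int main()
    {
        // A normal block mixes many cells of a single size class...
        printf("normal block:   %zu free cells\n", freeCellsAfterSweep(1024, kBlockSize));

        // ...but an oversize block has room for exactly its one payload.
        size_t payload = 100 * 1024;
        size_t capacity = std::max(kBlockSize, roundUpToPage(payload));
        printf("oversize block: %zu free cell(s)\n", freeCellsAfterSweep(payload, capacity));
        return 0;
    }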

* heap/Heap.h:
(JSC::Heap::allocateStructure): Pipe in size information.

* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::tryAllocateHelper): Handle the case where we
find a free item in the sweep list but the item isn't big enough. This
can happen in the large allocator because it mixes sizes.
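
A toy model of that skip (not the real code; the actual loop is in the
MarkedAllocator.cpp hunk below): a block whose free cells are smaller than the
request has to be passed over rather than treated as a hit.

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Toy blocks: in the large allocator, neighbouring blocks can have
    // different cell sizes, so a free cell is not automatically big enough.
    struct ToyBlock {
        size_t cellSize;
        bool hasFreeCell;
    };

    static const ToyBlock* findBlockFor(const std::vector<ToyBlock>& blocks, size_t bytes)
    {
        for (const ToyBlock& block : blocks) {
            if (!block.hasFreeCell)
                continue;                    // nothing swept free here
            if (bytes > block.cellSize)
                continue;                    // free, but too small: keep sweeping
            return &block;                   // big enough; allocate from this block
        }
        return 0;                            // caller falls through to a fresh block
    }

    int main()
    {
        std::vector<ToyBlock> blocks;
        blocks.push_back(ToyBlock{ 40 * 1024, true });   // too small for the request
        blocks.push_back(ToyBlock{ 96 * 1024, true });   // big enough
        const ToyBlock* block = findBlockFor(blocks, 80 * 1024);
        printf("picked block with %zuKB cells\n", block ? block->cellSize / 1024 : (size_t)0);
        return 0;
    }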

(JSC::MarkedAllocator::tryAllocate):
(JSC::MarkedAllocator::allocateSlowCase): More piping.

(JSC::MarkedAllocator::allocateBlock): Handle the oversize case.
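
The sizing arithmetic, as a standalone sketch (the page size, standard block
size, and header size here are illustrative assumptions; the real computation
is in the allocateBlock hunk below):

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>

    // Assumed constants for illustration only.
    static const size_t kPageSize = 4 * 1024;
    static const size_t kStandardBlockSize = 64 * 1024;    // MarkedBlock::blockSize
    static const size_t kBlockHeaderSize = 256;             // stand-in for sizeof(MarkedBlock)

    static size_t roundUpToMultipleOf(size_t divisor, size_t x)
    {
        return (x + divisor - 1) / divisor * divisor;
    }

    // Requests that fit use a pooled standard block; bigger ones get their own
    // page-rounded allocation large enough for the header plus one cell.
    static size_t blockSizeFor(size_t bytes)
    {
        size_t minAllocationSize = roundUpToMultipleOf(kPageSize, kBlockHeaderSize + bytes);
        return std::max(kStandardBlockSize, minAllocationSize);
    }

    int main()
    {
        printf("1KB cell   -> %zuKB block (pooled)\n", blockSizeFor(1024) / 1024);
        printf("100KB cell -> %zuKB block (oversize)\n", blockSizeFor(100 * 1024) / 1024);
        return 0;
    }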

(JSC::MarkedAllocator::addBlock): I moved the call to didAddBlock here
because it made more sense.

* heap/MarkedAllocator.h:
(MarkedAllocator):
(JSC::MarkedAllocator::allocate):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::canonicalizeCellLivenessData):
(JSC::MarkedSpace::isPagedOut):
(JSC::MarkedSpace::freeBlock):
* heap/MarkedSpace.h:
(MarkedSpace):
(JSC::MarkedSpace::allocatorFor):
(JSC::MarkedSpace::destructorAllocatorFor):
(JSC::MarkedSpace::allocateWithoutDestructor):
(JSC::MarkedSpace::allocateWithDestructor):
(JSC::MarkedSpace::allocateStructure):
(JSC::MarkedSpace::forEachBlock):
* runtime/Structure.h:
(JSC::Structure): More piping.


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@128141 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 802c0122
@@ -63,7 +63,7 @@ EXPORTS
?addSlowCase@Identifier@JSC@@CA?AV?$PassRefPtr@VStringImpl@WTF@@@WTF@@PAVExecState@2@PAVStringImpl@4@@Z
?addSlowCase@Identifier@JSC@@CA?AV?$PassRefPtr@VStringImpl@WTF@@@WTF@@PAVJSGlobalData@2@PAVStringImpl@4@@Z
?addStaticGlobals@JSGlobalObject@JSC@@IAEXPAUGlobalPropertyInfo@12@H@Z
?allocateSlowCase@MarkedAllocator@JSC@@AAEPAXXZ
?allocateSlowCase@MarkedAllocator@JSC@@AAEPAXI@Z
?append@StringBuilder@WTF@@QAEXPBEI@Z
?append@StringBuilder@WTF@@QAEXPB_WI@Z
?appendNumber@StringBuilder@WTF@@QAEXH@Z
@@ -185,7 +185,7 @@ namespace JSC {
void* allocateWithDestructor(size_t);
void* allocateWithoutDestructor(size_t);
void* allocateStructure();
void* allocateStructure(size_t);
static const size_t minExtraCost = 256;
static const size_t maxExtraCost = 1024 * 1024;
@@ -372,9 +372,9 @@ namespace JSC {
return m_objectSpace.allocateWithoutDestructor(bytes);
}
inline void* Heap::allocateStructure()
inline void* Heap::allocateStructure(size_t bytes)
{
return m_objectSpace.allocateStructure();
return m_objectSpace.allocateStructure(bytes);
}
inline CheckedBoolean Heap::tryAllocateStorage(size_t bytes, void** outPtr)
@@ -27,7 +27,7 @@ bool MarkedAllocator::isPagedOut(double deadline)
return false;
}
inline void* MarkedAllocator::tryAllocateHelper()
inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
{
if (!m_freeList.head) {
if (m_onlyContainsStructures && !m_heap->isSafeToSweepStructures()) {
@@ -42,12 +42,20 @@ inline void* MarkedAllocator::tryAllocateHelper()
}
for (MarkedBlock*& block = m_blocksToSweep; block; block = block->next()) {
m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
if (m_freeList.head) {
m_currentBlock = block;
break;
MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
if (!freeList.head) {
block->didConsumeFreeList();
continue;
}
block->didConsumeFreeList();
if (bytes > block->cellSize()) {
block->zapFreeList(freeList);
continue;
}
m_currentBlock = block;
m_freeList = freeList;
break;
}
if (!m_freeList.head) {
@@ -62,16 +70,16 @@ inline void* MarkedAllocator::tryAllocateHelper()
return head;
}
inline void* MarkedAllocator::tryAllocate()
inline void* MarkedAllocator::tryAllocate(size_t bytes)
{
ASSERT(!m_heap->isBusy());
m_heap->m_operationInProgress = Allocation;
void* result = tryAllocateHelper();
void* result = tryAllocateHelper(bytes);
m_heap->m_operationInProgress = NoOperation;
return result;
}
void* MarkedAllocator::allocateSlowCase()
void* MarkedAllocator::allocateSlowCase(size_t bytes)
{
ASSERT(m_heap->globalData()->apiLock().currentThreadIsHoldingLock());
#if COLLECT_ON_EVERY_ALLOCATION
@@ -82,7 +90,7 @@ void* MarkedAllocator::allocateSlowCase()
ASSERT(!m_freeList.head);
m_heap->didAllocate(m_freeList.bytes);
void* result = tryAllocate();
void* result = tryAllocate(bytes);
if (LIKELY(result != 0))
return result;
@@ -90,27 +98,39 @@ void* MarkedAllocator::allocateSlowCase()
if (m_heap->shouldCollect()) {
m_heap->collect(Heap::DoNotSweep);
result = tryAllocate();
result = tryAllocate(bytes);
if (result)
return result;
}
ASSERT(!m_heap->shouldCollect());
MarkedBlock* block = allocateBlock();
MarkedBlock* block = allocateBlock(bytes);
ASSERT(block);
addBlock(block);
result = tryAllocate();
result = tryAllocate(bytes);
ASSERT(result);
return result;
}
MarkedBlock* MarkedAllocator::allocateBlock()
MarkedBlock* MarkedAllocator::allocateBlock(size_t bytes)
{
MarkedBlock* block = MarkedBlock::create(m_heap->blockAllocator().allocate(), m_heap, m_cellSize, m_cellsNeedDestruction, m_onlyContainsStructures);
m_markedSpace->didAddBlock(block);
return block;
size_t minBlockSize = MarkedBlock::blockSize;
size_t minAllocationSize = WTF::roundUpToMultipleOf(WTF::pageSize(), sizeof(MarkedBlock) + bytes);
size_t blockSize = std::max(minBlockSize, minAllocationSize);
size_t cellSize = m_cellSize ? m_cellSize : WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(bytes);
if (blockSize == MarkedBlock::blockSize) {
PageAllocationAligned allocation = m_heap->blockAllocator().allocate();
return MarkedBlock::create(allocation, m_heap, cellSize, m_cellsNeedDestruction, m_onlyContainsStructures);
}
PageAllocationAligned allocation = PageAllocationAligned::allocate(blockSize, MarkedBlock::blockSize, OSAllocator::JSGCHeapPages);
if (!static_cast<bool>(allocation))
CRASH();
return MarkedBlock::create(allocation, m_heap, cellSize, m_cellsNeedDestruction, m_onlyContainsStructures);
}
void MarkedAllocator::addBlock(MarkedBlock* block)
@@ -121,6 +141,7 @@ void MarkedAllocator::addBlock(MarkedBlock* block)
m_blockList.append(block);
m_blocksToSweep = m_currentBlock = block;
m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
m_markedSpace->didAddBlock(block);
}
void MarkedAllocator::removeBlock(MarkedBlock* block)
@@ -25,7 +25,7 @@ public:
size_t cellSize() { return m_cellSize; }
bool cellsNeedDestruction() { return m_cellsNeedDestruction; }
bool onlyContainsStructures() { return m_onlyContainsStructures; }
void* allocate();
void* allocate(size_t);
Heap* heap() { return m_heap; }
template<typename Functor> void forEachBlock(Functor&);
@@ -39,10 +39,10 @@ public:
private:
friend class LLIntOffsetsExtractor;
JS_EXPORT_PRIVATE void* allocateSlowCase();
void* tryAllocate();
void* tryAllocateHelper();
MarkedBlock* allocateBlock();
JS_EXPORT_PRIVATE void* allocateSlowCase(size_t);
void* tryAllocate(size_t);
void* tryAllocateHelper(size_t);
MarkedBlock* allocateBlock(size_t);
MarkedBlock::FreeList m_freeList;
MarkedBlock* m_currentBlock;
@@ -75,12 +75,11 @@ inline void MarkedAllocator::init(Heap* heap, MarkedSpace* markedSpace, size_t c
m_onlyContainsStructures = onlyContainsStructures;
}
inline void* MarkedAllocator::allocate()
inline void* MarkedAllocator::allocate(size_t bytes)
{
MarkedBlock::FreeCell* head = m_freeList.head;
// This is a light-weight fast path to cover the most common case.
if (UNLIKELY(!head))
return allocateSlowCase();
return allocateSlowCase(bytes);
m_freeList.head = head->next;
return head;
@@ -90,6 +90,7 @@ MarkedSpace::MarkedSpace(Heap* heap)
destructorAllocatorFor(cellSize).init(heap, this, cellSize, true, false);
}
m_largeAllocator.init(heap, this, 0, true, false);
m_structureAllocator.init(heap, this, WTF::roundUpToMultipleOf(32, sizeof(Structure)), true, true);
}
@@ -127,6 +128,7 @@ void MarkedSpace::resetAllocators()
destructorAllocatorFor(cellSize).reset();
}
m_largeAllocator.reset();
m_structureAllocator.reset();
}
@@ -153,6 +155,7 @@ void MarkedSpace::canonicalizeCellLivenessData()
destructorAllocatorFor(cellSize).zapFreeList();
}
m_largeAllocator.zapFreeList();
m_structureAllocator.zapFreeList();
}
@@ -168,6 +171,9 @@ bool MarkedSpace::isPagedOut(double deadline)
return true;
}
if (m_largeAllocator.isPagedOut(deadline))
return true;
if (m_structureAllocator.isPagedOut(deadline))
return true;
@@ -178,7 +184,12 @@ void MarkedSpace::freeBlock(MarkedBlock* block)
{
allocatorFor(block).removeBlock(block);
m_blocks.remove(block);
m_heap->blockAllocator().deallocate(MarkedBlock::destroy(block));
if (block->capacity() == MarkedBlock::blockSize) {
m_heap->blockAllocator().deallocate(MarkedBlock::destroy(block));
return;
}
MarkedBlock::destroy(block).deallocate();
}
void MarkedSpace::freeOrShrinkBlock(MarkedBlock* block)
@@ -80,7 +80,7 @@ public:
MarkedAllocator& destructorAllocatorFor(size_t);
void* allocateWithDestructor(size_t);
void* allocateWithoutDestructor(size_t);
void* allocateStructure();
void* allocateStructure(size_t);
void resetAllocators();
@@ -115,15 +115,15 @@ public:
private:
friend class LLIntOffsetsExtractor;
// [ 32... 256 ]
// [ 32... 512 ]
static const size_t preciseStep = MarkedBlock::atomSize;
static const size_t preciseCutoff = 256;
static const size_t preciseCutoff = 512;
static const size_t preciseCount = preciseCutoff / preciseStep;
// [ 512... 2048 ]
static const size_t impreciseStep = preciseCutoff;
static const size_t impreciseCutoff = maxCellSize;
// [ 1024... blockSize ]
static const size_t impreciseStep = 2 * preciseCutoff;
static const size_t impreciseCutoff = MarkedBlock::blockSize / 2;
static const size_t impreciseCount = impreciseCutoff / impreciseStep;
struct Subspace {
@@ -133,6 +133,7 @@ private:
Subspace m_destructorSpace;
Subspace m_normalSpace;
MarkedAllocator m_largeAllocator;
MarkedAllocator m_structureAllocator;
Heap* m_heap;
@@ -162,10 +163,12 @@ inline MarkedAllocator& MarkedSpace::firstAllocator()
inline MarkedAllocator& MarkedSpace::allocatorFor(size_t bytes)
{
ASSERT(bytes && bytes <= maxCellSize);
ASSERT(bytes);
if (bytes <= preciseCutoff)
return m_normalSpace.preciseAllocators[(bytes - 1) / preciseStep];
return m_normalSpace.impreciseAllocators[(bytes - 1) / impreciseStep];
if (bytes <= impreciseCutoff)
return m_normalSpace.impreciseAllocators[(bytes - 1) / impreciseStep];
return m_largeAllocator;
}
inline MarkedAllocator& MarkedSpace::allocatorFor(MarkedBlock* block)
@@ -181,25 +184,27 @@ inline MarkedAllocator& MarkedSpace::allocatorFor(MarkedBlock* block)
inline MarkedAllocator& MarkedSpace::destructorAllocatorFor(size_t bytes)
{
ASSERT(bytes && bytes <= maxCellSize);
ASSERT(bytes);
if (bytes <= preciseCutoff)
return m_destructorSpace.preciseAllocators[(bytes - 1) / preciseStep];
return m_destructorSpace.impreciseAllocators[(bytes - 1) / impreciseStep];
if (bytes <= impreciseCutoff)
return m_normalSpace.impreciseAllocators[(bytes - 1) / impreciseStep];
return m_largeAllocator;
}
inline void* MarkedSpace::allocateWithoutDestructor(size_t bytes)
{
return allocatorFor(bytes).allocate();
return allocatorFor(bytes).allocate(bytes);
}
inline void* MarkedSpace::allocateWithDestructor(size_t bytes)
{
return destructorAllocatorFor(bytes).allocate();
return destructorAllocatorFor(bytes).allocate(bytes);
}
inline void* MarkedSpace::allocateStructure()
inline void* MarkedSpace::allocateStructure(size_t bytes)
{
return m_structureAllocator.allocate();
return m_structureAllocator.allocate(bytes);
}
template <typename Functor> inline typename Functor::ReturnType MarkedSpace::forEachBlock(Functor& functor)
@@ -214,6 +219,7 @@ template <typename Functor> inline typename Functor::ReturnType MarkedSpace::for
m_destructorSpace.impreciseAllocators[i].forEachBlock(functor);
}
m_largeAllocator.forEachBlock(functor);
m_structureAllocator.forEachBlock(functor);
return functor.returnValue();
@@ -460,7 +460,7 @@ namespace JSC {
ASSERT(!heap.globalData()->isInitializingObject());
heap.globalData()->setInitializingObjectClass(&Structure::s_info);
#endif
JSCell* result = static_cast<JSCell*>(heap.allocateStructure());
JSCell* result = static_cast<JSCell*>(heap.allocateStructure(sizeof(Structure)));
result->clearStructure();
return result;
}