Commit 5c018e7e authored by mhahnenberg@apple.com

BlockAllocator should use regions as its VM allocation abstraction

https://bugs.webkit.org/show_bug.cgi?id=99107

Reviewed by Geoffrey Garen.

Currently the BlockAllocator allocates a single block at a time directly from the OS. Our block
allocations are on the large-ish side (64 KB) to amortize across many allocations the expense of
mapping new virtual memory from the OS. These large blocks are then shared between the MarkedSpace
and the CopiedSpace. This design makes it difficult to vary the size of the blocks in different
parts of the Heap while still allowing us to amortize the VM allocation costs.

We should redesign the BlockAllocator so that it has a layer of indirection between blocks that are
used by the allocator/collector and our primary unit of VM allocation from the OS. In particular,
the BlockAllocator should allocate Regions of virtual memory from the OS, which are then subdivided
into one or more Blocks to be used in our custom allocators. This design has the following nice properties:

1) We can remove the knowledge of PageAllocationAligned from HeapBlocks. Each HeapBlock will now
   only know what Region it belongs to. The Region maintains all the metadata for how to allocate
   and deallocate virtual memory from the OS.

2) We can easily allocate in larger chunks than we need to satisfy a particular request for a Block.
   We can then continue to amortize our VM allocation costs while allowing for smaller block sizes,
   which should increase locality in the mutator when allocating, lazy sweeping, etc.

3) By encapsulating the logic of where our memory comes from inside of the Region class, we can more
   easily transition over to allocating VM from a specific range of pre-reserved address space. This
   will be a necessary step along the way to 32-bit pointers.
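
To make the shape of this design concrete, here is a minimal standalone sketch of the Region idea (a simplification for illustration only: std::malloc and a std::vector free list stand in for the patch's PageAllocationAligned and its placement-new'd DeadBlock list, which appear in the real code in the diff below):

    #include <cassert>
    #include <cstddef>
    #include <cstdlib>
    #include <vector>

    class Region {
    public:
        Region(size_t blockSize, size_t numberOfBlocks)
            : m_memory(static_cast<char*>(std::malloc(blockSize * numberOfBlocks)))
            , m_blockSize(blockSize)
            , m_totalBlocks(numberOfBlocks)
            , m_blocksInUse(0)
        {
            // Seed the free list with every block carved out of the one allocation.
            for (size_t i = 0; i < numberOfBlocks; ++i)
                m_freeBlocks.push_back(m_memory + i * blockSize);
        }

        ~Region()
        {
            assert(!m_blocksInUse); // only empty regions may be destroyed
            std::free(m_memory);
        }

        size_t blockSize() const { return m_blockSize; }
        bool isFull() const { return m_blocksInUse == m_totalBlocks; }
        bool isEmpty() const { return !m_blocksInUse; }

        void* allocate()
        {
            assert(!isFull());
            ++m_blocksInUse;
            void* block = m_freeBlocks.back();
            m_freeBlocks.pop_back();
            return block;
        }

        void deallocate(void* block)
        {
            assert(m_blocksInUse);
            m_freeBlocks.push_back(static_cast<char*>(block));
            --m_blocksInUse;
        }

    private:
        char* m_memory; // the single VM allocation backing every block in the region
        size_t m_blockSize;
        size_t m_totalBlocks;
        size_t m_blocksInUse;
        std::vector<char*> m_freeBlocks;
    };

    int main()
    {
        Region region(16 * 1024, 4); // one 64 KB chunk subdivided into four blocks
        void* block = region.allocate();
        assert(!region.isEmpty());
        region.deallocate(block);
        assert(region.isEmpty());
        return 0;
    }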

This particular patch will not change the size of MarkedBlocks or CopiedBlocks, nor will it change how
much VM we allocate per failed Block request. It only sets up the data structures that we need to make
these changes in future patches.

Most of the changes in this patch relate to the addition of the Region class to be used by the
BlockAllocator and the threading of changes made to BlockAllocator's interface through to the call sites.
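
As an illustration of the interface change at the call sites, the new allocate() is templated on the block type so that T::blockSize can pick the region geometry. A compilable toy of that pattern follows (ToyBlockAllocator, ToyCopiedBlock, and ToyMarkedBlock are stand-ins invented for this sketch, not JSC types):

    #include <cstddef>
    #include <cstdio>
    #include <new>

    struct ToyCopiedBlock { static const size_t blockSize = 64 * 1024; };
    struct ToyMarkedBlock { static const size_t blockSize = 64 * 1024; };

    struct ToyBlockAllocator {
        // The patch's allocate<T>() does Region::create(T::blockSize, T::blockSize, 1):
        // size, alignment, and block count all derive from the requesting block type.
        template <typename T>
        void* allocate()
        {
            size_t blockSize = T::blockSize;
            std::printf("allocating a %zu-byte block\n", blockSize);
            return ::operator new(blockSize);
        }
    };

    int main()
    {
        ToyBlockAllocator allocator;
        void* copied = allocator.allocate<ToyCopiedBlock>(); // would come from a CopiedBlock-sized region
        void* marked = allocator.allocate<ToyMarkedBlock>(); // would come from a MarkedBlock-sized region
        ::operator delete(copied);
        ::operator delete(marked);
        return 0;
    }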

* heap/BlockAllocator.cpp: The BlockAllocator now has three lists that track the three disjoint sets of
Regions that it cares about: empty regions, partially full regions, and completely full regions.
Empty regions have no blocks currently in use and can be freed immediately if the freeing thread
determines they should be. Partial regions have some blocks in use and some blocks free. Partial
regions are preferred over empty regions when recycling blocks in order to mitigate fragmentation
within regions. Completely full regions have no free blocks and cannot be used for further
allocations. Regions move between these three lists as they are created and as their constituent
blocks are allocated and deallocated (see the sketch after this entry).
(JSC::BlockAllocator::BlockAllocator):
(JSC::BlockAllocator::~BlockAllocator):
(JSC::BlockAllocator::releaseFreeRegions):
(JSC::BlockAllocator::waitForRelativeTimeWhileHoldingLock):
(JSC::BlockAllocator::waitForRelativeTime):
(JSC::BlockAllocator::blockFreeingThreadMain):
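
The sketch referenced in the entry above: a standalone walk through the list transitions a two-block region makes as its blocks are handed out and returned (std::list and a bare struct stand in for DoublyLinkedList<Region>; in the real code every transition happens under m_regionLock):

    #include <cassert>
    #include <cstddef>
    #include <list>

    struct Region {
        size_t totalBlocks;
        size_t blocksInUse;
        bool isFull() const { return blocksInUse == totalBlocks; }
        bool isEmpty() const { return !blocksInUse; }
    };

    int main()
    {
        std::list<Region*> emptyRegions, partialRegions, fullRegions;

        Region region = { 2, 0 }; // a region carved into two blocks
        emptyRegions.push_back(&region);

        // Allocation prefers partial regions, then empty ones; a region whose
        // last free block is handed out moves to the full list.
        emptyRegions.remove(&region);
        region.blocksInUse++;
        partialRegions.push_back(&region);

        partialRegions.remove(&region);
        region.blocksInUse++;
        fullRegions.push_back(&region);
        assert(region.isFull());

        // Deallocation walks back: full -> partial -> empty. Only regions on the
        // empty list are candidates for the block-freeing thread to release.
        fullRegions.remove(&region);
        region.blocksInUse--;
        partialRegions.push_back(&region);

        partialRegions.remove(&region);
        region.blocksInUse--;
        emptyRegions.push_back(&region);
        assert(region.isEmpty());
        return 0;
    }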
* heap/BlockAllocator.h:
(JSC):
(DeadBlock):
(JSC::DeadBlock::DeadBlock):
(Region):
(JSC::Region::blockSize):
(JSC::Region::isFull):
(JSC::Region::isEmpty):
(JSC::Region::create): This function is responsible for doing the actual VM allocation. This should be the
only function in the entire JSC object runtime that calls out to the OS for virtual memory allocation.
(JSC::Region::Region):
(JSC::Region::~Region):
(JSC::Region::allocate):
(JSC::Region::deallocate):
(BlockAllocator):
(JSC::BlockAllocator::tryAllocateFromRegion): Helper function that encapsulates checking a particular list
of regions for a free block.
(JSC::BlockAllocator::allocate):
(JSC::BlockAllocator::allocateCustomSize): This function is responsible for allocating one-off custom-size
regions for use in oversize allocations in both the MarkedSpace and the CopiedSpace. These regions are not
tracked by the BlockAllocator; the only pointer to them is in the HeapBlock that is returned. Each such
region contains exactly one block.
(JSC::BlockAllocator::deallocate):
(JSC::BlockAllocator::deallocateCustomSize): This function is responsible for deallocating one-off custom-size
regions. These regions are returned to the OS eagerly.
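
To illustrate the custom-size path numerically: the request is rounded up to a multiple of the alignment and backed by a region holding exactly one block. A self-contained version of that arithmetic (the 4096-byte page size and 100000-byte request are hypothetical; the patch itself uses WTF::roundUpToMultipleOf inside allocateCustomSize):

    #include <cassert>
    #include <cstddef>

    // Same contract as WTF::roundUpToMultipleOf for a power-of-two divisor.
    static size_t roundUpToMultipleOf(size_t divisor, size_t x)
    {
        return (x + divisor - 1) & ~(divisor - 1);
    }

    int main()
    {
        const size_t pageSize = 4096;      // hypothetical page size
        const size_t request = 100000;     // hypothetical oversize allocation
        size_t realSize = roundUpToMultipleOf(pageSize, request);
        assert(realSize == 102400);        // rounded up to 25 pages
        assert(realSize % pageSize == 0);  // the one-block region's size is page-aligned
        return 0;
    }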
* heap/CopiedBlock.h: Re-worked CopiedBlocks to use Regions instead of PageAllocationAligned.
(CopiedBlock):
(JSC::CopiedBlock::createNoZeroFill):
(JSC::CopiedBlock::create):
(JSC::CopiedBlock::CopiedBlock):
(JSC::CopiedBlock::payloadEnd):
(JSC::CopiedBlock::capacity):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::~CopiedSpace):
(JSC::CopiedSpace::tryAllocateOversize):
(JSC::CopiedSpace::tryReallocateOversize):
(JSC::CopiedSpace::doneCopying):
* heap/CopiedSpaceInlineMethods.h:
(JSC::CopiedSpace::allocateBlockForCopyingPhase):
(JSC::CopiedSpace::allocateBlock):
* heap/HeapBlock.h:
(JSC::HeapBlock::destroy):
(JSC::HeapBlock::HeapBlock):
(JSC::HeapBlock::region):
(HeapBlock):
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::allocateBlock):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::create):
(JSC::MarkedBlock::MarkedBlock):
* heap/MarkedBlock.h:
(JSC::MarkedBlock::capacity):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::freeBlock):


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@131132 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent ca5e5e4e
--- a/Source/JavaScriptCore/heap/BlockAllocator.cpp
+++ b/Source/JavaScriptCore/heap/BlockAllocator.cpp
@@ -31,46 +31,46 @@
 namespace JSC {
 
 BlockAllocator::BlockAllocator()
-    : m_numberOfFreeBlocks(0)
+    : m_numberOfEmptyRegions(0)
+    , m_numberOfPartialRegions(0)
     , m_isCurrentlyAllocating(false)
     , m_blockFreeingThreadShouldQuit(false)
     , m_blockFreeingThread(createThread(blockFreeingThreadStartFunc, this, "JavaScriptCore::BlockFree"))
 {
     ASSERT(m_blockFreeingThread);
-    m_freeBlockLock.Init();
+    m_regionLock.Init();
 }
 
 BlockAllocator::~BlockAllocator()
 {
-    releaseFreeBlocks();
+    releaseFreeRegions();
     {
-        MutexLocker locker(m_freeBlockConditionLock);
+        MutexLocker locker(m_emptyRegionConditionLock);
         m_blockFreeingThreadShouldQuit = true;
-        m_freeBlockCondition.broadcast();
+        m_emptyRegionCondition.broadcast();
     }
     waitForThreadCompletion(m_blockFreeingThread);
 }
 
-void BlockAllocator::releaseFreeBlocks()
+void BlockAllocator::releaseFreeRegions()
 {
     while (true) {
-        DeadBlock* block;
+        Region* region;
         {
-            SpinLockHolder locker(&m_freeBlockLock);
-            if (!m_numberOfFreeBlocks)
-                block = 0;
+            SpinLockHolder locker(&m_regionLock);
+            if (!m_numberOfEmptyRegions)
+                region = 0;
             else {
-                block = m_freeBlocks.removeHead();
-                ASSERT(block);
-                m_numberOfFreeBlocks--;
+                region = m_emptyRegions.removeHead();
+                ASSERT(region);
+                m_numberOfEmptyRegions--;
             }
         }
 
-        if (!block)
+        if (!region)
             break;
 
-        DeadBlock::destroy(block).deallocate();
+        delete region;
     }
 }
@@ -79,7 +79,7 @@ void BlockAllocator::waitForRelativeTimeWhileHoldingLock(double relative)
     if (m_blockFreeingThreadShouldQuit)
         return;
 
-    m_freeBlockCondition.timedWait(m_freeBlockConditionLock, currentTime() + relative);
+    m_emptyRegionCondition.timedWait(m_emptyRegionConditionLock, currentTime() + relative);
 }
 
 void BlockAllocator::waitForRelativeTime(double relative)
@@ -88,7 +88,7 @@ void BlockAllocator::waitForRelativeTime(double relative)
     // frequently. It would only be a bug if this function failed to return
     // when it was asked to do so.
 
-    MutexLocker locker(m_freeBlockConditionLock);
+    MutexLocker locker(m_emptyRegionConditionLock);
     waitForRelativeTimeWhileHoldingLock(relative);
 }
@@ -114,40 +114,40 @@ void BlockAllocator::blockFreeingThreadMain()
         // Now process the list of free blocks. Keep freeing until half of the
         // blocks that are currently on the list are gone. Assume that a size_t
         // field can be accessed atomically.
-        size_t currentNumberOfFreeBlocks = m_numberOfFreeBlocks;
-        if (!currentNumberOfFreeBlocks)
+        size_t currentNumberOfEmptyRegions = m_numberOfEmptyRegions;
+        if (!currentNumberOfEmptyRegions)
             continue;
 
-        size_t desiredNumberOfFreeBlocks = currentNumberOfFreeBlocks / 2;
+        size_t desiredNumberOfEmptyRegions = currentNumberOfEmptyRegions / 2;
 
         while (!m_blockFreeingThreadShouldQuit) {
-            DeadBlock* block;
+            Region* region;
             {
-                SpinLockHolder locker(&m_freeBlockLock);
-                if (m_numberOfFreeBlocks <= desiredNumberOfFreeBlocks)
-                    block = 0;
+                SpinLockHolder locker(&m_regionLock);
+                if (m_numberOfEmptyRegions <= desiredNumberOfEmptyRegions)
+                    region = 0;
                 else {
-                    block = m_freeBlocks.removeHead();
-                    ASSERT(block);
-                    m_numberOfFreeBlocks--;
+                    region = m_emptyRegions.removeHead();
+                    ASSERT(region);
+                    m_numberOfEmptyRegions--;
                }
             }
 
-            if (!block)
+            if (!region)
                 break;
 
-            DeadBlock::destroy(block).deallocate();
+            delete region;
         }
 
         // Sleep until there is actually work to do rather than waking up every second to check.
-        MutexLocker locker(m_freeBlockConditionLock);
-        m_freeBlockLock.Lock();
-        while (!m_numberOfFreeBlocks && !m_blockFreeingThreadShouldQuit) {
-            m_freeBlockLock.Unlock();
-            m_freeBlockCondition.wait(m_freeBlockConditionLock);
-            m_freeBlockLock.Lock();
+        MutexLocker locker(m_emptyRegionConditionLock);
+        m_regionLock.Lock();
+        while (!m_numberOfEmptyRegions && !m_blockFreeingThreadShouldQuit) {
+            m_regionLock.Unlock();
+            m_emptyRegionCondition.wait(m_emptyRegionConditionLock);
+            m_regionLock.Lock();
         }
-        m_freeBlockLock.Unlock();
+        m_regionLock.Unlock();
     }
 }
--- a/Source/JavaScriptCore/heap/BlockAllocator.h
+++ b/Source/JavaScriptCore/heap/BlockAllocator.h
@@ -35,25 +35,94 @@
 namespace JSC {
 
 class CopiedBlock;
 class MarkedBlock;
+class Region;
 
 // Simple allocator to reduce VM cost by holding onto blocks of memory for
 // short periods of time and then freeing them on a secondary thread.
 
 class DeadBlock : public HeapBlock<DeadBlock> {
 public:
-    static DeadBlock* create(const PageAllocationAligned&);
-
-private:
-    DeadBlock(const PageAllocationAligned&);
+    DeadBlock(Region*);
 };
 
-inline DeadBlock::DeadBlock(const PageAllocationAligned& allocation)
-    : HeapBlock<DeadBlock>(allocation)
+inline DeadBlock::DeadBlock(Region* region)
+    : HeapBlock<DeadBlock>(region)
 {
 }
 
-inline DeadBlock* DeadBlock::create(const PageAllocationAligned& allocation)
+class Region : public DoublyLinkedListNode<Region> {
+    friend CLASS_IF_GCC DoublyLinkedListNode<Region>;
+public:
+    ~Region();
+    static Region* create(size_t blockSize, size_t blockAlignment, size_t numberOfBlocks);
+
+    size_t blockSize() const { return m_blockSize; }
+    bool isFull() const { return m_blocksInUse == m_totalBlocks; }
+    bool isEmpty() const { return !m_blocksInUse; }
+
+    DeadBlock* allocate();
+    void deallocate(void*);
+
+private:
+    Region(PageAllocationAligned&, size_t blockSize, size_t totalBlocks);
+
+    PageAllocationAligned m_allocation;
+    size_t m_totalBlocks;
+    size_t m_blocksInUse;
+    size_t m_blockSize;
+    Region* m_prev;
+    Region* m_next;
+    DoublyLinkedList<DeadBlock> m_deadBlocks;
+};
+
+inline Region* Region::create(size_t blockSize, size_t blockAlignment, size_t numberOfBlocks)
+{
+    size_t regionSize = blockSize * numberOfBlocks;
+    PageAllocationAligned allocation = PageAllocationAligned::allocate(regionSize, blockAlignment, OSAllocator::JSGCHeapPages);
+    if (!static_cast<bool>(allocation))
+        CRASH();
+    return new Region(allocation, blockSize, numberOfBlocks);
+}
+
+inline Region::Region(PageAllocationAligned& allocation, size_t blockSize, size_t totalBlocks)
+    : DoublyLinkedListNode<Region>()
+    , m_allocation(allocation)
+    , m_totalBlocks(totalBlocks)
+    , m_blocksInUse(0)
+    , m_blockSize(blockSize)
+    , m_prev(0)
+    , m_next(0)
 {
-    return new(NotNull, allocation.base()) DeadBlock(allocation);
+    ASSERT(allocation);
+    char* start = static_cast<char*>(allocation.base());
+    char* end = start + allocation.size();
+    for (char* current = start; current < end; current += blockSize)
+        m_deadBlocks.append(new (NotNull, current) DeadBlock(this));
+}
+
+inline Region::~Region()
+{
+    ASSERT(!m_blocksInUse);
+    m_allocation.deallocate();
+}
+
+inline DeadBlock* Region::allocate()
+{
+    ASSERT(!isFull());
+    m_blocksInUse++;
+    return m_deadBlocks.removeHead();
+}
+
+inline void Region::deallocate(void* base)
+{
+    ASSERT(base);
+    ASSERT(m_blocksInUse);
+    ASSERT(base >= m_allocation.base() && base < static_cast<char*>(m_allocation.base()) + m_allocation.size());
+    DeadBlock* block = new (NotNull, base) DeadBlock(this);
+    m_deadBlocks.push(block);
+    m_blocksInUse--;
 }
 
 class BlockAllocator {
@@ -61,62 +130,117 @@ public:
     BlockAllocator();
     ~BlockAllocator();
 
-    PageAllocationAligned allocate();
-    void deallocate(PageAllocationAligned);
+    template <typename T> DeadBlock* allocate();
+    DeadBlock* allocateCustomSize(size_t blockSize, size_t blockAlignment);
+    template <typename T> void deallocate(T*);
+    template <typename T> void deallocateCustomSize(T*);
 
 private:
+    DeadBlock* tryAllocateFromRegion(DoublyLinkedList<Region>&, size_t&);
+
     void waitForRelativeTimeWhileHoldingLock(double relative);
     void waitForRelativeTime(double relative);
 
     void blockFreeingThreadMain();
     static void blockFreeingThreadStartFunc(void* heap);
-    void releaseFreeBlocks();
+    void releaseFreeRegions();
 
-    DoublyLinkedList<DeadBlock> m_freeBlocks;
-    size_t m_numberOfFreeBlocks;
+    DoublyLinkedList<Region> m_fullRegions;
+    DoublyLinkedList<Region> m_partialRegions;
+    DoublyLinkedList<Region> m_emptyRegions;
+    size_t m_numberOfEmptyRegions;
+    size_t m_numberOfPartialRegions;
     bool m_isCurrentlyAllocating;
     bool m_blockFreeingThreadShouldQuit;
-    SpinLock m_freeBlockLock;
-    Mutex m_freeBlockConditionLock;
-    ThreadCondition m_freeBlockCondition;
+    SpinLock m_regionLock;
+    Mutex m_emptyRegionConditionLock;
+    ThreadCondition m_emptyRegionCondition;
     ThreadIdentifier m_blockFreeingThread;
 };
 
-inline PageAllocationAligned BlockAllocator::allocate()
+inline DeadBlock* BlockAllocator::tryAllocateFromRegion(DoublyLinkedList<Region>& regions, size_t& numberOfRegions)
 {
-    {
-        SpinLockHolder locker(&m_freeBlockLock);
-        m_isCurrentlyAllocating = true;
-        if (m_numberOfFreeBlocks) {
-            ASSERT(!m_freeBlocks.isEmpty());
-            m_numberOfFreeBlocks--;
-            return DeadBlock::destroy(m_freeBlocks.removeHead());
+    if (numberOfRegions) {
+        ASSERT(!regions.isEmpty());
+        Region* region = regions.head();
+        ASSERT(!region->isFull());
+        DeadBlock* block = region->allocate();
+        if (region->isFull()) {
+            numberOfRegions--;
+            m_fullRegions.push(regions.removeHead());
         }
+        return block;
     }
+    return 0;
+}
 
-    ASSERT(m_freeBlocks.isEmpty());
-    PageAllocationAligned allocation = PageAllocationAligned::allocate(DeadBlock::s_blockSize, DeadBlock::s_blockSize, OSAllocator::JSGCHeapPages);
-    if (!static_cast<bool>(allocation))
-        CRASH();
-    return allocation;
+template<typename T>
+inline DeadBlock* BlockAllocator::allocate()
+{
+    DeadBlock* block;
+    m_isCurrentlyAllocating = true;
+    {
+        SpinLockHolder locker(&m_regionLock);
+        if ((block = tryAllocateFromRegion(m_partialRegions, m_numberOfPartialRegions)))
+            return block;
+        if ((block = tryAllocateFromRegion(m_emptyRegions, m_numberOfEmptyRegions)))
+            return block;
+    }
+
+    Region* newRegion = Region::create(T::blockSize, T::blockSize, 1);
+
+    SpinLockHolder locker(&m_regionLock);
+    m_emptyRegions.push(newRegion);
+    m_numberOfEmptyRegions++;
+    block = tryAllocateFromRegion(m_emptyRegions, m_numberOfEmptyRegions);
+    ASSERT(block);
+    return block;
+}
+
+inline DeadBlock* BlockAllocator::allocateCustomSize(size_t blockSize, size_t blockAlignment)
+{
+    size_t realSize = WTF::roundUpToMultipleOf(blockAlignment, blockSize);
+    Region* newRegion = Region::create(realSize, blockAlignment, 1);
+    DeadBlock* block = newRegion->allocate();
+    ASSERT(block);
+    return block;
 }
 
-inline void BlockAllocator::deallocate(PageAllocationAligned allocation)
+template<typename T>
+inline void BlockAllocator::deallocate(T* block)
 {
-    size_t numberOfFreeBlocks;
+    bool shouldWakeBlockFreeingThread = false;
     {
-        SpinLockHolder locker(&m_freeBlockLock);
-        m_freeBlocks.push(DeadBlock::create(allocation));
-        numberOfFreeBlocks = m_numberOfFreeBlocks++;
+        SpinLockHolder locker(&m_regionLock);
+        Region* region = block->region();
+        if (region->isFull())
+            m_fullRegions.remove(region);
+        region->deallocate(block);
+        if (region->isEmpty()) {
+            m_emptyRegions.push(region);
+            shouldWakeBlockFreeingThread = !m_numberOfEmptyRegions;
+            m_numberOfEmptyRegions++;
+        } else {
+            m_partialRegions.push(region);
+            m_numberOfPartialRegions++;
+        }
     }
 
-    if (!numberOfFreeBlocks) {
-        MutexLocker mutexLocker(m_freeBlockConditionLock);
-        m_freeBlockCondition.signal();
+    if (shouldWakeBlockFreeingThread) {
+        MutexLocker mutexLocker(m_emptyRegionConditionLock);
+        m_emptyRegionCondition.signal();
     }
 }
 
+template<typename T>
+inline void BlockAllocator::deallocateCustomSize(T* block)
+{
+    Region* region = block->region();
+    region->deallocate(block);
+    delete region;
+}
+
 } // namespace JSC
 
 #endif // BlockAllocator_h
--- a/Source/JavaScriptCore/heap/CopiedBlock.h
+++ b/Source/JavaScriptCore/heap/CopiedBlock.h
@@ -26,6 +26,7 @@
 #ifndef CopiedBlock_h
 #define CopiedBlock_h
 
+#include "BlockAllocator.h"
 #include "HeapBlock.h"
 #include "JSValue.h"
 #include "JSValueInlineMethods.h"
@@ -38,8 +39,8 @@ class CopiedBlock : public HeapBlock<CopiedBlock> {
     friend class CopiedSpace;
     friend class CopiedAllocator;
 public:
-    static CopiedBlock* create(const PageAllocationAligned&);
-    static CopiedBlock* createNoZeroFill(const PageAllocationAligned&);
+    static CopiedBlock* create(DeadBlock*);
+    static CopiedBlock* createNoZeroFill(DeadBlock*);
 
     // The payload is the region of the block that is usable for allocations.
     char* payload();
@@ -60,24 +61,27 @@ public:
     size_t size();
     size_t capacity();
 
+    static const size_t blockSize = 64 * KB;
+
 private:
-    CopiedBlock(const PageAllocationAligned&);
+    CopiedBlock(Region*);
     void zeroFillWilderness(); // Can be called at any time to zero-fill to the end of the block.
 
     size_t m_remaining;
     uintptr_t m_isPinned;
 };
 
-inline CopiedBlock* CopiedBlock::createNoZeroFill(const PageAllocationAligned& allocation)
+inline CopiedBlock* CopiedBlock::createNoZeroFill(DeadBlock* block)
 {
-    return new(NotNull, allocation.base()) CopiedBlock(allocation);
+    Region* region = block->region();
+    return new(NotNull, block) CopiedBlock(region);
 }
 
-inline CopiedBlock* CopiedBlock::create(const PageAllocationAligned& allocation)
+inline CopiedBlock* CopiedBlock::create(DeadBlock* block)
 {
-    CopiedBlock* block = createNoZeroFill(allocation);
-    block->zeroFillWilderness();
-    return block;
+    CopiedBlock* newBlock = createNoZeroFill(block);
+    newBlock->zeroFillWilderness();
+    return newBlock;
 }
 
 inline void CopiedBlock::zeroFillWilderness()
@@ -92,8 +96,8 @@ inline void CopiedBlock::zeroFillWilderness()
 #endif
 }
 
-inline CopiedBlock::CopiedBlock(const PageAllocationAligned& allocation)
-    : HeapBlock<CopiedBlock>(allocation)
+inline CopiedBlock::CopiedBlock(Region* region)
+    : HeapBlock<CopiedBlock>(region)
     , m_remaining(payloadCapacity())
     , m_isPinned(false)
 {
@@ -107,7 +111,7 @@ inline char* CopiedBlock::payload()
 
 inline char* CopiedBlock::payloadEnd()
 {
-    return reinterpret_cast<char*>(this) + allocation().size();
+    return reinterpret_cast<char*>(this) + region()->blockSize();
 }
 
 inline size_t CopiedBlock::payloadCapacity()
@@ -152,7 +156,7 @@ inline size_t CopiedBlock::size()
 
 inline size_t CopiedBlock::capacity()
 {
-    return allocation().size();
+    return region()->blockSize();
 }
 
 } // namespace JSC
--- a/Source/JavaScriptCore/heap/CopiedSpace.cpp
+++ b/Source/JavaScriptCore/heap/CopiedSpace.cpp
@@ -50,7 +50,7 @@ CopiedSpace::~CopiedSpace()
         m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_fromSpace->removeHead()));
 
     while (!m_oversizeBlocks.isEmpty())
-        CopiedBlock::destroy(m_oversizeBlocks.removeHead()).deallocate();
+        m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(m_oversizeBlocks.removeHead()));
 }
 
 void CopiedSpace::init()
@@ -79,15 +79,7 @@ CheckedBoolean CopiedSpace::tryAllocateOversize(size_t bytes, void** outPtr)
 {
     ASSERT(isOversize(bytes));
 
-    size_t blockSize = WTF::roundUpToMultipleOf(WTF::pageSize(), sizeof(CopiedBlock) + bytes);
-    PageAllocationAligned allocation = PageAllocationAligned::allocate(blockSize, WTF::pageSize(), OSAllocator::JSGCHeapPages);
-    if (!static_cast<bool>(allocation)) {
-        *outPtr = 0;
-        return false;
-    }