Commit dd7793a8 authored by ggaren@apple.com

Removed some public data and casting from the Heap

https://bugs.webkit.org/show_bug.cgi?id=92777

Reviewed by Oliver Hunt.

* heap/BlockAllocator.cpp:
(JSC::BlockAllocator::releaseFreeBlocks):
(JSC::BlockAllocator::blockFreeingThreadMain): Use the DeadBlock class,
since HeapBlock is now a template rather than a concrete class. Call
destroy() instead of monkeying around with DeadBlock's internal data,
because encapsulation is good.

* heap/BlockAllocator.h:
(DeadBlock): Added a class to represent a dead block, since HeapBlock
is now a template and can't be instantiated directly.

(JSC::DeadBlock::DeadBlock):
(JSC::DeadBlock::create):
(BlockAllocator):
(JSC::BlockAllocator::allocate):
(JSC::BlockAllocator::deallocate): Use the DeadBlock class because
encapsulation is good.
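
A detail worth spelling out: a free block's list node lives inside the
very pages it tracks, so the free-block cache costs no memory beyond the
pages themselves. A minimal standalone sketch of that round trip
(hypothetical Pages and FreeBlock types standing in for
PageAllocationAligned and DeadBlock, not the real WebKit classes):

    #include <cstdlib>
    #include <new>
    #include <utility>
    #include <vector>

    struct Pages { // stand-in for PageAllocationAligned
        void* base = nullptr;
        void deallocate() { std::free(base); base = nullptr; }
    };

    struct FreeBlock {
        explicit FreeBlock(const Pages& pages) : m_pages(pages) { }

        // Build the header inside the free pages themselves.
        static FreeBlock* create(const Pages& pages)
        {
            return new (pages.base) FreeBlock(pages);
        }

        // Tear down the header, handing the pages back to the caller.
        static Pages destroy(FreeBlock* block)
        {
            Pages pages;
            std::swap(pages, block->m_pages); // save before the header dies
            block->~FreeBlock();
            return pages;
        }

        Pages m_pages;
    };

    int main()
    {
        std::vector<FreeBlock*> freeList; // stand-in for DoublyLinkedList
        freeList.push_back(FreeBlock::create(Pages{std::malloc(4096)}));

        FreeBlock* block = freeList.back(); // reuse a cached block...
        freeList.pop_back();
        FreeBlock::destroy(block).deallocate(); // ...or release it to the OS
    }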

* heap/CopiedBlock.h:
(CopiedBlock::destroy): No need for a destroy() function, since we
inherit one now.

(JSC::CopiedBlock::CopiedBlock):
(JSC::CopiedBlock::payloadEnd):
(JSC::CopiedBlock::capacity): Updated for some encapsulation inside
HeapBlock.

* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::~CopiedSpace):
(JSC::CopiedSpace::doneCopying):
(JSC::CopiedSpace::size):
(JSC::CopiedSpace::capacity):
(JSC::isBlockListPagedOut): Removed a bunch of casting. This is no longer
necessary, now that our list and its nodes have the right type.

* heap/CopiedSpace.h: Use the right type in our data structures because
it improves clarity.

* heap/CopiedSpaceInlineMethods.h:
(JSC::CopiedSpace::startedCopying): Use std::swap instead of
open-coding the pointer exchange.
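
(Purely illustrative, with hypothetical names: the old code spelled out
the classic three-line exchange through a temporary; std::swap is the
one-line library equivalent.)

    #include <utility>

    int main()
    {
        int blocks1 = 0, blocks2 = 0;
        int* fromSpace = &blocks1;
        int* toSpace = &blocks2;

        // Before: int* temp = fromSpace; fromSpace = toSpace; toSpace = temp;
        std::swap(fromSpace, toSpace); // same effect, no temporary spelled out
        return 0;
    }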

* heap/HeapBlock.h:
(HeapBlock): Made this a class template so we can return the right type
in linked list operations. Made our data private because encapsulation
is good.

(JSC::HeapBlock::destroy): Since we know our type, we can also eliminate
duplicate destroy() functions in our subclasses.

(JSC::HeapBlock::allocation): Added an accessor so we can hide our data.
By using const, this accessor prevents clients from accidentally deleting
our allocation.
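
The underlying pattern is the curiously recurring template pattern (CRTP)
that WTF::DoublyLinkedListNode already uses: parameterize the node base on
the derived type, and every link the list hands back is already correctly
typed. A minimal sketch (hypothetical Sketch types, not the WebKit
declarations):

    #include <cstddef>

    // CRTP node base: links are stored as T*, so traversal yields the
    // derived type and call sites never cast.
    template<typename T>
    class ListNodeSketch {
    public:
        T* next() const { return m_next; }
        void setNext(T* next) { m_next = next; }
    private:
        T* m_next = nullptr;
    };

    template<typename T>
    class HeapBlockSketch : public ListNodeSketch<T> {
    public:
        // One static destroy() serves every subclass: the cast to T*
        // runs the right destructor with no vtable in the block header.
        static void destroy(HeapBlockSketch* block) { static_cast<T*>(block)->~T(); }
    };

    class MarkedBlockSketch : public HeapBlockSketch<MarkedBlockSketch> {
    public:
        std::size_t cellSize() const { return 32; }
    };

    int main()
    {
        MarkedBlockSketch a, b;
        a.setNext(&b);

        // Before this patch, the equivalent loop needed a
        // static_cast<MarkedBlock*>(block->next()) on every step.
        for (MarkedBlockSketch* block = &a; block; block = block->next())
            (void)block->cellSize(); // typed access, no cast
    }

This is why the cast removals below fall out for free: head() and next()
on the typed lists now return CopiedBlock* or MarkedBlock* directly.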

* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::tryAllocateHelper):
(JSC::MarkedAllocator::removeBlock): Removed a bunch of casting. This is
no longer necessary, now that our list and its nodes have the right type.

* heap/MarkedAllocator.h:
(MarkedAllocator):
(JSC::MarkedAllocator::reset):
(JSC::MarkedAllocator::forEachBlock): Use the right type, do less casting.

* heap/MarkedBlock.cpp: 
(JSC::MarkedBlock::destroy): Removed this function because our parent
class provides it for us now.

(JSC::MarkedBlock::MarkedBlock):
* heap/MarkedBlock.h:
(MarkedBlock):
(JSC::MarkedBlock::capacity): Updated for encapsulation.


git-svn-id: http://svn.webkit.org/repository/webkit/trunk@124250 268f45cc-cd09-0410-ab3c-d52691b4dbfc
parent 54f798b3
diff --git a/Source/JavaScriptCore/heap/BlockAllocator.cpp b/Source/JavaScriptCore/heap/BlockAllocator.cpp
--- a/Source/JavaScriptCore/heap/BlockAllocator.cpp
+++ b/Source/JavaScriptCore/heap/BlockAllocator.cpp
@@ -55,7 +55,7 @@ BlockAllocator::~BlockAllocator()
 void BlockAllocator::releaseFreeBlocks()
 {
     while (true) {
-        HeapBlock* block;
+        DeadBlock* block;
         {
             SpinLockHolder locker(&m_freeBlockLock);
             if (!m_numberOfFreeBlocks)
@@ -70,7 +70,7 @@ void BlockAllocator::releaseFreeBlocks()
 
         if (!block)
             break;
-        block->m_allocation.deallocate();
+        DeadBlock::destroy(block).deallocate();
     }
 }
 
@@ -121,7 +121,7 @@ void BlockAllocator::blockFreeingThreadMain()
         size_t desiredNumberOfFreeBlocks = currentNumberOfFreeBlocks / 2;
 
         while (!m_blockFreeingThreadShouldQuit) {
-            HeapBlock* block;
+            DeadBlock* block;
            {
                 SpinLockHolder locker(&m_freeBlockLock);
                 if (m_numberOfFreeBlocks <= desiredNumberOfFreeBlocks)
@@ -136,7 +136,7 @@ void BlockAllocator::blockFreeingThreadMain()
 
             if (!block)
                 break;
-            block->m_allocation.deallocate();
+            DeadBlock::destroy(block).deallocate();
         }
     }
 }

diff --git a/Source/JavaScriptCore/heap/BlockAllocator.h b/Source/JavaScriptCore/heap/BlockAllocator.h
--- a/Source/JavaScriptCore/heap/BlockAllocator.h
+++ b/Source/JavaScriptCore/heap/BlockAllocator.h
@@ -38,6 +38,24 @@ namespace JSC {
 
 // Simple allocator to reduce VM cost by holding onto blocks of memory for
 // short periods of time and then freeing them on a secondary thread.
+class DeadBlock : public HeapBlock<DeadBlock> {
+public:
+    static DeadBlock* create(const PageAllocationAligned&);
+
+private:
+    DeadBlock(const PageAllocationAligned&);
+};
+
+inline DeadBlock::DeadBlock(const PageAllocationAligned& allocation)
+    : HeapBlock<DeadBlock>(allocation)
+{
+}
+
+inline DeadBlock* DeadBlock::create(const PageAllocationAligned& allocation)
+{
+    return new(NotNull, allocation.base()) DeadBlock(allocation);
+}
+
 class BlockAllocator {
 public:
     BlockAllocator();
@@ -55,7 +73,7 @@ private:
     void releaseFreeBlocks();
 
-    DoublyLinkedList<HeapBlock> m_freeBlocks;
+    DoublyLinkedList<DeadBlock> m_freeBlocks;
     size_t m_numberOfFreeBlocks;
     bool m_isCurrentlyAllocating;
     bool m_blockFreeingThreadShouldQuit;
@@ -73,12 +91,12 @@ inline PageAllocationAligned BlockAllocator::allocate()
         if (m_numberOfFreeBlocks) {
             ASSERT(!m_freeBlocks.isEmpty());
             m_numberOfFreeBlocks--;
-            return m_freeBlocks.removeHead()->m_allocation;
+            return DeadBlock::destroy(m_freeBlocks.removeHead());
         }
     }
 
     ASSERT(m_freeBlocks.isEmpty());
-    PageAllocationAligned allocation = PageAllocationAligned::allocate(HeapBlock::s_blockSize, HeapBlock::s_blockSize, OSAllocator::JSGCHeapPages);
+    PageAllocationAligned allocation = PageAllocationAligned::allocate(DeadBlock::s_blockSize, DeadBlock::s_blockSize, OSAllocator::JSGCHeapPages);
     if (!static_cast<bool>(allocation))
         CRASH();
     return allocation;
@@ -87,8 +105,7 @@ inline PageAllocationAligned BlockAllocator::allocate()
 inline void BlockAllocator::deallocate(PageAllocationAligned allocation)
 {
     SpinLockHolder locker(&m_freeBlockLock);
-    HeapBlock* heapBlock = new(NotNull, allocation.base()) HeapBlock(allocation);
-    m_freeBlocks.push(heapBlock);
+    m_freeBlocks.push(DeadBlock::create(allocation));
     m_numberOfFreeBlocks++;
 }

diff --git a/Source/JavaScriptCore/heap/CopiedBlock.h b/Source/JavaScriptCore/heap/CopiedBlock.h
--- a/Source/JavaScriptCore/heap/CopiedBlock.h
+++ b/Source/JavaScriptCore/heap/CopiedBlock.h
@@ -34,13 +34,12 @@ namespace JSC {
 
 class CopiedSpace;
 
-class CopiedBlock : public HeapBlock {
+class CopiedBlock : public HeapBlock<CopiedBlock> {
     friend class CopiedSpace;
     friend class CopiedAllocator;
 public:
     static CopiedBlock* create(const PageAllocationAligned&);
     static CopiedBlock* createNoZeroFill(const PageAllocationAligned&);
-    static PageAllocationAligned destroy(CopiedBlock*);
 
     // The payload is the region of the block that is usable for allocations.
     char* payload();
@@ -93,17 +92,8 @@ inline void CopiedBlock::zeroFillWilderness()
 #endif
 }
 
-inline PageAllocationAligned CopiedBlock::destroy(CopiedBlock* block)
-{
-    PageAllocationAligned allocation;
-    swap(allocation, block->m_allocation);
-
-    block->~CopiedBlock();
-    return allocation;
-}
-
 inline CopiedBlock::CopiedBlock(const PageAllocationAligned& allocation)
-    : HeapBlock(allocation)
+    : HeapBlock<CopiedBlock>(allocation)
     , m_remaining(payloadCapacity())
     , m_isPinned(false)
 {
@@ -117,7 +107,7 @@ inline char* CopiedBlock::payload()
 inline char* CopiedBlock::payloadEnd()
 {
-    return reinterpret_cast<char*>(this) + m_allocation.size();
+    return reinterpret_cast<char*>(this) + allocation().size();
 }
 
 inline size_t CopiedBlock::payloadCapacity()
@@ -162,7 +152,7 @@ inline size_t CopiedBlock::size()
 inline size_t CopiedBlock::capacity()
 {
-    return m_allocation.size();
+    return allocation().size();
 }
 
 } // namespace JSC

diff --git a/Source/JavaScriptCore/heap/CopiedSpace.cpp b/Source/JavaScriptCore/heap/CopiedSpace.cpp
--- a/Source/JavaScriptCore/heap/CopiedSpace.cpp
+++ b/Source/JavaScriptCore/heap/CopiedSpace.cpp
@@ -44,13 +44,13 @@ CopiedSpace::CopiedSpace(Heap* heap)
 CopiedSpace::~CopiedSpace()
 {
     while (!m_toSpace->isEmpty())
-        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(static_cast<CopiedBlock*>(m_toSpace->removeHead())));
+        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_toSpace->removeHead()));
 
     while (!m_fromSpace->isEmpty())
-        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(static_cast<CopiedBlock*>(m_fromSpace->removeHead())));
+        m_heap->blockAllocator().deallocate(CopiedBlock::destroy(m_fromSpace->removeHead()));
 
     while (!m_oversizeBlocks.isEmpty())
-        CopiedBlock::destroy(static_cast<CopiedBlock*>(m_oversizeBlocks.removeHead())).deallocate();
+        CopiedBlock::destroy(m_oversizeBlocks.removeHead()).deallocate();
 }
 
 void CopiedSpace::init()
@@ -193,7 +193,7 @@ void CopiedSpace::doneCopying()
     ASSERT(m_inCopyingPhase);
     m_inCopyingPhase = false;
     while (!m_fromSpace->isEmpty()) {
-        CopiedBlock* block = static_cast<CopiedBlock*>(m_fromSpace->removeHead());
+        CopiedBlock* block = m_fromSpace->removeHead();
         if (block->m_isPinned) {
             block->m_isPinned = false;
             // We don't add the block to the blockSet because it was never removed.
@@ -207,9 +207,9 @@ void CopiedSpace::doneCopying()
         m_heap->blockAllocator().deallocate(CopiedBlock::destroy(block));
     }
 
-    CopiedBlock* curr = static_cast<CopiedBlock*>(m_oversizeBlocks.head());
+    CopiedBlock* curr = m_oversizeBlocks.head();
     while (curr) {
-        CopiedBlock* next = static_cast<CopiedBlock*>(curr->next());
+        CopiedBlock* next = curr->next();
         if (!curr->m_isPinned) {
             m_oversizeBlocks.remove(curr);
             m_blockSet.remove(curr);
@@ -224,20 +224,20 @@ void CopiedSpace::doneCopying()
     if (!m_toSpace->head())
         allocateBlock();
     else
-        m_allocator.setCurrentBlock(static_cast<CopiedBlock*>(m_toSpace->head()));
+        m_allocator.setCurrentBlock(m_toSpace->head());
 }
 
 size_t CopiedSpace::size()
 {
     size_t calculatedSize = 0;
 
-    for (CopiedBlock* block = static_cast<CopiedBlock*>(m_toSpace->head()); block; block = static_cast<CopiedBlock*>(block->next()))
+    for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
         calculatedSize += block->size();
 
-    for (CopiedBlock* block = static_cast<CopiedBlock*>(m_fromSpace->head()); block; block = static_cast<CopiedBlock*>(block->next()))
+    for (CopiedBlock* block = m_fromSpace->head(); block; block = block->next())
         calculatedSize += block->size();
 
-    for (CopiedBlock* block = static_cast<CopiedBlock*>(m_oversizeBlocks.head()); block; block = static_cast<CopiedBlock*>(block->next()))
+    for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
         calculatedSize += block->size();
 
     return calculatedSize;
@@ -247,22 +247,22 @@ size_t CopiedSpace::capacity()
 {
     size_t calculatedCapacity = 0;
 
-    for (CopiedBlock* block = static_cast<CopiedBlock*>(m_toSpace->head()); block; block = static_cast<CopiedBlock*>(block->next()))
+    for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
         calculatedCapacity += block->capacity();
 
-    for (CopiedBlock* block = static_cast<CopiedBlock*>(m_fromSpace->head()); block; block = static_cast<CopiedBlock*>(block->next()))
+    for (CopiedBlock* block = m_fromSpace->head(); block; block = block->next())
         calculatedCapacity += block->capacity();
 
-    for (CopiedBlock* block = static_cast<CopiedBlock*>(m_oversizeBlocks.head()); block; block = static_cast<CopiedBlock*>(block->next()))
+    for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
         calculatedCapacity += block->capacity();
 
     return calculatedCapacity;
 }
 
-static bool isBlockListPagedOut(double deadline, DoublyLinkedList<HeapBlock>* list)
+static bool isBlockListPagedOut(double deadline, DoublyLinkedList<CopiedBlock>* list)
 {
     unsigned itersSinceLastTimeCheck = 0;
-    HeapBlock* current = list->head();
+    CopiedBlock* current = list->head();
     while (current) {
         current = current->next();
         ++itersSinceLastTimeCheck;

diff --git a/Source/JavaScriptCore/heap/CopiedSpace.h b/Source/JavaScriptCore/heap/CopiedSpace.h
--- a/Source/JavaScriptCore/heap/CopiedSpace.h
+++ b/Source/JavaScriptCore/heap/CopiedSpace.h
@@ -44,7 +44,6 @@ namespace JSC {
 
 class Heap;
 class CopiedBlock;
-class HeapBlock;
 
 class CopiedSpace {
     friend class SlotVisitor;
@@ -101,12 +100,12 @@ private:
     SpinLock m_toSpaceLock;
 
-    DoublyLinkedList<HeapBlock>* m_toSpace;
-    DoublyLinkedList<HeapBlock>* m_fromSpace;
+    DoublyLinkedList<CopiedBlock>* m_toSpace;
+    DoublyLinkedList<CopiedBlock>* m_fromSpace;
 
-    DoublyLinkedList<HeapBlock> m_blocks1;
-    DoublyLinkedList<HeapBlock> m_blocks2;
-    DoublyLinkedList<HeapBlock> m_oversizeBlocks;
+    DoublyLinkedList<CopiedBlock> m_blocks1;
+    DoublyLinkedList<CopiedBlock> m_blocks2;
+    DoublyLinkedList<CopiedBlock> m_oversizeBlocks;
 
     bool m_inCopyingPhase;
@@ -116,7 +115,7 @@ private:
     static const size_t s_maxAllocationSize = 32 * KB;
     static const size_t s_initialBlockNum = 16;
 
-    static const size_t s_blockMask = ~(HeapBlock::s_blockSize - 1);
+    static const size_t s_blockMask = ~(CopiedBlock::s_blockSize - 1);
 };
 
 } // namespace JSC

diff --git a/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h b/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h
--- a/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h
+++ b/Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h
@@ -95,9 +95,7 @@ inline void CopiedSpace::pinIfNecessary(void* opaquePointer)
 inline void CopiedSpace::startedCopying()
 {
-    DoublyLinkedList<HeapBlock>* temp = m_fromSpace;
-    m_fromSpace = m_toSpace;
-    m_toSpace = temp;
+    std::swap(m_fromSpace, m_toSpace);
 
     m_blockFilter.reset();
     m_allocator.resetCurrentBlock();

diff --git a/Source/JavaScriptCore/heap/HeapBlock.h b/Source/JavaScriptCore/heap/HeapBlock.h
--- a/Source/JavaScriptCore/heap/HeapBlock.h
+++ b/Source/JavaScriptCore/heap/HeapBlock.h
@@ -34,22 +34,36 @@ namespace JSC {
 
 enum AllocationEffort { AllocationCanFail, AllocationMustSucceed };
 
-class HeapBlock : public DoublyLinkedListNode<HeapBlock> {
+template<typename T>
+class HeapBlock : public DoublyLinkedListNode<T> {
+    friend class DoublyLinkedListNode<T>;
 public:
+    static const size_t s_blockSize = 64 * KB;
+
+    static PageAllocationAligned destroy(HeapBlock* block)
+    {
+        static_cast<T*>(block)->~T();
+
+        PageAllocationAligned allocation;
+        std::swap(allocation, block->m_allocation);
+        return allocation;
+    }
+
     HeapBlock(const PageAllocationAligned& allocation)
-        : DoublyLinkedListNode<HeapBlock>()
+        : DoublyLinkedListNode<T>()
+        , m_allocation(allocation)
         , m_prev(0)
         , m_next(0)
-        , m_allocation(allocation)
     {
         ASSERT(m_allocation);
     }
 
-    HeapBlock* m_prev;
-    HeapBlock* m_next;
+    const PageAllocationAligned allocation() const { return m_allocation; }
+
+private:
     PageAllocationAligned m_allocation;
-    static const size_t s_blockSize = 64 * KB;
+
+    T* m_prev;
+    T* m_next;
 };
 
 } // namespace JSC

diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.cpp b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
--- a/Source/JavaScriptCore/heap/MarkedAllocator.cpp
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.cpp
@@ -11,7 +11,7 @@ namespace JSC {
 bool MarkedAllocator::isPagedOut(double deadline)
 {
     unsigned itersSinceLastTimeCheck = 0;
-    HeapBlock* block = m_blockList.head();
+    MarkedBlock* block = m_blockList.head();
     while (block) {
         block = block->next();
         ++itersSinceLastTimeCheck;
@@ -29,7 +29,7 @@ bool MarkedAllocator::isPagedOut(double deadline)
 inline void* MarkedAllocator::tryAllocateHelper()
 {
     if (!m_freeList.head) {
-        for (MarkedBlock*& block = m_blocksToSweep; block; block = static_cast<MarkedBlock*>(block->next())) {
+        for (MarkedBlock*& block = m_blocksToSweep; block; block = block->next()) {
             m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
             if (m_freeList.head) {
                 m_currentBlock = block;
@@ -115,11 +115,11 @@ void MarkedAllocator::addBlock(MarkedBlock* block)
 void MarkedAllocator::removeBlock(MarkedBlock* block)
 {
     if (m_currentBlock == block) {
-        m_currentBlock = static_cast<MarkedBlock*>(m_currentBlock->next());
+        m_currentBlock = m_currentBlock->next();
         m_freeList = MarkedBlock::FreeList();
     }
     if (m_blocksToSweep == block)
-        m_blocksToSweep = static_cast<MarkedBlock*>(m_blocksToSweep->next());
+        m_blocksToSweep = m_blocksToSweep->next();
     m_blockList.remove(block);
 }

diff --git a/Source/JavaScriptCore/heap/MarkedAllocator.h b/Source/JavaScriptCore/heap/MarkedAllocator.h
--- a/Source/JavaScriptCore/heap/MarkedAllocator.h
+++ b/Source/JavaScriptCore/heap/MarkedAllocator.h
@@ -47,7 +47,7 @@ private:
     MarkedBlock::FreeList m_freeList;
     MarkedBlock* m_currentBlock;
     MarkedBlock* m_blocksToSweep;
-    DoublyLinkedList<HeapBlock> m_blockList;
+    DoublyLinkedList<MarkedBlock> m_blockList;
     size_t m_cellSize;
     bool m_cellsNeedDestruction;
     bool m_onlyContainsStructures;
@@ -90,7 +90,7 @@ inline void MarkedAllocator::reset()
 {
     m_currentBlock = 0;
     m_freeList = MarkedBlock::FreeList();
-    m_blocksToSweep = static_cast<MarkedBlock*>(m_blockList.head());
+    m_blocksToSweep = m_blockList.head();
 }
 
 inline void MarkedAllocator::zapFreeList()
@@ -106,10 +106,10 @@ inline void MarkedAllocator::zapFreeList()
 template <typename Functor> inline void MarkedAllocator::forEachBlock(Functor& functor)
 {
-    HeapBlock* next;
-    for (HeapBlock* block = m_blockList.head(); block; block = next) {
+    MarkedBlock* next;
+    for (MarkedBlock* block = m_blockList.head(); block; block = next) {
         next = block->next();
-        functor(static_cast<MarkedBlock*>(block));
+        functor(block);
     }
 }

diff --git a/Source/JavaScriptCore/heap/MarkedBlock.cpp b/Source/JavaScriptCore/heap/MarkedBlock.cpp
--- a/Source/JavaScriptCore/heap/MarkedBlock.cpp
+++ b/Source/JavaScriptCore/heap/MarkedBlock.cpp
@@ -37,17 +37,8 @@ MarkedBlock* MarkedBlock::create(const PageAllocationAligned& allocation, Heap*
     return new (NotNull, allocation.base()) MarkedBlock(allocation, heap, cellSize, cellsNeedDestruction, onlyContainsStructures);
 }
 
-PageAllocationAligned MarkedBlock::destroy(MarkedBlock* block)
-{
-    PageAllocationAligned allocation;
-    swap(allocation, block->m_allocation);
-
-    block->~MarkedBlock();
-    return allocation;
-}
-
 MarkedBlock::MarkedBlock(const PageAllocationAligned& allocation, Heap* heap, size_t cellSize, bool cellsNeedDestruction, bool onlyContainsStructures)
-    : HeapBlock(allocation)
+    : HeapBlock<MarkedBlock>(allocation)
     , m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
     , m_endAtom(atomsPerBlock - m_atomsPerCell + 1)
     , m_cellsNeedDestruction(cellsNeedDestruction)

diff --git a/Source/JavaScriptCore/heap/MarkedBlock.h b/Source/JavaScriptCore/heap/MarkedBlock.h
--- a/Source/JavaScriptCore/heap/MarkedBlock.h
+++ b/Source/JavaScriptCore/heap/MarkedBlock.h
@@ -67,8 +67,7 @@ namespace JSC {
     // size is equal to the difference between the cell size and the object
     // size.
 
-    class MarkedBlock : public HeapBlock {
-        friend class WTF::DoublyLinkedListNode<MarkedBlock>;
+    class MarkedBlock : public HeapBlock<MarkedBlock> {
     public:
         // Ensure natural alignment for native types whilst recognizing that the smallest
         // object the heap will commonly allocate is four words.
@@ -114,7 +113,6 @@ namespace JSC {
         };
 
         static MarkedBlock* create(const PageAllocationAligned&, Heap*, size_t cellSize, bool cellsNeedDestruction, bool onlyContainsStructures);
-        static PageAllocationAligned destroy(MarkedBlock*);
 
         static bool isAtomAligned(const void*);
         static MarkedBlock* blockFor(const void*);
@@ -336,7 +334,7 @@ namespace JSC {
     inline size_t MarkedBlock::capacity()
     {
-        return m_allocation.size();
+        return allocation().size();
     }
 
     inline size_t MarkedBlock::atomNumber(const void* p)