Relocating memory allocator [May. 26th, 2003|01:53 pm]
Brad Fitzpatrick
So, it turns out that memcached takes up no CPU and makes the site blazing fast (even though we're only caching a few object types so far!).

Unfortunately, it did what we suspected: died hard when its address space got fragmented. (In the meantime I restarted the daemons with much smaller sizes, accounting for the bloat which accumulates over a few hours....)

I jotted down notes for an arena allocation scheme where we only malloc large, fixed-size pages and handle our own placement and recompaction within those pages. However, I can't believe this hasn't been done before.

I want a memory allocator that lets you register a hook when it wants to move allocations around to defragment, so you can update pointers and whatnot. Anybody know of one?
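Something like this is what I'm picturing, as an interface (just a sketch; all of the names here are made up):

#include <stddef.h>

/* Hook called for every block the allocator moves during compaction,
 * so the application can fix up its pointers. */
typedef void (*relocate_fn)(void *old_addr, void *new_addr, size_t size,
                            void *userdata);

typedef struct arena arena_t;               /* opaque arena handle */

arena_t *arena_create(size_t page_size);    /* mallocs big fixed-size pages */
void    *arena_alloc(arena_t *a, size_t size);
void     arena_free(arena_t *a, void *p);

/* Register the hook; arena_compact() slides live blocks together within
 * each page and calls the hook for every block it relocates. */
void arena_set_relocate_hook(arena_t *a, relocate_fn fn, void *userdata);
void arena_compact(arena_t *a);

Then memcached could register a hook that just rewrites the hash-table entry for an item whenever that item gets moved.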

Comments:
From: brad
2003-05-28 11:39 am (UTC)
I found the paper and was reading it, but it seems it deals with allocating only like-sized objects. Our allocations are all over the place.
From: ex_ff928
2003-05-28 12:36 pm (UTC)
(I'm using the same API as described in the Bonwick paper for a coherent frame of reference.)

You can resolve the fixed-size issue by creating multiple object caches in a power-of-two size scheme and using them to populate cache_table[] below. Then wrap the slab allocator with a thin layer that rounds each allocation up to the closest matching bucket by indexing into the table with the requested object's size.

So something like:

#include <stdlib.h>         /* for the non-slab fallback below */

#define MAXOBJSIZE 8192     /* Maximum size you want to create an object cache for */
#define ALIGN_SHIFT 4       /* 2^4 = 16 = minimum guaranteed alignment */

/* One cache pointer per 16-byte bucket; populated at init time. */
static kmem_cache_t *cache_table[MAXOBJSIZE >> ALIGN_SHIFT];

void *slabmem_alloc(int size, int flags)
{
    void *obj;
    int idx = (size - 1) >> ALIGN_SHIFT;

    if ((unsigned int)idx < (MAXOBJSIZE >> ALIGN_SHIFT)) {
        kmem_cache_t *cp = cache_table[idx];
        obj = kmem_cache_alloc(cp, flags);
        return (obj);
    } else {
        /* Fall through to non-slab allocation here, e.g. plain malloc. */
        return malloc(size);
    }
}
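
The cache_table[] itself can be filled in once at startup, something like this (a sketch; slabmem_init() is a made-up name, and it uses the kmem_cache_create(name, size, align, constructor, destructor) signature from the paper with NULL constructor/destructor):

#include <stdio.h>          /* sprintf */

void slabmem_init(void)
{
    kmem_cache_t *cp;
    char name[32];
    int size, idx;

    /* One cache per power-of-two size from 16 up to MAXOBJSIZE; every
     * 16-byte bucket points at the smallest cache that will hold it. */
    for (size = 1 << ALIGN_SHIFT; size <= MAXOBJSIZE; size <<= 1) {
        sprintf(name, "slabmem_%d", size);
        cp = kmem_cache_create(name, size, 0, NULL, NULL);

        for (idx = (size >> 1) >> ALIGN_SHIFT; idx < (size >> ALIGN_SHIFT); idx++)
            cache_table[idx] = cp;
    }
}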

Basically you treat each slab cache as a backing-store bucket in a power-of-two (or similar) allocator. It gives you O(1) allocations and frees. The only thing you have to watch for is internal fragmentation, which is not an issue if you pick your cache_table entries properly. This is also greatly mitigated by the slab allocator's slab-management strategy, and by the fact that there are no O(n) coalescing passes as in most other allocators (e.g. Doug Lea's malloc, which, while pretty cool, is not O(1) for this reason).

Also, frees need the size of the object to be passed in as a parameter.
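
So a matching free looks something like this (again a sketch, mirroring slabmem_alloc() above):

void slabmem_free(void *obj, int size)
{
    int idx = (size - 1) >> ALIGN_SHIFT;

    if ((unsigned int)idx < (MAXOBJSIZE >> ALIGN_SHIFT))
        kmem_cache_free(cache_table[idx], obj);
    else
        free(obj);      /* whatever the non-slab path allocated with */
}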

Make sense?
From: brad
2003-05-28 01:14 pm (UTC)
Whoa, yes... this all makes a lot of sense.

I still don't understand why coalescing needs to be O(n). If you keep a mapping of free regions' start and end points, then whenever you free a new region you can check whether it has free regions on either side of it and combine them. With something like Judy (judy.sf.net), that mapping is pretty cheap, I'd think. Definitely better than O(n).

In any case, power-of-two slab allocation seems like the best bet here.

Thanks!
From: taral
2003-05-28 05:59 pm (UTC)
Coalescing can be made O(1) very easily. And slab allocators don't have to be power-of-two. In fact, they're usually not used for that. Slab allocators are used when you have a large number of objects that are _not_ power-of-two sized. The slab allocator is substantially better at reducing fragmentation and minimizing memory usage.
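
One standard way to get O(1) coalescing is boundary tags: every block keeps its size and a free bit in both a header and a footer, so a free can inspect both physical neighbors in constant time and merge with whichever are free. A rough sketch (names made up; free-list bookkeeping elided):

#include <stddef.h>

typedef struct {
    size_t size;        /* whole block size, tags included */
    int    is_free;
} tag_t;

/* Each block is laid out as [header tag][payload][footer tag], where the
 * footer is an exact copy of the header.  Reading the tag just below a
 * block's header therefore gives the size and status of the block that
 * physically precedes it. */

static void set_tags(char *block, size_t size, int is_free)
{
    ((tag_t *)block)->size = size;
    ((tag_t *)block)->is_free = is_free;
    *(tag_t *)(block + size - sizeof(tag_t)) = *(tag_t *)block;
}

/* Free `block` and merge it with whichever physical neighbors are free,
 * all in constant time.  heap_start/heap_end delimit the managed region
 * so we never read a tag outside it. */
static void free_and_coalesce(char *block, char *heap_start, char *heap_end)
{
    char   *start = block;
    size_t  size  = ((tag_t *)block)->size;
    tag_t  *prev_foot, *next_head;

    if (block > heap_start) {
        prev_foot = (tag_t *)(block - sizeof(tag_t));
        if (prev_foot->is_free) {               /* absorb the block before */
            start -= prev_foot->size;
            size  += prev_foot->size;
        }
    }
    if (block + ((tag_t *)block)->size < heap_end) {
        next_head = (tag_t *)(block + ((tag_t *)block)->size);
        if (next_head->is_free) {               /* absorb the block after */
            size += next_head->size;
            /* (a real allocator would also unlink it from its free list) */
        }
    }
    set_tags(start, size, 1);
    /* ...then push the merged block onto the free list. */
}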