System Engineering · April 22, 2007 · 2 min read

Memcached: Slab Allocation Internals (2007)


If you're using malloc and free for every small object in a high-traffic cache, you're going to fragment your heap into oblivion. Memcached avoids this by implementing its own memory manager: the Slab Allocator.

The Slab Concept

Memcached pre-allocates large chunks of memory (usually 1MB) called "Slab Pages." These pages are then sliced into smaller "chunks" of fixed sizes.

# View slab statistics
$ memcached-tool 127.0.0.1:11211 display
  #  Item_Size  Max_age   Pages   Count   Full?
  1      96B      3600s       1     100      no
  2     120B      3600s       1      50      no

Choosing a Slab

When you store an item, Memcached computes its total size (key + value + item overhead) and picks the smallest slab class whose chunk size can hold it. If your item needs 100 bytes, it goes into the 120-byte class when the 96-byte chunks are too small.

/* Conceptual slab-selection logic (simplified from memcached's slabs.c) */
unsigned int slabs_clsid(const size_t size) {
    int res = POWER_SMALLEST;              /* start at the smallest class */

    if (size == 0) return 0;               /* 0 means "no class fits" */
    /* Walk up the classes until one has a chunk large enough. */
    while (size > slabclass[res].size) {
        if (res++ == slabclass_max_id)
            return 0;                      /* too big for any class */
    }
    return res;
}

The Downside: Internal Fragmentation

The trade-off for zero external fragmentation is internal fragmentation. If you have many items that are just 1 byte larger than a slab size, you waste space. You can tune this with the -f (growth factor) parameter:

# memcached -m 1024 -f 1.05 -u nobody

By using a smaller growth factor, you get more slab classes that are closer together in size, reducing waste at the cost of slightly more metadata overhead. If you're not tuning your slabs, you're not really using Memcached.
