
| Workload | Redis 6 (single-thread) | KeyDB (8 workers) | Gain |
|----------|------------------------|-------------------|------|
| 100% GET | ~450k ops/sec | ~2.8M ops/sec | 6.2x |
| 80% GET, 20% SET | ~380k ops/sec | ~2.1M ops/sec | 5.5x |
| 100% SET | ~400k ops/sec | ~1.9M ops/sec | 4.75x |

Blocking commands require careful cross-thread signaling. KeyDB uses a global waiting queue protected by a separate mutex. When data arrives (e.g., LPUSH on a list), the notifying thread checks the waiting queue and wakes the appropriate worker thread, which then resumes the blocked client.
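The queue-and-wake protocol described above can be sketched with a mutex and condition variable. This is a hedged illustration of the idea, not KeyDB's actual code; every identifier here (`wait_queue`, `blocked_client`, `block_on_key`, `notify_key`) is invented for the example.

```c
/* Sketch of cross-thread wakeup for blocking commands such as BLPOP.
 * All names are illustrative, not KeyDB's real identifiers. */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

typedef struct blocked_client {
    int client_id;
    const char *key;                  /* key the client is blocked on */
    struct blocked_client *next;
} blocked_client;

static blocked_client *wait_queue = NULL;   /* global waiting queue */
static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wait_cond = PTHREAD_COND_INITIALIZER;
static int woken_client = -1;               /* id of the client to resume */

/* Worker thread: register the blocked client, then sleep until woken. */
static void block_on_key(int client_id, const char *key) {
    pthread_mutex_lock(&wait_lock);
    blocked_client *bc = malloc(sizeof *bc);
    bc->client_id = client_id;
    bc->key = key;
    bc->next = wait_queue;
    wait_queue = bc;
    while (woken_client != client_id)
        pthread_cond_wait(&wait_cond, &wait_lock);
    pthread_mutex_unlock(&wait_lock);
}

/* Notifying thread (e.g., the one executing LPUSH): remove the first
 * client blocked on this key from the queue and wake it. */
static void notify_key(const char *key) {
    pthread_mutex_lock(&wait_lock);
    for (blocked_client **pp = &wait_queue; *pp; pp = &(*pp)->next) {
        if (strcmp((*pp)->key, key) == 0) {
            blocked_client *bc = *pp;
            *pp = bc->next;
            woken_client = bc->client_id;
            free(bc);
            pthread_cond_broadcast(&wait_cond);
            break;
        }
    }
    pthread_mutex_unlock(&wait_lock);
}

static int last_resumed = -1;

static void *blocked_worker(void *arg) {
    (void)arg;
    block_on_key(42, "mylist");       /* simulates a client doing BLPOP mylist */
    last_resumed = 42;                /* client resumes here after wakeup */
    return NULL;
}

/* Spawn a worker that blocks as client 42, then notify it. */
static int demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, blocked_worker, NULL);
    for (;;) {                        /* spin until the client has registered */
        pthread_mutex_lock(&wait_lock);
        int registered = (wait_queue != NULL);
        pthread_mutex_unlock(&wait_lock);
        if (registered) break;
    }
    notify_key("mylist");
    pthread_join(t, NULL);
    return last_resumed;
}
```

The key property this models is that the notifier never touches the blocked client's connection directly; it only hands the wakeup to the worker thread that owns that client.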

Writes, by contrast, take the dictionary lock exclusively:

```c
// t_string.c excerpt (conceptual)
int setCommand(client *c) {
    unique_lock(server.dict_lock);    // exclusive (writer) lock
    setKey(c->db, key, val);
    unique_unlock(server.dict_lock);
    // ...
}
```

As of 2025, KeyDB remains a niche but powerful tool, especially in cloud environments where CPU cores are plentiful and predictable low latency under concurrency matters more than strict serializability. Would you like a deeper analysis of KeyDB's active-replica architecture or its memory allocator modifications?

Each worker maintains its own aeEventLoop (async event library), epoll/kqueue fd set, and client list.

```c
// db.c excerpt (conceptual)
int getGenericCommand(client *c) {
    shared_lock(server.dict_lock);    // shared (reader) lock
    robj *o = lookupKey(c->db, c->argv[1]);
    shared_unlock(server.dict_lock);
    // ...
}
```
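The per-worker isolation described above can be sketched as a plain struct. Field names are illustrative, not KeyDB's actual definitions; only `aeEventLoop` corresponds to a real type (from Redis/KeyDB's `ae.c`), stubbed out here so the sketch compiles standalone.

```c
#include <pthread.h>

struct client;                        /* per-connection state, defined elsewhere */
typedef struct aeEventLoop { int dummy; } aeEventLoop;  /* stub for ae.c's type */

/* Illustrative per-worker state: every field is private to one thread,
 * so the hot accept/read/write path needs no locking at all. */
typedef struct workerThread {
    pthread_t tid;                    /* OS thread running this worker */
    aeEventLoop *el;                  /* this worker's own event loop */
    int event_fd;                     /* its private epoll/kqueue descriptor */
    struct client *clients;           /* clients pinned to this worker */
} workerThread;
```

Because a client is pinned to one worker for its lifetime, only cross-keyspace operations (and the blocking-command wakeups above) ever need synchronization.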

