March 16th, 2025

DiceDB

DiceDB is an open-source, reactive in-memory database optimized for modern hardware, outperforming Redis in throughput and latency, and encouraging community contributions under the BSD 3-Clause License.

DiceDB is an open-source, fast, reactive, in-memory database designed for modern hardware. It features query subscriptions that allow it to push result sets to users instead of requiring them to poll for changes. This design enhances performance, achieving higher throughput and lower median latencies, making it suitable for contemporary workloads. Performance benchmarks on a Hetzner CCX23 machine with 4 vCPUs and 16GB RAM show that DiceDB outperforms Redis in several metrics, including throughput (15,655 ops/sec compared to Redis's 12,267 ops/sec) and latency for GET and SET operations. DiceDB's GET and SET p50 latencies are 0.227 ms and 0.230 ms, respectively, while Redis's are 0.270 ms and 0.272 ms. The database is fully optimized to utilize the underlying hardware effectively, ensuring better performance. DiceDB is licensed under the BSD 3-Clause License, encouraging community contributions to enhance its capabilities. The project aims to provide a user-friendly experience for developers looking to implement a reactive database solution.
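
To illustrate the poll-versus-push distinction, here is a minimal, self-contained Go sketch. The Store type and its Watch method are hypothetical stand-ins invented for this example; DiceDB exposes the push model through query subscriptions such as GET.WATCH over its own protocol, not through this interface.

  // Hypothetical sketch of poll vs. push; not DiceDB's actual API.
  package main

  import (
      "fmt"
      "sync"
      "time"
  )

  // Store is a toy key-value store that can notify watchers on writes.
  type Store struct {
      mu       sync.RWMutex
      data     map[string]string
      watchers map[string][]chan string
  }

  func NewStore() *Store {
      return &Store{data: map[string]string{}, watchers: map[string][]chan string{}}
  }

  // Set writes the value and pushes it to every subscriber of the key.
  func (s *Store) Set(key, val string) {
      s.mu.Lock()
      s.data[key] = val
      subs := append([]chan string(nil), s.watchers[key]...)
      s.mu.Unlock()
      for _, ch := range subs {
          ch <- val
      }
  }

  func (s *Store) Get(key string) string {
      s.mu.RLock()
      defer s.mu.RUnlock()
      return s.data[key]
  }

  // Watch returns a channel that receives the value whenever key changes,
  // which is roughly what a query subscription gives you: no polling loop.
  func (s *Store) Watch(key string) <-chan string {
      ch := make(chan string, 1)
      s.mu.Lock()
      s.watchers[key] = append(s.watchers[key], ch)
      s.mu.Unlock()
      return ch
  }

  func main() {
      s := NewStore()

      // Push model: block until the store delivers the change.
      updates := s.Watch("greeting")
      go func() {
          time.Sleep(100 * time.Millisecond)
          s.Set("greeting", "hello")
      }()
      fmt.Println("pushed update:", <-updates)

      // Poll model, for contrast: the client pays a round trip per check.
      for s.Get("greeting") == "" {
          time.Sleep(10 * time.Millisecond)
      }
      fmt.Println("polled value:", s.Get("greeting"))
  }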

- DiceDB is an open-source, reactive in-memory database optimized for modern hardware.

- It features query subscriptions, allowing real-time updates without polling.

- Performance benchmarks show DiceDB outperforms Redis in throughput and latency.

- The database is designed for high throughput and efficient hardware utilization.

- Community contributions are encouraged under the BSD 3-Clause License.

32 comments
By @kiitos - about 1 month
There are _so many_ bugs in this code.

One example among many:

https://github.com/DiceDB/dice/blob/0e241a9ca253f17b4d364cdf... defines func ExpandID, which reads from cycleMap without locking the package-global mutex; and func NextID, which writes to cycleMap under a lock of the package-global mutex. So writes are synchronized, but only between each other, and not with reads, so concurrent calls to ExpandID and NextID would race.
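
In sketch form (identifiers simplified and reconstructed from the description above, not copied from the dice source), the bug and the straightforward fix look like this:

  // Simplified reconstruction of the reported race; not the actual dice code.
  package ids

  import "sync"

  var (
      mu       sync.RWMutex
      cycleMap = map[uint8]string{}
      counter  uint8
  )

  // NextID writes to cycleMap under the lock, so writes are synchronized
  // with each other -- but not with unlocked readers.
  func NextID(name string) uint8 {
      mu.Lock()
      defer mu.Unlock()
      counter++
      cycleMap[counter] = name
      return counter
  }

  // Racy: reading a Go map while another goroutine writes to it is a data
  // race and can fault with "concurrent map read and map write".
  func ExpandIDRacy(id uint8) string {
      return cycleMap[id]
  }

  // Fixed: take the read lock so reads are ordered against NextID's writes.
  func ExpandID(id uint8) string {
      mu.RLock()
      defer mu.RUnlock()
      return cycleMap[id]
  }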

This is all fine as a hobby project or whatever, but very far from any kind of production-capable system.

By @deazy - about 1 month
Looking at the DiceDB code base, I have a few questions regarding its design. I'm asking to understand the project's goals and design rationale, so anyone, feel free to help me understand this.

I could be wrong, but the primary in-memory storage appears to be a standard Go map with locking. Is this a temporary choice for iterative development, and is there a longer-term plan to adopt a more optimized or custom data structure?

I find DiceDB's reactivity mechanism very intriguing, particularly the "re-execution" of the entire watch command (i.e. re-running GET.WATCH mykey on key modification).

From what I understand, the Eval func executes client-side commands; this seems to be laying the foundation for more complex watch commands that can be evaluated before sending notifications to clients.
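
To make sure I'm reading it right, my mental model of the re-execution flow is roughly the sketch below (names invented for illustration; this is not DiceDB's actual code):

  // Rough mental model of watch-command re-execution; not DiceDB's code.
  package watch

  // Command is a registered watch command, e.g. "GET.WATCH mykey".
  type Command struct {
      Name string
      Key  string
  }

  // Subscriber receives full result sets, not just "key changed" events.
  type Subscriber struct {
      Out chan []byte
  }

  type registration struct {
      Cmd Command
      Sub *Subscriber
  }

  // Registry maps a key to the watch commands that depend on it.
  type Registry struct {
      byKey map[string][]registration
  }

  // Eval stands in for the normal command-evaluation path.
  type Eval func(cmd Command) []byte

  // OnKeyModified re-executes every watch command registered against the
  // key and pushes the fresh result set -- in contrast to Pub/Sub, which
  // would only broadcast that the key changed.
  func (r *Registry) OnKeyModified(key string, eval Eval) {
      for _, reg := range r.byKey[key] {
          result := eval(reg.Cmd)
          reg.Sub.Out <- result
      }
  }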

But I have the following questions.

What is the primary motivation behind re-executing the entire command, as opposed to simply notifying clients of a key change (as in Redis Pub/Sub or streams)? Is the intent to simplify client-side logic by handling complex key dependencies on the server?

Given that re-execution seems computationally expensive, especially with multiple watchers or more complex (hypothetical) watch commands, how are potential performance bottlenecks addressed?

How does this "re-execution" approach compare in terms of scalability and consistency to more established methods like server-side logic (e.g., Lua scripts in Redis) or change data capture (CDC)?

Are there plans to support more complex watch commands beyond GET.WATCH (e.g. JSON.GET.WATCH), and how would re-execution scale in those cases?

I'm curious about the trade-offs considered in choosing this design and how it aligns with the project's overall goals. Any insights into these design decisions would help me understand its use-cases.

Thanks

By @bdcravens - about 1 month
Is there a single sentence anywhere that describes what it actually is?
By @schmookeeg - about 1 month
Using an instrument of chance to name a data store technology is pretty amusing to me.
By @cozzyd - about 1 month
DiceDB sounds like the name of a joke database that returns random results.
By @weekendcode - about 1 month
From the benchmarks on 4 vCPUs and num_clients=4, the numbers don't look much different.

Reactivity looks promising, but it doesn't seem that useful in the real world for a cache. For example, if a client subscribes to something and the machine goes down, what happens to reactivity?

By @alexey-salmin - about 1 month

  | Metric               | DiceDB   | Redis    |
  | -------------------- | -------- | -------- |
  | Throughput (ops/sec) | 15655    | 12267    |
  | GET p50 (ms)         | 0.227327 | 0.270335 |
  | GET p90 (ms)         | 0.337919 | 0.329727 |
  | SET p50 (ms)         | 0.230399 | 0.272383 |
  | SET p90 (ms)         | 0.339967 | 0.331775 |
UPD Nevermind, I didn't have my eyes open. Sorry for the confusion.

Something I still fail to understand is where you can actually spend 20 ms answering a GET request in an in-RAM key-value store (unless you implement it in Java).

I never gained much experience with existing open-source implementations, but when I was building proprietary solutions at my previous workplace, the in-memory response time was measured in tens to hundreds of microseconds. The lower bound of latency is mostly defined by syscalls, so using io_uring should in theory result in even better timings, even though I never got to try it in production.

If you read from NVMe AND also do erasure recovery across 6 nodes (lrc-12-2-2), then yes, you get into tens of milliseconds. But seeing these numbers for a single-node RAM DB just doesn't make sense, and I'm surprised everyone treats them as normal.

Does anyone have experience with low-latency, high-throughput open-source key-value stores? Any specific implementation to recommend?

By @OutOfHere - about 1 month
An in-memory cache (lacking persistence) shouldn't be called a database. It's not totally incorrect, but it's an abuse of terminology. Why is a Python dictionary not an in-memory key-value database?
By @ac130kz - about 1 month
Any reason to use this over Valkey, which is now faster than Redis and community driven? Genuinely interested.
By @losvedir - about 1 month
I didn't see it in the docs, but I'd want to know the delivery semantics of the pubsub before using this in production. I assume best effort / at most once? Any retries? In what scenarios will the messages be delivered or fail to be delivered?
By @remram - about 1 month
This seems orders of magnitude slower than Nubmq which was posted yesterday: https://news.ycombinator.com/item?id=43371097
By @huntaub - about 1 month
What are some example use cases where having the ability for the database to push updates to an application would be helpful (vs. the traditional polling approach)?
By @alexpadula - about 1 month
15,655 ops a second on a Hetzner CCX23 machine with 4 vCPUs and 16GB RAM is rather slow for an in-memory database, I hate to say it. You can't blame that on network latency: supermassivedb.com, for example, is written in Go and achieves far more, 20x actually, and it's persisted. I must investigate the bottlenecks with Dice.
By @rebolek - about 1 month
- proudly open source. cool!
- join discord. YAY :(
By @throwaway2037 - about 1 month
FYI: Here is the creator and maintainer's profile: https://github.com/arpitbbhayani

Is there a plan to commercialise this product? (Offer commercial support, features, etc.) I could not find anything obvious from the home page.

By @sidcool - about 1 month
Is Arpit the system design course guy?
By @Aeolun - about 1 month
I feel like this needs a ‘Why DiceDB instead of Redis or Valtio’ section prominently on the homepage.
By @DrammBA - about 1 month
I love the "Follow on twitter" link with the old logo and everything, they probably used a template that hasn't been updated recently but I'm choosing to believe it's actually a subtle sign of protest or resistance.
By @datadeft - about 1 month
Is this suffering from the same problems as Redis when trying to scale horizontally?
By @re-lre-l - about 1 month
> For Modern Hardware fully utilizes underlying core to get higgher throughput and better hardware utilization.

Would be great to disclose the details of this. I'm interested in what DiceDB uses to achieve the higher throughput.

By @robertlagrant - about 1 month
> fully utilizes underlying core to get higgher throughput and better hardware utilization

FYI this is a misspelling of "higher"

By @nylonstrung - about 1 month
Who is this for? Can you help me understand why and when I'd want to use this in place of Redis/Dragonfly?
By @deadbabe - about 1 month
I think Postgres can do everything this does and better if you use LISTEN/NOTIFY.
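
The basic shape in Go with github.com/lib/pq looks roughly like this (a minimal sketch; the DSN, channel name, and payload are placeholders, and reconnection/error handling is barely touched):

  // Minimal LISTEN/NOTIFY sketch using github.com/lib/pq; DSN, channel
  // name, and payload are placeholders.
  package main

  import (
      "fmt"
      "time"

      "github.com/lib/pq"
  )

  func main() {
      dsn := "postgres://user:pass@localhost/app?sslmode=disable" // placeholder

      reportProblem := func(ev pq.ListenerEventType, err error) {
          if err != nil {
              fmt.Println("listener error:", err)
          }
      }

      // The listener keeps its own connection and reconnects automatically.
      listener := pq.NewListener(dsn, 10*time.Second, time.Minute, reportProblem)
      if err := listener.Listen("key_changed"); err != nil {
          panic(err)
      }

      // Elsewhere, a trigger or application code runs:
      //   NOTIFY key_changed, 'mykey';
      // or: SELECT pg_notify('key_changed', 'mykey');
      for n := range listener.Notify {
          if n == nil {
              continue // a nil notification is delivered after a reconnect
          }
          fmt.Println("key changed:", n.Extra) // n.Extra carries the payload
      }
  }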
By @999900000999 - about 1 month
I like it!

Any way to persist data in case of reboots?

That's the only thing missing here.

Is Go the only SDK ?

By @retropragma - about 1 month
Why would I use this over keyspace notifications in redis?
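
For reference, keyspace notifications look roughly like this with github.com/redis/go-redis/v9 (a sketch; notifications must be enabled first, and delivery is fire-and-forget Pub/Sub, so you only learn that a key changed, not its new value):

  // Sketch of Redis keyspace notifications; address and key are placeholders.
  package main

  import (
      "context"
      "fmt"

      "github.com/redis/go-redis/v9"
  )

  func main() {
      ctx := context.Background()
      rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

      // Keyspace events are off by default; "KEA" enables all of them.
      if err := rdb.ConfigSet(ctx, "notify-keyspace-events", "KEA").Err(); err != nil {
          panic(err)
      }

      // __keyspace@0__:mykey fires on every operation touching "mykey".
      sub := rdb.PSubscribe(ctx, "__keyspace@0__:mykey")
      defer sub.Close()

      for msg := range sub.Channel() {
          // msg.Payload is the event name (e.g. "set", "del"), not the value.
          fmt.Printf("%s: %s\n", msg.Channel, msg.Payload)
      }
  }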
By @rednafi - about 1 month
Database as a transport?
By @spiderfarmer - about 1 month
DiceDB is an in-memory, multi-threaded key-value DBMS that supports the Redis protocol.

It’s written in Go.

By @bitlad - about 1 month
I think the performance benchmark you have done for DiceDB is fake.

These are the real numbers - https://dzone.com/articles/performance-and-scalability-analy...

It does not match your benchmarks.