Valkey's lean memory tactics amid global DRAM crunch
The technology sector is currently navigating a significant DRAM shortage. As hardware becomes a bottleneck for technology programs, the burden of performance has shifted back to the software layer, requiring teams to prioritize lean architecture and nimble resource management.
In a conversation with Laura Czajkowski, Head of Community at Percona, and Martin Visser, Valkey Technical Lead at Percona, we explore how focusing on internal data infrastructure and refined memory allocation allows developers to maintain high-speed user experiences even within constrained hardware budgets.
How are you assessing DRAM pressures right now? Are there signals or thresholds that would be big red flags?
Laura Czajkowski, Head of Community, Percona:
The canary in the coal mine right now is the consumer market for DRAM. Consumer devices, small to large, are seeing price increases for products with more RAM, and as a consequence consumers are questioning applications that use a lot of RAM. It's a fair question: if my brand-new device can't run the latest game or productivity app, is it really a good piece of software?
Consumer decision-making is obviously very different from how enterprises think about things, but the same fundamental question remains: "if the compute in my budget now has less RAM, how do I make it work?" Teams will have to wrestle with this as infrastructure planning comes around, and the best way to weather this sort of shortage is to pivot to what you do have control over: the software.
So, we're listening carefully for big questions from our users about this: what we don't want is for users to absorb the hit by tolerating slow services. No one wants that - no one wins. We want to be ready with solutions that help users optimize software and services instead of degrading performance.
Has the industry seen DRAM supply issues like this in the past?
Martin Visser, Valkey Technical Lead, Percona:
DRAM shortages have happened before, but the dynamics and the magnitude were vastly different from today's. In the past, the industry has seen shortages driven by things like fire-damaged factories, supplier consolidation, and even changing consumer behavior during the initial rise of smartphones. Some might also recall the CPU shortage that occurred in 2020 during the global pandemic. You can even find news stories from around a decade ago where analysts were wringing their hands because shortages were causing DRAM prices to not fall as fast as in previous years.
In technology, we expect last year's hardware to cost less than this year's. When that inverts, as this shortage appears to be causing, software has to pick up the slack. Developers have to fight against a compounded Wirth's Law ('software manages to outgrow hardware in size and sluggishness'; in other words, software gets slower faster than hardware gets faster). So, developers are not only contending with the typical problem of creeping software sluggishness but also with slowed, or even reversed, hardware progress.
Valkey is not typically CPU-bound (like a lot of software) but rather memory-bound, so we think about RAM consumption a lot. Last year, before the shortage was apparent, we were already thinking about how to beat Wirth's Law in Valkey by introducing several optimizations that made newer versions use less memory than previous ones.
Do you expect changes to pricing in the next six months? In a year?
Laura:
It's hard to say, but it's going to be a while. My sense is that we're in a holding pattern: no one knows today whether this is a blip and things revert to the old normal, or whether this is a new normal and DRAM manufacturing will scale up to compensate. Scaling up manufacturing isn't easy, fast, or cheap.
Considering all of this and being on the software side, the wager is pretty simple. Optimize your stack now, paying close attention to memory usage. This is a durable advantage: even if DRAM prices normalize, the investment in optimizing things like your cache, an expensive part of any application architecture even under normal conditions, pays dividends. If you don't optimize this sort of thing now and expensive DRAM is a protracted problem, you're going to pay steeply either in budget or poor user experience (impacting engagement, conversion, etc.).
How is Valkey ensuring that DRAM usage is sustainable for users? How are these decisions made and how does that impact the project?
Martin:
I don't want to say that performance is a solved problem for Valkey; it's not, and we'll still make leaps in this area in future releases. But the Valkey project has a specific interest in lowering memory overhead. Valkey uses system memory as storage, so unlike many other databases, lowering RAM usage directly translates to more capacity for users.
Over the past year, Valkey made many small optimizations to internal structures used throughout the software, saving handfuls of bits and bytes in individual places that add up to massively lower RAM consumption at scale: measured by how many keys can be stored per node, Valkey 8.0 is up to 20% more efficient than Valkey 7.2. Additionally, Valkey added features that enable refactoring of key names to reduce repetition in namespacing, empowering users to make small changes that drive down overall memory usage. We really sweat memory usage in all our decisions: anything that uses more memory is often a non-starter unless there is an extremely compelling reason. The multi-database support in cluster mode added in Valkey 9.0 adds more consolidation options, so you can reduce resource usage there too. Additionally, Valkey is written in C, so allocating memory is very apparent to those developing Valkey, compared to higher-level languages which abstract developers away from allocation.
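To put a rough number on the namespacing point, here is a minimal back-of-the-envelope sketch in plain Python (no Valkey connection needed; the prefix names and key count are hypothetical, for illustration only) of what shortening a repeated key-name prefix can save:

```python
# Rough estimate of RAM saved by shortening a repeated key-name prefix.
# Prefixes and key count below are hypothetical examples.
long_prefix = "myapp:production:user-session:"   # 30 bytes repeated in every key
short_prefix = "s:"                              # 2 bytes repeated in every key
num_keys = 10_000_000

saved_per_key = len(long_prefix) - len(short_prefix)
total_saved_mib = saved_per_key * num_keys / (1024 * 1024)
print(f"~{total_saved_mib:.0f} MiB saved across {num_keys:,} keys")
```

Real savings depend on allocator overhead and key encoding, so it's worth measuring with the `MEMORY USAGE` command on representative keys rather than trusting the arithmetic alone.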
Does Valkey have any guidance for users it can offer now?
Martin:
Optimize, measure, and optimize again. There are a few levers to pull for Valkey. Our more recent releases stack up nicely on lower memory usage (and, well, increased throughput and better latency too), and the project has a very rigorous commitment to no breaking changes in our API. So, counter to a lot of other software: upgrade for lower memory usage. Of course, along the way you can observe the changes to measure the difference. You should also review your memory settings and eviction policy.
Valkey is a very forgiving piece of software, so using it suboptimally still adds value. However, now is the time to dive deeper into Valkey and find ways to tailor your usage to squeeze out better efficiency. Evaluating your usage against the different data structures available in Valkey can make a big difference in memory consumption, sometimes up to 50% just by picking the right structure.
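For the memory settings and eviction policy mentioned above, a minimal valkey.conf sketch might look like the following (the values are placeholders to adapt, not recommendations):

```
# Cap memory usage so the eviction policy can engage before the host runs out
maxmemory 2gb

# Evict least-frequently-used keys among those with a TTL set;
# allkeys-lru and volatile-lru are common alternatives worth testing
maxmemory-policy volatile-lfu
```

The right cap and policy depend entirely on your workload, which is why measuring before and after any change matters.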
Are there steps users can take right now to reduce RAM pressure without upending their software?
Martin:
Absolutely. If you're on any other legacy RESP-compatible software, migrate to one of our recent Valkey releases (9.0.x). If you're on an older version of Valkey, upgrade to a newer one. Valkey keeps a very stable API, so software that depends on Valkey should need no changes or upending.
From there, the name of the game is efficiency. If you're already on the latest release and you're using volatile keys, check whether you're using the right eviction policy: every usage scenario differs, so test least recently used (LRU) against least frequently used (LFU). Additionally, tuning the time-to-live on volatile keys can shrink needless retention of data. Of course, this has to be done carefully to balance any user impact if approached too aggressively.
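As a sketch of the TTL lever (the key names are hypothetical, and the commands assume a running Valkey instance, so treat this as illustration):

```
# Store a session with a 30-minute TTL instead of no expiry
valkey-cli SET session:abc123 "payload" EX 1800

# Inspect the remaining lifetime, then tighten it if retention is too long
valkey-cli TTL session:abc123
valkey-cli EXPIRE session:abc123 600
```

Shorter TTLs mean less resident data, but set them against real user behavior: expiring a session a user is still using trades memory for a worse experience.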