April 25th, 2025

70% Size, 100% Accuracy: Lossless LLM Compression via Dynamic-Length Float

The paper presents DFloat11, a compression framework that reduces large language model size by about 30% while preserving bit-for-bit accuracy, improving generation throughput over CPU-offloading alternatives, and enabling lossless inference of very large models on a single multi-GPU node.

The paper titled "70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float" introduces a compression framework called Dynamic-Length Float (DFloat11) that reduces the size of large language models (LLMs) by about 30% while preserving bit-for-bit identical outputs. The authors, Tianyi Zhang and collaborators, observe that the BFloat16 weight representation used by LLMs carries less information than its 16 bits suggest, and DFloat11 exploits this through entropy coding: weights are assigned dynamic-length encodings based on their frequency, achieving near information-optimal compression without any loss of precision. To support efficient inference, the authors develop a custom GPU kernel for fast online decompression, using strategies such as compacting memory-intensive lookup tables and performing decompression at the transformer-block level to minimize latency. Experimental results show that, compared with offloading parts of an uncompressed model to the CPU, DFloat11 achieves 1.9-38.8x higher token-generation throughput, and within a fixed GPU memory budget it supports substantially longer context lengths. It also enables lossless inference of very large models, such as Llama-3.1-405B, on a single node with multiple GPUs. The authors provide access to their code and models for further research and application.

- DFloat11 reduces LLM size by 30% while preserving accuracy.

- The framework utilizes dynamic-length encodings based on weight frequency.

- It achieves significantly higher token-generation throughput than CPU-offloading of uncompressed models.

- The method allows for longer context lengths within the same GPU memory constraints.

- Lossless inference of large models is achievable on a single node with multiple GPUs.
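
To make the encoding idea concrete, here is a minimal, illustrative Python sketch of frequency-based (Huffman-style) coding applied to the BFloat16 exponent field, with the sign and mantissa bits stored verbatim. This is a toy reconstruction of the general technique using assumed random weights, not the paper's DFloat11 format or its GPU decompression kernel.

# Toy sketch: Huffman-code the BFloat16 exponent field (frequent exponents get
# short codes) and keep sign/mantissa bits as-is. Illustrative only; the real
# DFloat11 format and GPU kernel are more involved.
import heapq
from collections import Counter
import numpy as np

def build_huffman_code(freqs):
    # Each heap entry: (total_weight, tiebreaker, {symbol: partial_code})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Stand-in for a real weight matrix (small values, as is typical of LLM weights)
weights = (np.random.randn(1_000_000) * 0.02).astype(np.float32)
bf16 = (weights.view(np.uint32) >> 16).astype(np.uint16)  # truncate fp32 -> bf16 bit patterns
exponents = ((bf16 >> 7) & 0xFF).tolist()                 # the 8-bit exponent field

code = build_huffman_code(Counter(exponents))
exponent_bits = sum(len(code[e]) for e in exponents)
total_bits = exponent_bits + len(exponents) * (1 + 7)     # sign + mantissa stored raw
print(f"{total_bits / len(exponents):.2f} bits/weight vs 16 for BFloat16")

A 30% reduction from 16-bit BFloat16 corresponds to roughly 11 effective bits per weight, which is where the paper's title figure comes from.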

AI: What people are saying
The comments on the DFloat11 compression framework reveal several key themes and insights regarding its implications and performance.
  • Many users highlight the significant practical benefits of enabling lossless inference for large models, particularly for research labs and startups.
  • There is a discussion about the efficiency of DFloat11 compared to existing methods, with some users questioning its performance at smaller batch sizes.
  • Several comments express excitement about the rapid advancements in machine learning and the potential for further optimizations in hardware.
  • Concerns are raised about the specific applicability of DFloat11, particularly regarding its effectiveness with non-BFloat16 models.
  • Some users seek clarification on the term "lossless" in the context of DFloat11, questioning its conventional meaning in compression.
22 comments
By @jhj - about 12 hours
This is just a consequence of the fact that bfloat16 has a very high dynamic range which is not all used. People like hyperparameters that look like 0.01, not 10^10, even though the same fractional precision is available at each exponent. If you multiplied everything in a network (hyperparameters, initialized weights, training data, etc.) by 10^6, things would still work more or less the same, since the upper range is hardly used (with the possible exception of some small number of special functions).

Typical entropy of bfloat16 values seen in weights (and activations) is about 10-12 bits (only 65-75% or so of the value range is used in practice). Sign and mantissa bits tend to be incompressible noise.

This has been exploited several times before in the context of both classical HPC and AI, with lossless compression work from Martin Burtscher's lab (https://userweb.cs.txstate.edu/~burtscher/), fpzip from LLNL (https://computing.llnl.gov/projects/fpzip), and my library dietgpu from 2021 (https://github.com/facebookresearch/dietgpu), which we used to speed up training on a large GPU cluster by about 10% wall-clock time overall by losslessly compressing all data prior to send and decompressing upon receive (e.g., gradients, weights from backup, etc.). Since it is lossless, the cluster still computes exactly the same thing as it did before.

Also, rANS is more efficient and easier to implement in SIMD-like instruction sets than Huffman coding. It would also reduce the latency/throughput penalties DFloat11 pays (since we have to decompress before we do the arithmetic).
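
For readers who want to check entropy numbers like these on their own checkpoints, here is a rough PyTorch sketch that measures the empirical Shannon entropy of each BFloat16 field separately; the random tensor is just an assumed stand-in for a real weight matrix.

# Rough sketch: measure the empirical entropy (bits/symbol) of the sign,
# exponent, and mantissa fields of a bfloat16 weight tensor, to see where
# the compressible redundancy actually lives.
import torch

def field_entropy(values, num_symbols):
    counts = torch.bincount(values.flatten(), minlength=num_symbols).float()
    probs = counts[counts > 0] / counts.sum()
    return float(-(probs * probs.log2()).sum())

w = (torch.randn(4096, 4096) * 0.02).to(torch.bfloat16)  # stand-in for a real layer
bits = w.view(torch.int16).int() & 0xFFFF                 # reinterpret as raw 16-bit patterns

sign = (bits >> 15) & 0x1
exponent = (bits >> 7) & 0xFF
mantissa = bits & 0x7F

print("sign bits    :", round(field_entropy(sign, 2), 2), "of 1")
print("exponent bits:", round(field_entropy(exponent, 256), 2), "of 8")
print("mantissa bits:", round(field_entropy(mantissa, 128), 2), "of 7")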

By @badmonster - about 14 hours
What stands out most is the practical implication: enabling lossless inference of a 405B-parameter model on a single node with 8×80GB GPUs is wild. That’s a huge unlock for research labs and startups alike that want to run frontier models without massive infrastructure costs.
By @loufe - about 14 hours
I'm so grateful to live through such exciting times. I can open HN every couple of days to some exciting news about ML/transformer models. I really should read more into it, but does llama.cpp use a "custom kernel" per se, with cuBLAS, or is it just making good use of the cuBLAS kernel?
By @Animats - about 12 hours
Once this weight format war settles down, hardware can be built to support it. Presumably you want matrix multiply hardware optimized for whatever weight format turns out to be reasonably optimal.
By @aseligman - about 11 hours
Some additional context: many real-world agent use cases struggle to balance quality, cost, and performance. This technique can help avoid the tradeoffs that quantization techniques introduce, including unpredictable results while you try to cost-optimize an agent. In some cases the cost savings can be significant when using DFloat11 to squeeze into more affordable GPUs.

* I work with xmad.ai

By @gitroom - about 5 hours
Pretty cool seeing how fast all this moves - feels like every week there's a new trick or hardware upgrade. I def get nerd sniped by these efficiency improvements lol.
By @yjftsjthsd-h - about 13 hours
> Compared to a potential alternative of offloading parts of an uncompressed model to the CPU to meet memory constraints, DFloat11 achieves 1.9-38.8x higher throughput in token generation. With a fixed GPU memory budget, DFloat11 enables 5.3-13.17x longer context lengths than uncompressed models.

The context length alone probably makes it worthwhile even if your models fit in memory, but I'm curious whether it improves tokens/sec even when everything is on the GPU, since in my very amateur understanding LLMs tend to be constrained by memory bandwidth?
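
Where the context-length gain comes from is simple accounting: whatever memory the compressed weights free up goes to the KV cache. A back-of-the-envelope Python sketch follows, with all numbers (GPU size, parameter count, KV-cache geometry) assumed for illustration rather than taken from the paper.

# Back-of-the-envelope: a fixed GPU memory budget split between weights and
# KV cache. Shrinking the weights by ~30% frees memory that goes entirely to
# the KV cache, which is what stretches the maximum context length.
GPU_MEM_BYTES = 24e9                      # assumed 24 GB card
PARAMS = 8e9                              # assumed 8B-parameter model
LAYERS, KV_HEADS, HEAD_DIM = 32, 8, 128   # assumed KV-cache geometry

# Per-token KV cache: K and V, per layer, per KV head, HEAD_DIM values, 2 bytes each (BF16)
kv_bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2

def max_context_tokens(bytes_per_weight):
    weight_bytes = PARAMS * bytes_per_weight
    return int((GPU_MEM_BYTES - weight_bytes) / kv_bytes_per_token)

print("BF16 weights (16 bits):", max_context_tokens(2.0), "tokens")
print("DF11 weights (~11 bits):", max_context_tokens(2.0 * 0.70), "tokens")

With these assumed numbers the gain is modest (roughly 1.6x), and it grows sharply as the uncompressed weights approach the memory limit, which is presumably the regime behind the paper's 5.3-13.17x figures.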

By @thund - about 10 hours
Is this different than ZipNN? https://arxiv.org/pdf/2411.05239

I see it mentioned but can’t understand if it’s based on it or different/better…

By @firefoxd - about 6 hours
Someone has figured out how to compress images even further with LLMs. They've been promising to publish a white paper since last year: https://getproxyai.com/blog/this-image-is-4KB

/s I'll show myself out

By @wills_forward - about 14 hours
So this could universally decrease the memory requirements of unquantized LLMs by 30%? Seems big if true.
By @jsemrau - about 8 hours
I still hold the opinion that ternary instead of binary would lead to an even higher degree of compression.
By @mountainriver - about 14 hours
Is it possible to run this on new models? It seems like the code is only for inference, unless I'm misunderstanding.
By @luotuoshangdui - about 13 hours
Does it affect speed?
By @aazo11 - about 11 hours
This is a huge unlock for on-device inference. The download time of larger models makes local inference unusable for non-technical users.
By @marksimi - about 13 hours
Time to (dynamically) float
By @iamnotagenius - about 14 hours
Interesting, but not exactly practical for a local LLM user, as 4-bit is how LLMs are run locally.
By @ein0p - about 14 hours
Note that this is _way_ slower at the small batch sizes you'd need for interactive use. At batch size 1 this seems to run at 1/3rd the speed of bf16 (so about 1/6th the speed of the fp8 you'd realistically be using), if Figure 5 is to be believed. This is actually a pretty impressive feat in itself if you know anything about GPU kernel programming, but it is much slower nevertheless. For this to work at "wire speed" it'd need hardware support, which takes years. Their "baseline" elsewhere in the paper is CPU offloading, which is dog slow and can't be made fast due to the PCIe bottleneck.
By @hchja - about 13 hours
This is pretty useless in any case that doesn’t involve BFloat16 models
By @anticensor - about 12 hours
This is just a VBR mode for neural networks. Not quite useful when inference is already quite slow.
By @Havoc - about 14 hours
I'm guessing by lossless they mean something other than what the word usually means in compression context?

>achieving near information-optimal compression without any loss of precision

So perhaps more lossless as in didn't lose perplexity/benchmarks?

In my mind lossless is precisely zero bits lost along the way.
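
For what it's worth, the paper uses "lossless" in the conventional sense: the decompressed weights are bit-for-bit identical to the original BFloat16 weights, so the model computes exactly the same outputs. A tiny numpy illustration of the bit-for-bit criterion (the sign/exponent/mantissa split is just to show what a bit-exact round trip means, not the paper's storage format):

# "Lossless" = bit-for-bit: splitting BF16 bit patterns into sign/exponent/
# mantissa and reassembling them reproduces exactly the original bits, so a
# decode(encode(w)) round trip returns the original weights, not an approximation.
import numpy as np

w = (np.random.randn(1000).astype(np.float32).view(np.uint32)) >> 16  # BF16 bit patterns
sign, exp, man = (w >> 15) & 0x1, (w >> 7) & 0xFF, w & 0x7F
reassembled = (sign << 15) | (exp << 7) | man
print("bit-for-bit identical:", np.array_equal(w, reassembled))  # True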