July 30th, 2024

A Visual Guide to LLM Quantization

Quantization reduces the memory footprint of large language models by converting high-precision parameters to lower-precision formats while aiming to preserve accuracy. Methods covered include symmetric and asymmetric quantization.


Large Language Models (LLMs) often require significant computational resources due to their size, typically containing billions of parameters. To address the challenges of running these models on consumer hardware, quantization has emerged as a key technique for reducing their memory footprint. This process involves converting high-precision parameters, such as 32-bit floating-point numbers, into lower-precision formats like 8-bit integers. While quantization can lead to some loss of precision, it aims to maintain model accuracy while minimizing storage requirements.
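For a sense of scale, a back-of-the-envelope sketch (the 7B-parameter model here is a hypothetical example, not one taken from the article):

```python
# Rough memory needed just to store the weights of a 7B-parameter model
# at different precisions (activations, KV cache, and overhead are ignored).
PARAMS = 7_000_000_000

for name, bytes_per_param in [("FP32", 4), ("FP16/BF16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name:10s} ~{gib:5.1f} GiB")

# FP32 ~26.1 GiB, FP16/BF16 ~13.0 GiB, INT8 ~6.5 GiB, INT4 ~3.3 GiB
```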

The article outlines various quantization methods, including symmetric and asymmetric quantization. Symmetric quantization maps the range of floating-point values to a range centered on zero, while asymmetric quantization adds a zero-point offset so that differing minimum and maximum values can be represented. Calibration techniques determine the range of values that minimizes quantization error; this is straightforward for weights and biases, which are static after training, but harder for activations, which vary with the input data.
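As a minimal NumPy sketch of the two mappings just described (an illustration of the general idea, not code from the article; per-tensor INT8 quantization is assumed):

```python
import numpy as np

def symmetric_quantize(x: np.ndarray, bits: int = 8):
    """Absmax (symmetric): map [-max|x|, +max|x|] onto the signed integer range."""
    qmax = 2 ** (bits - 1) - 1                      # 127 for INT8
    scale = np.max(np.abs(x)) / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale                                 # dequantize with q * scale

def asymmetric_quantize(x: np.ndarray, bits: int = 8):
    """Zero-point (asymmetric): map [min(x), max(x)] onto the unsigned integer range."""
    qmin, qmax = 0, 2 ** bits - 1                   # [0, 255] for UINT8
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point                     # dequantize with (q - zero_point) * scale

x = np.random.randn(1024).astype(np.float32)
q_s, s = symmetric_quantize(x)
q_a, s2, zp = asymmetric_quantize(x)
print("max abs error, symmetric: ", np.max(np.abs(q_s * s - x)))
print("max abs error, asymmetric:", np.max(np.abs((q_a.astype(np.int32) - zp) * s2 - x)))
```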

Post-Training Quantization (PTQ) is a common approach in which model parameters are quantized after training, using either dynamic or static methods to handle activations: dynamic quantization computes the quantization parameters for activations on the fly during inference, while static quantization derives them in advance from a calibration dataset. Overall, quantization is a critical area of research aimed at making LLMs more accessible and efficient in practice, balancing the trade-off between model size and performance.
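A rough sketch of the difference between the two PTQ flavours for activations (the function names and structure here are illustrative, not from any particular library):

```python
import numpy as np

def _range_to_params(lo: float, hi: float):
    """Map the float range [lo, hi] onto UINT8 (scale, zero_point)."""
    scale = (hi - lo) / 255.0
    zero_point = int(np.clip(round(-lo / scale), 0, 255))
    return scale, zero_point

# Static PTQ: run a small calibration set through the model once, record the
# observed activation range, and freeze (scale, zero_point) for all later inference.
def calibrate_static(calibration_activations):
    lo = min(float(a.min()) for a in calibration_activations)
    hi = max(float(a.max()) for a in calibration_activations)
    return _range_to_params(lo, hi)

# Dynamic PTQ: no calibration pass; the parameters are recomputed from the
# activations actually observed at each inference step.
def quantize_dynamic(activation: np.ndarray):
    scale, zero_point = _range_to_params(float(activation.min()), float(activation.max()))
    q = np.clip(np.round(activation / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point
```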

8 comments
By @danieldk - 4 months
This is really an awesome introduction to quantization! One small comment about the GPTQ section:

It uses asymmetric quantization and does so layer by layer such that each layer is processed independently before continuing to the next

GPTQ also supports symmetric quantization and almost everyone uses it. The problem with GPTQ asymmetric quantization is that all popular implementations have a bug [1] where all zero/bias values of 0 are reset to 1 during packing (out of 16 possible biases in 4-bit quantization), leading to quite a large loss in quality. Interestingly, it seems that people initially observed that symmetric quantization worked better than asymmetric quantization (which is very counter-intuitive, but made GPTQ symmetric quantization far more popular) and only discovered later that it is due to a bug.

[1] https://notes.danieldk.eu/ML/Formats/GPTQ#Packing+integers
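To see why resetting a zero point of 0 to 1 hurts, a toy illustration of asymmetric 4-bit dequantization (this is not the actual GPTQ packing code, just a sketch of the effect described above):

```python
import numpy as np

# Asymmetric dequantization: x ≈ (q - zero_point) * scale.
q = np.array([0, 3, 7, 12, 15])   # 4-bit codes, each in [0, 15]
scale = 0.05

correct = (q - 0) * scale   # group whose true zero point is 0
buggy   = (q - 1) * scale   # same group after the zero point is reset to 1

print(correct)  # [0.   0.15 0.35 0.6  0.75]
print(buggy)    # every weight in the group shifted down by one full step (0.05)
```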

By @jillesvangurp - 4 months
Fairly helpful overview. One thing that probably has a good answer is why we use floats at all, even at 32 bits. Is there an advantage relative to using just 32-bit ints? It seems integer math is a lot easier to do in hardware. Back when I was young, you had to pay extra to get floating-point hardware support in your PC; it required a co-processor. I'm assuming that is still somewhat true in terms of the number of transistors needed on chips.

Intuitively, I like the idea of asymmetric scales as well. Treating all values as equal seems like it's probably wasteful in terms of memory. It would be interesting to see where typical values fall statistically in an LLM. I bet it's nowhere near a random distribution of values.

By @hazrmard - 4 months
I've read the huggingface blog on quantization, and a plethora of papers such as `bitsandbytes`. This was an approachable agglomeration of a lot of activity in this space with just the right references at the end. Bookmarked!

By @woodson - 4 months
It’s a shame that the article didn’t mention AWQ 4-bit quantization, which is quite widely supported in libraries and deployment tools (e.g. vLLM).

By @torginus - 4 months
I've long held the assumption that neurons in networks are just logic functions: you could write out their truth tables by taking all combinations of their input activations and design a logic network that matches them exactly. Thus 1-bit 'quantization' should be enough to perfectly recreate any neural network for inference.

By @llm_trw - 4 months
This is a very misleading article.

Floats are not distributed evenly across the number line: the number of floats between 1 and 2 is the same as the number between 2 and 4, then between 4 and 8, and so on, so precision drops as magnitude grows. Quantising well to integers means taking this varying sensitivity into account, since the spacing between integer codes is always the same.
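A quick way to see the uneven spacing described above (NumPy's `np.spacing` returns the gap to the next representable float):

```python
import numpy as np

for v in [0.5, 1.0, 8.0, 1024.0]:
    print(f"gap after {v:>7}: {np.spacing(np.float32(v)):.3e}")

# The gap roughly doubles each time the magnitude doubles, whereas the step
# between consecutive integer codes (after scaling) is constant across the range.
```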

By @dleeftink - 4 months
What an awesome collection of visual mappings between process and output, immediately gripping, visually striking and thoughtfully laid out. I'd love to hear more about the process behind them, a hallmark of exploratory visualisation.

By @cheptsov - 4 months
I wonder why AWQ is not mentioned. It’s pretty popular and I always was curious how it is different from GPTQ.