Original Paper: https://arxiv.org/abs/2211.10438

By: Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao Wu, Julien Demouth, Song Han

Abstract:

Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference.

However, existing methods cannot maintain accuracy and hardware efficiency at the same time.

We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs.

Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by offline migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation.
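To make this concrete, here is a minimal numeric sketch of that equivalence (not the authors' code): dividing each activation channel by a per-channel smoothing factor s and multiplying the corresponding weight row by the same factor leaves the layer output unchanged while shrinking activation outliers. The formula for s with migration strength alpha follows the paper; the tensor shapes below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))      # activations: 4 tokens x 8 input channels (illustrative)
W = rng.normal(size=(8, 16))     # weights: 8 input channels x 16 output features
X[:, 3] *= 50.0                  # one activation channel carries large outliers

# Per-channel smoothing factor: s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)
alpha = 0.5
s = np.abs(X).max(axis=0) ** alpha / np.abs(W).max(axis=1) ** (1 - alpha)

X_smooth = X / s                 # activation outliers are scaled down
W_smooth = W * s[:, None]        # weights absorb the scaling offline

# Mathematically equivalent: the matmul output is unchanged.
assert np.allclose(X @ W, X_smooth @ W_smooth)
print(np.abs(X).max(), np.abs(X_smooth).max())   # activation range shrinks
```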

SmoothQuant enables an INT8 quantization of both weights and activations for all the matrix multiplications in LLMs, including OPT, BLOOM, GLM, MT-NLG, Llama-1/2, Falcon, Mistral, and Mixtral models.

We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. SmoothQuant enables serving 530B LLM within a single node. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs.


Summary Notes


Figure: In recent years, the model size of large language models has grown much faster than GPU memory, creating a large gap between memory demand and supply. Quantization and model compression techniques can help bridge this gap.

Introduction

Large Language Models (LLMs) like GPT-3, BLOOM, and MT-NLG have revolutionized natural language processing thanks to their impressive performance across a myriad of tasks.

However, their immense size, often hundreds of billions of parameters, makes them compute- and memory-intensive, driving up the cost of deployment and inference.

Enter SmoothQuant, a novel post-training quantization (PTQ) technique that promises to deliver efficient and accurate quantization for these behemoths, enabling 8-bit weight and activation (W8A8) quantization without sacrificing performance.

The Challenge: Quantization of LLMs

Quantization reduces the numerical precision used to represent model parameters and activations (e.g., from FP16 to INT8), which shrinks memory usage and accelerates computation.
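As a rough illustration of the idea (not the paper's implementation), the sketch below applies symmetric per-tensor INT8 quantization: every element shares one scale derived from the tensor's maximum absolute value. The helper names are hypothetical.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map floats onto [-127, 127]."""
    scale = max(np.abs(x).max(), 1e-8) / 127.0           # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the INT8 codes."""
    return q.astype(np.float32) * scale
```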

While it has been successfully applied to smaller models, quantizing LLMs poses a unique challenge due to the presence of activation outliers—values with significantly larger magnitudes than the rest.

These outliers stretch the quantization range, leading to a loss of precision for the majority of activation values, and ultimately, a drop in model accuracy.
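A tiny numeric example (made-up values) shows the effect: with one value far larger than the rest, the shared INT8 scale is set by that outlier, and the remaining values collapse onto a few coarse quantization levels.

```python
import numpy as np

# Hypothetical activation vector: mostly small values plus one large outlier.
acts = np.array([0.3, -0.5, 0.8, -0.2, 60.0], dtype=np.float32)

scale = np.abs(acts).max() / 127.0    # ~0.47: the outlier alone sets the step size
q = np.clip(np.round(acts / scale), -127, 127).astype(np.int8)
recon = q.astype(np.float32) * scale

# The small values can only land on steps ~0.47 apart (0.0, +/-0.47, +/-0.94, ...),
# so most of their precision is lost, while the outlier itself is represented well.
print(recon)
```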

The SmoothQuant Solution

Key Methodologies