[Doc] int4 w4a16 example (#12585)

Based on a request by @mgoin, and together with @kylesayrs, we have added an
example doc for int4 w4a16 quantization. It follows the pre-existing int8 w8a8
quantization example and the example available in
[`llm-compressor`](https://github.com/vllm-project/llm-compressor/blob/main/examples/quantization_w4a16/llama3_example.py).
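For context, "w4a16" means 4-bit integer weights with 16-bit activations. Below is a minimal NumPy sketch of symmetric, group-wise int4 weight quantization to illustrate the idea; it is not the `llm-compressor` implementation (which uses a GPTQ-based flow), and the helper names `quantize_int4_groupwise`/`dequantize` are hypothetical:

```python
import numpy as np

def quantize_int4_groupwise(w, group_size=128):
    """Symmetric per-group int4 quantization: codes lie in [-8, 7]."""
    out_shape = w.shape
    groups = w.reshape(-1, group_size)
    # One fp scale per group, chosen so the largest magnitude maps to +/-7.
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q.reshape(out_shape), scales

def dequantize(q, scales, group_size=128):
    """Recover fp16 weights from int4 codes and per-group scales."""
    out_shape = q.shape
    groups = q.reshape(-1, group_size)
    return (groups * scales).reshape(out_shape).astype(np.float16)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 128)).astype(np.float32)
q, s = quantize_int4_groupwise(w)
w_hat = dequantize(q, s)
```

At inference time the 4-bit codes are stored and dequantized on the fly to 16-bit for the matmul, which is what keeps activations at 16 bits.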

FIX #n/a (no issue created)

@kylesayrs and I have discussed a couple additional improvements for the
quantization docs. We will revisit at a later date, possibly including:
- A section on "choosing the correct quantization scheme/compression
technique"
- Additional vision or audio calibration datasets

---------

Signed-off-by: Brian Dellabetta <bdellabe@redhat.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Author: Brian Dellabetta
Date: 2025-01-31 17:38:48 -06:00
Committed by: GitHub
Parent: 60808bd4c7
Commit: 44bbca78d7
3 changed files with 169 additions and 2 deletions

@@ -12,6 +12,7 @@ supported_hardware
 auto_awq
 bnb
 gguf
+int4
 int8
 fp8
 quantized_kvcache