[Doc] Installed version of llmcompressor for int8/fp8 quantization (#11103)

Signed-off-by: Guangda Liu <bingps@users.noreply.github.com>
Co-authored-by: Guangda Liu <bingps@users.noreply.github.com>
Author: bingps
Date: 2024-12-11 23:43:24 +08:00
Committed by: GitHub
parent b2f775456e
commit fd22220687
2 changed files with 3 additions and 3 deletions

@@ -45,7 +45,7 @@ To produce performant FP8 quantized models with vLLM, you'll need to install the
 .. code-block:: console

-    $ pip install llmcompressor==0.1.0
+    $ pip install llmcompressor

 Quantization Process
 --------------------