[Doc] Installed version of llmcompressor for int8/fp8 quantization (#11103)
Signed-off-by: Guangda Liu <bingps@users.noreply.github.com>
Co-authored-by: Guangda Liu <bingps@users.noreply.github.com>
@@ -45,7 +45,7 @@ To produce performant FP8 quantized models with vLLM, you'll need to install the
 
 .. code-block:: console
 
-    $ pip install llmcompressor==0.1.0
+    $ pip install llmcompressor
 
 Quantization Process
 --------------------