From dff680001dbe4e9ab1b1defdc2a5d17561122931 Mon Sep 17 00:00:00 2001
From: niu_he
Date: Thu, 12 Jun 2025 17:24:45 +0800
Subject: [PATCH] Fix typo (#19525)

Signed-off-by: 2niuhe
---
 examples/offline_inference/basic/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/offline_inference/basic/README.md b/examples/offline_inference/basic/README.md
index 5cb0177b3..0a2bd6e2b 100644
--- a/examples/offline_inference/basic/README.md
+++ b/examples/offline_inference/basic/README.md
@@ -70,7 +70,7 @@ Try one yourself by passing one of the following models to the `--model` argumen
 
 vLLM supports models that are quantized using GGUF.
 
-Try one yourself by downloading a GUFF quantised model and using the following arguments:
+Try one yourself by downloading a quantized GGUF model and using the following arguments:
 
 ```python
 from huggingface_hub import hf_hub_download