[docs] standardize Hugging Face env var to HF_TOKEN (deprecates HUGGING_FACE_HUB_TOKEN) (#27020)

Signed-off-by: Kay Yan <kay.yan@daocloud.io>
Author: Kay Yan
Date: 2025-10-16 22:31:02 +08:00
Committed by: GitHub
Parent: 4a510ab487
Commit: 02d709a6f1
3 changed files with 8 additions and 8 deletions


@@ -10,7 +10,7 @@ The image can be used to run OpenAI compatible server and is available on Docker
 ```bash
 docker run --runtime nvidia --gpus all \
     -v ~/.cache/huggingface:/root/.cache/huggingface \
-    --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
+    --env "HF_TOKEN=$HF_TOKEN" \
     -p 8000:8000 \
     --ipc=host \
     vllm/vllm-openai:latest \
@@ -22,7 +22,7 @@ This image can also be used with other container engines such as [Podman](https:
 ```bash
 podman run --device nvidia.com/gpu=all \
     -v ~/.cache/huggingface:/root/.cache/huggingface \
-    --env "HUGGING_FACE_HUB_TOKEN=$HF_TOKEN" \
+    --env "HF_TOKEN=$HF_TOKEN" \
     -p 8000:8000 \
     --ipc=host \
     docker.io/vllm/vllm-openai:latest \
@@ -128,7 +128,7 @@ To run vLLM with the custom-built Docker image:
 docker run --runtime nvidia --gpus all \
     -v ~/.cache/huggingface:/root/.cache/huggingface \
     -p 8000:8000 \
-    --env "HUGGING_FACE_HUB_TOKEN=<secret>" \
+    --env "HF_TOKEN=<secret>" \
     vllm/vllm-openai <args...>
 ```
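For wrapper scripts written before this change, a minimal transition sketch — assuming the deprecated `HUGGING_FACE_HUB_TOKEN` is still honored as a fallback while the standardized `HF_TOKEN` takes precedence (both variable values below are placeholders):

```shell
# Sketch: prefer the standardized HF_TOKEN, fall back to the deprecated
# HUGGING_FACE_HUB_TOKEN (assumption: the old name still works during
# the deprecation window).
export HUGGING_FACE_HUB_TOKEN="legacy-token"
export HF_TOKEN="new-token"

# Resolve the token the way a fallback-aware launch script might:
TOKEN="${HF_TOKEN:-$HUGGING_FACE_HUB_TOKEN}"
echo "resolved token: $TOKEN"
```

The resolved value can then be passed to the container with `--env "HF_TOKEN=$TOKEN"`, matching the updated docs above.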