[Doc] Fix some MkDocs snippets used in the installation docs (#20572)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Author: Harry Mellor
Date: 2025-07-07 15:44:34 +01:00
Committed by: GitHub
Parent: b8a498c9b2
Commit: 1ad69e8375
8 changed files with 10 additions and 26 deletions


@@ -54,9 +54,6 @@ If the build has error like the following snippet where standard C++ headers can
```
# --8<-- [end:build-wheel-from-source]
# --8<-- [start:set-up-using-docker]
# --8<-- [end:set-up-using-docker]
# --8<-- [start:pre-built-images]
# --8<-- [end:pre-built-images]
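
For context, the `# --8<-- [start:…]` / `# --8<-- [end:…]` lines in these hunks are MkDocs snippet section markers from the `pymdownx.snippets` extension: a start/end pair names a region of an include file, and another page embeds just that region by referencing the section name. A minimal sketch, assuming the standard `pymdownx.snippets` section syntax and hypothetical file names:

```
<!-- device.inc.md (hypothetical include file): start/end markers name a section -->
# --8<-- [start:pre-built-images]
See the registry link for pre-built CPU images.
# --8<-- [end:pre-built-images]

<!-- installation.md (hypothetical consuming page): embed only that section -->
--8<-- "device.inc.md:pre-built-images"
```

The path in the embedding line is resolved against the extension's configured `base_path`, so the exact path used in the real docs may differ.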


@@ -28,9 +28,6 @@ ARM CPU backend currently supports Float32, FP16 and BFloat16 datatypes.
Testing has been conducted on AWS Graviton3 instances for compatibility.
# --8<-- [end:build-wheel-from-source]
# --8<-- [start:set-up-using-docker]
# --8<-- [end:set-up-using-docker]
# --8<-- [start:pre-built-images]
# --8<-- [end:pre-built-images]


@@ -56,9 +56,6 @@ Execute the following commands to build and install vLLM from the source.
```
# --8<-- [end:build-wheel-from-source]
# --8<-- [start:set-up-using-docker]
# --8<-- [end:set-up-using-docker]
# --8<-- [start:pre-built-images]
# --8<-- [end:pre-built-images]


@@ -31,9 +31,6 @@ vLLM initially supports basic model inferencing and serving on x86 CPU platform,
- If you want to force-enable AVX512_BF16 for the cross-compilation, set the environment variable `VLLM_CPU_AVX512BF16=1` before building.
# --8<-- [end:build-wheel-from-source]
# --8<-- [start:set-up-using-docker]
# --8<-- [end:set-up-using-docker]
# --8<-- [start:pre-built-images]
See [https://gallery.ecr.aws/q9t5s3a7/vllm-cpu-release-repo](https://gallery.ecr.aws/q9t5s3a7/vllm-cpu-release-repo)