[Doc] Fix some MkDocs snippets used in the installation docs (#20572)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Author: Harry Mellor
Date: 2025-07-07 15:44:34 +01:00
Committed by: GitHub
parent b8a498c9b2
commit 1ad69e8375
8 changed files with 10 additions and 26 deletions


@@ -2,6 +2,9 @@
vLLM supports AMD GPUs with ROCm 6.3.
!!! tip
    [Docker](#set-up-using-docker) is the recommended way to use vLLM on ROCm.

!!! warning
    There are no pre-built wheels for this device, so you must either use the pre-built Docker image or build vLLM from source.
@@ -14,6 +17,8 @@ vLLM supports AMD GPUs with ROCm 6.3.
# --8<-- [end:requirements]
# --8<-- [start:set-up-using-python]
There is no extra information on creating a new Python environment for this device.
# --8<-- [end:set-up-using-python]
# --8<-- [start:pre-built-wheels]
@@ -123,9 +128,7 @@ Currently, there are no pre-built ROCm wheels.
- For MI300x (gfx942) users, to achieve optimal performance, please refer to the [MI300x tuning guide](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/index.html) for performance optimization and tuning tips at the system and workflow level.
  For vLLM specifically, please refer to [vLLM performance optimization](https://rocm.docs.amd.com/en/latest/how-to/tuning-guides/mi300x/workload.html#vllm-performance-optimization).
## Set up using Docker (Recommended)
# --8<-- [end:set-up-using-docker]
# --8<-- [end:build-wheel-from-source]
# --8<-- [start:pre-built-images]
The [AMD Infinity hub for vLLM](https://hub.docker.com/r/rocm/vllm/tags) offers a prebuilt, optimized
@@ -227,4 +230,3 @@ Where the `<path/to/model>` is the location where the model is stored, for examp
See [feature-x-hardware][feature-x-hardware] compatibility matrix for feature support information.
# --8<-- [end:supported-features]
# --8<-- [end:extra-information]
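The `# --8<-- [start:...]` / `# --8<-- [end:...]` comments touched by this diff are section markers from the PyMdown Extensions "snippets" extension, which the MkDocs installation docs use to stitch per-device fragments into one page. A minimal sketch of how an including page pulls in one named section (the file path here is illustrative, not taken from this commit):

```markdown
<!-- In the aggregating installation page; the path below is a hypothetical example -->
--8<-- "rocm.inc.md:requirements"
```

Only the text between the matching `[start:requirements]` and `[end:requirements]` markers in the included file is rendered, which is why an unbalanced or misplaced marker breaks the assembled page.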