[DOC] [ROCm] Update docker deployment doc (#33971)
Signed-off-by: vllmellm <vllm.ellm@embeddedllm.com> Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com> Co-authored-by: TJian <tunjian.tan@embeddedllm.com> Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@@ -1,3 +1,7 @@
---
toc_depth: 3
---

# GPU

vLLM is a Python library that supports the following GPU variants. Select your GPU type to see vendor specific instructions:
@@ -84,6 +88,9 @@ vLLM is a Python library that supports the following GPU variants. Select your G

### Pre-built images

<!-- markdownlint-disable MD025 -->
# --8<-- [start:pre-built-images]

=== "NVIDIA CUDA"

    --8<-- "docs/getting_started/installation/gpu.cuda.inc.md:pre-built-images"
@@ -96,7 +103,15 @@ vLLM is a Python library that supports the following GPU variants. Select your G

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:pre-built-images"

# --8<-- [end:pre-built-images]
<!-- markdownlint-enable MD025 -->
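The per-vendor includes above resolve to docker pull/run instructions. As a minimal sketch of using a pre-built image (the `vllm/vllm-openai` image name is the one published on Docker Hub; the model name, cache mount, and port are illustrative and may differ from what the included docs recommend):

```shell
# Run the OpenAI-compatible server from the pre-built CUDA image.
# The HF cache mount avoids re-downloading model weights on restart.
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    -p 8000:8000 \
    vllm/vllm-openai:latest \
    --model mistralai/Mistral-7B-v0.1
```

ROCm and XPU deployments use different images and device flags; the vendor-specific tabs above are authoritative for those.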

<!-- markdownlint-disable MD001 -->
### Build image from source
<!-- markdownlint-enable MD001 -->

<!-- markdownlint-disable MD025 -->
# --8<-- [start:build-image-from-source]

=== "NVIDIA CUDA"

@@ -110,6 +125,9 @@ vLLM is a Python library that supports the following GPU variants. Select your G

    --8<-- "docs/getting_started/installation/gpu.xpu.inc.md:build-image-from-source"

# --8<-- [end:build-image-from-source]
<!-- markdownlint-enable MD025 -->
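For orientation, a minimal from-source build sketch, assuming a checkout of the vLLM repository and the upstream Dockerfile layout (the `vllm-openai` build target and the `docker/Dockerfile.rocm` path are taken from the upstream repo and may change; the vendor tabs above are authoritative):

```shell
# CUDA: build the serving image from the repo root with BuildKit enabled.
DOCKER_BUILDKIT=1 docker build . \
    --target vllm-openai \
    --tag vllm/vllm-openai

# ROCm: build against the ROCm-specific Dockerfile instead.
docker build -f docker/Dockerfile.rocm -t vllm-rocm .
```

Source builds compile kernels for the detected GPU architectures, so they take substantially longer than pulling a pre-built image.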

## Supported features

=== "NVIDIA CUDA"