From 513949f95f3d0dd1c4d5843b6b8291b2531ad31c Mon Sep 17 00:00:00 2001
From: Kunshang Ji
Date: Thu, 12 Mar 2026 09:46:02 +0800
Subject: [PATCH] [XPU][Doc] Remove manual OneAPI install step, now handled by
 torch-xpu (#36831)

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
---
 docs/getting_started/installation/gpu.xpu.inc.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/getting_started/installation/gpu.xpu.inc.md b/docs/getting_started/installation/gpu.xpu.inc.md
index ed7acb48b..9e71860d6 100644
--- a/docs/getting_started/installation/gpu.xpu.inc.md
+++ b/docs/getting_started/installation/gpu.xpu.inc.md
@@ -7,7 +7,6 @@ vLLM initially supports basic model inference and serving on Intel GPU platform.
 --8<-- [start:requirements]
 
 - Supported Hardware: Intel Data Center GPU, Intel ARC GPU
-- OneAPI requirements: oneAPI 2025.3
 - Dependency: [vllm-xpu-kernels](https://github.com/vllm-project/vllm-xpu-kernels): a package provide all necessary vllm custom kernel when running vLLM on Intel GPU platform,
 - Python: 3.12
 !!! warning
@@ -26,8 +25,8 @@ Currently, there are no pre-built XPU wheels.
 --8<-- [end:pre-built-wheels]
 
 --8<-- [start:build-wheel-from-source]
-- First, install required [driver](https://dgpu-docs.intel.com/driver/installation.html#installing-gpu-drivers) and [Intel OneAPI](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) 2025.3 or later.
-- Second, install Python packages for vLLM XPU backend building:
+- First, install required [driver](https://dgpu-docs.intel.com/driver/installation.html#installing-gpu-drivers).
+- Second, install Python packages for vLLM XPU backend building (Intel OneAPI dependencies are installed automatically as part of `torch-xpu`, see [PyTorch XPU get started](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html)):
 
 ```bash
 git clone https://github.com/vllm-project/vllm.git