Correct PowerPC to modern IBM Power (#15635)

Signed-off-by: Christy Norman <christy@linux.vnet.ibm.com>
Author: cnorman
Date: 2025-03-27 17:04:32 -05:00
Committed by: GitHub
Parent: 4098b72210
Commit: 32d669275b

@@ -43,7 +43,7 @@ vLLM is flexible and easy to use with:
 - Tensor parallelism and pipeline parallelism support for distributed inference
 - Streaming outputs
 - OpenAI-compatible API server
-- Support NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, Gaudi® accelerators and GPUs, PowerPC CPUs, TPU, and AWS Trainium and Inferentia Accelerators.
+- Support NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs, Gaudi® accelerators and GPUs, IBM Power CPUs, TPU, and AWS Trainium and Inferentia Accelerators.
 - Prefix caching support
 - Multi-lora support