# VLLM images for GH200

Hosted here

Versions locked (DO NOT MODIFY WITHOUT MIKE'S APPROVAL):

- vLLM: v0.18.2rc0
- flashinfer: v0.6.7
- flash-attention: `hopper` branch
- lmcache: `dev` branch
- infinistore: `main` branch
- triton: 3.6.0 (PyPI wheel)
- Base image: `nvcr.io/nvidia/pytorch:26.03-py3` (PyTorch 2.11.0a0, CUDA 13.2.0)
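The pins above could be expressed as build arguments near the top of the Dockerfile, roughly like this. This is a sketch only: the `ARG` names are illustrative and may not match the actual Dockerfile in this repo.

```dockerfile
# Base image is pinned; do not bump without approval
FROM nvcr.io/nvidia/pytorch:26.03-py3

# Version pins (illustrative ARG names, one per locked component)
ARG VLLM_REF=v0.18.2rc0
ARG FLASHINFER_REF=v0.6.7
ARG FLASH_ATTN_BRANCH=hopper
ARG LMCACHE_BRANCH=dev
ARG INFINISTORE_BRANCH=main
ARG TRITON_VERSION=3.6.0

# Triton is installed from the PyPI wheel rather than built from source
RUN pip install triton==${TRITON_VERSION}
```

Centralizing pins as `ARG`s keeps the approval-gated versions in one place, so a bump is a one-line diff.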
Build and publish:

```shell
# Authenticate with Docker Hub before pushing
docker login

# Build for arm64 (GH200 is aarch64); tee the log for later debugging
docker build --memory=450g --platform linux/arm64 -t rajesh550/gh200-vllm:0.11.1rc2 . 2>&1 | tee build.log

# Alternative: buildx with a larger memory limit
# docker buildx build --platform linux/arm64 --memory=600g -t rajesh550/gh200-vllm:0.9.0.1 .

docker push rajesh550/gh200-vllm:0.11.1rc2
```
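After the push, a quick smoke test on a GH200 node can confirm the image starts and vLLM imports. The invocation below is a sketch, not a tested command from this repo: the `--gpus`/`--ipc` flags are standard Docker options, but the exact runtime flags your cluster needs may differ.

```shell
# Pull and run the pushed image with GPU access (illustrative flags)
docker run --rm --gpus all --ipc=host \
  rajesh550/gh200-vllm:0.11.1rc2 \
  python3 -c "import vllm; print(vllm.__version__)"
```

If the import succeeds and the printed version matches the locked vLLM pin, the image is good to hand off.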