biondizzle / vllm
Files: vllm / .buildkite at b12f4a983077f0f085e3734d4d5b0c25f2576cec
Latest commit 0d8a7d8a26 (Yi Liu): [Compressed Tensors] Add XPU wNa16 support (#29484) ...
Signed-off-by: yiliu30 <yi4.liu@intel.com>
2025-12-05 22:02:09 +08:00
lm-eval-harness          [CI/Build][AMD] Add Llama4 Maverick FP8 to AMD CI (#28695)                                     2025-12-04 16:07:20 -08:00
performance-benchmarks   [vLLM Benchmark Suite] Add default parameters section and update CPU benchmark cases (#29381)  2025-12-02 09:00:23 +00:00
scripts                  [Compressed Tensors] Add XPU wNa16 support (#29484)                                            2025-12-05 22:02:09 +08:00
check-wheel-size.py      [CI] Raise VLLM_MAX_SIZE_MB to 500 due to failing Build wheel - CUDA 12.9 (#26722)             2025-10-14 10:52:05 -07:00
release-pipeline.yaml    [CI] Renovation of nightly wheel build & generation (take 2) (#29838)                          2025-12-01 22:17:10 -08:00
test-amd.yaml            [CI/Build][AMD] Add Llama4 Maverick FP8 to AMD CI (#28695)                                     2025-12-04 16:07:20 -08:00
test-pipeline.yaml       [CI/Build] Update batch invariant test trigger (#30080)                                        2025-12-05 00:42:37 +00:00