vllm/vllm/platforms at commit ff6c1da4e6ab8d020b41c23166c7b482c047c81a

Latest commit: monajafi-amd 97ef11dd34 [ROCm][ViT] Enable Flash Attention Triton backend on RDNA3/RDNA4 (#32944)
Signed-off-by: mohammad najafi <mohammad.najafi@amd.com>
2026-01-24 10:03:07 +08:00
File          | Last commit                                                                        | Date
__init__.py   | [TPU] Rename path to tpu platform (#28452)                                         | 2025-11-11 19:16:47 +00:00
cpu.py        | [Bugfix][Attention] Explicitly report support for kv_cache_dtype bfloat16 (#32795) | 2026-01-22 19:05:18 +00:00
cuda.py       | fix: Add glm4_moe_lite to MLA detection (#32614)                                   | 2026-01-23 12:38:57 -08:00
interface.py  | [Misc] Make mem utils can be reused by other platforms (#32322)                    | 2026-01-14 03:46:01 -08:00
rocm.py       | [ROCm][ViT] Enable Flash Attention Triton backend on RDNA3/RDNA4 (#32944)          | 2026-01-24 10:03:07 +08:00
tpu.py        | [Refactor][TPU] Remove torch_xla path and use tpu-inference (#30808)               | 2026-01-07 16:07:16 +08:00
xpu.py        | [1/N][Attention] Restructure attention: move files (#31916)                        | 2026-01-09 13:10:24 -08:00