biondizzle/vllm
Path: vllm/vllm/platforms
Commit: 04a9e064db4dcf57519f1333796ba7face46248b
Latest commit 1a1fc3bbc0 (Matthew Bonanni, 2026-01-19 18:41:34 -05:00):
[Attention][MLA] Make FLASHINFER_MLA the default MLA backend on Blackwell, and TRTLLM the default prefill (#32615)
Signed-off-by: Matthew Bonanni <mbonanni@redhat.com>
Co-authored-by: Wentao Ye <44945378+yewentao256@users.noreply.github.com>
__init__.py: [TPU] Rename path to tpu platform (#28452) (2025-11-11 19:16:47 +00:00)
cpu.py: [1/N][Attention] Restructure attention: move files (#31916) (2026-01-09 13:10:24 -08:00)
cuda.py: [Attention][MLA] Make FLASHINFER_MLA the default MLA backend on Blackwell, and TRTLLM the default prefill (#32615) (2026-01-19 18:41:34 -05:00)
interface.py: [Misc] Make mem utils can be reused by other platforms (#32322) (2026-01-14 03:46:01 -08:00)
rocm.py: AMD CI Test - unskip moe_sum test and moe_align_block_size tests (#32039) (2026-01-13 23:25:10 -08:00)
tpu.py: [Refactor][TPU] Remove torch_xla path and use tpu-inference (#30808) (2026-01-07 16:07:16 +08:00)
xpu.py: [1/N][Attention] Restructure attention: move files (#31916) (2026-01-09 13:10:24 -08:00)
Powered by Gitea Version: 1.25.2