biondizzle/vllm
Commit: 61e20828da1639c05a7bb7d1592c4834e10b33b7
Path: vllm/vllm/v1/attention/backends
Latest commit: e8cc53af5e [Misc] Log the reason for falling back to FlexAttention (#20699) by Cyrus Leung, 2025-07-14 04:16:51 -07:00
Signed-off-by: DarkLight1337 <tlleungac@connect.ust.hk>
Name               Last commit                                                                            Date
mla                [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
__init__.py        [V1] Implement vLLM V1 [1/N] (#9289)                                                  2024-10-22 01:24:07 -07:00
cpu_attn.py        [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
flash_attn.py      [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
flashinfer.py      [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
flex_attention.py  [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
mamba_attn.py      [V1] Enable Mamba2 layers other than MambaMixer2 in the v1 engine (#20660)            2025-07-11 05:53:31 +00:00
pallas.py          [TPU] Temporary fix vmem oom for long model len by reducing page size (#20278)        2025-07-08 05:16:16 +00:00
rocm_aiter_fa.py   [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
triton_attn.py     [Misc] Log the reason for falling back to FlexAttention (#20699)                      2025-07-14 04:16:51 -07:00
utils.py           [Core] Add Flashinfer TRTLLM Backend for Flashinfer decode path (SM100). (#19825)     2025-07-11 09:23:23 +00:00