biondizzle/vllm
vllm/v1/attention at commit 130d6c9514856cb5a152329f0382d60ff6e8d97e
Latest commit: 130d6c9514 by Pleaplusone, 2026-01-15 15:29:53 +00:00
[ROCm][Perf] Enable shuffle kv cache layout and assembly paged attention kernel for AiterFlashAttentionBackend (#29887)
Signed-off-by: ganyi <ygan@amd.com>
Name        | Last commit                                                                                                              | Date
backends    | [ROCm][Perf] Enable shuffle kv cache layout and assembly paged attention kernel for AiterFlashAttentionBackend (#29887) | 2026-01-15 15:29:53 +00:00
ops         | [Bugfix] Fix missing scale passing for encoder Triton Attention implementation (#32149)                                 | 2026-01-12 11:13:41 +00:00
__init__.py | [V1] Implement vLLM V1 [1/N] (#9289)                                                                                    | 2024-10-22 01:24:07 -07:00
backend.py  | [6/N][Attention] Move utils to more appropriate locations (#32215)                                                      | 2026-01-13 05:38:52 -08:00
selector.py | [1/N][Attention] Restructure attention: move files (#31916)                                                             | 2026-01-09 13:10:24 -08:00