biondizzle / vllm
vllm / vllm / v1 / attention at commit a1257fd1ea93da6e27b31e4739ac2707781d8ba7
Latest commit: a1257fd1ea by grimulkan
[Kernel] Add FP8 KV cache support to Triton MLA decode attention (#34597)
Signed-off-by: grimulkan <grimulkan@gmail.com>
2026-03-12 08:32:34 -07:00
..
backends       [Kernel] Add FP8 KV cache support to Triton MLA decode attention (#34597)   2026-03-12 08:32:34 -07:00
ops            [Kernel] Add FP8 KV cache support to Triton MLA decode attention (#34597)   2026-03-12 08:32:34 -07:00
__init__.py    [V1] Implement vLLM V1 [1/N] (#9289)                                        2024-10-22 01:24:07 -07:00
backend.py     [BUGFIX][Mamba][Qwen3.5] Zero freed SSM cache blocks on GPU (#35219)        2026-03-10 03:32:20 -07:00
selector.py    Reapply [Attention] Refactor check_and_update_config (#35122)               2026-03-09 07:17:14 -07:00