biondizzle / vllm
vllm / vllm / attention at commit 73cfb3c5eeb8b00a6e222751a28fd89a5f6229dc
Latest commit b42566f440 by Wentao Ye: [Bug] Fix is_flashmla_supported Check Error (#24774)
Signed-off-by: yewentao256 <zhyanwentao@126.com>
2025-09-15 20:10:55 -06:00
Name        | Last commit                                                                                       | Date
backends    | [Bug] Fix is_flashmla_supported Check Error (#24774)                                             | 2025-09-15 20:10:55 -06:00
layers      | [Bugfix] Fix incorrect import of CacheConfig (#24631)                                            | 2025-09-11 01:48:25 -07:00
ops         | [torch.compile][ROCm][V1] Enable attention output FP8 fusion for V1 attention backends (#19767) | 2025-09-10 13:59:55 -07:00
utils       | [Attention] FlashAttn MLA (#14258)                                                               | 2025-09-04 02:47:59 -07:00
__init__.py | Remove duplicate entry in vllm.attention.__all__ (#23296)                                        | 2025-08-20 17:14:59 -07:00
layer.py    | [USAGE] Improve error handling for weight initialization in Unquantized… (#20321)                | 2025-09-15 16:45:49 +00:00
selector.py | [gpt-oss] Enable gpt-oss on ampere (#22714)                                                      | 2025-08-12 03:21:44 -07:00
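The __init__.py entry above references #23296, which removed a duplicate name from vllm.attention.__all__. A minimal sketch of how such a duplicate could be detected, assuming only that a local vLLM installation exposes vllm.attention.__all__ (as that PR implies):

from collections import Counter

import vllm.attention

# Count each exported name; any count above 1 is a duplicate __all__ entry,
# the kind of issue cleaned up in #23296.
counts = Counter(vllm.attention.__all__)
duplicates = [name for name, n in counts.items() if n > 1]
print("duplicate __all__ entries:", duplicates or "none")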