Repository: biondizzle/vllm
Path: vllm/model_executor/layers/mamba
Tree: dd6ac1c2bb3d29f8ba612a2f66f350a2c55c7e8b
Latest commit e0c910bb89 by Thomas Parnell:
[Hybrid] [Kernel] Fix chunk scan kernel when BLOCK_SIZE_DSTATE > 128 (#28295)
Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
2025-11-14 22:55:42 +00:00
| Name | Last commit | Date |
| --- | --- | --- |
| `ops/` | [Hybrid] [Kernel] Fix chunk scan kernel when BLOCK_SIZE_DSTATE > 128 (#28295) | 2025-11-14 22:55:42 +00:00 |
| `__init__.py` | [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (#7651) | 2024-08-28 15:06:52 -07:00 |
| `abstract.py` | [Misc] Refactor `get_kv_cache_spec` into `AttentionLayerBase` (#26587) | 2025-10-18 13:51:21 +00:00 |
| `linear_attn.py` | Fix MiniMax-M2 rmsnorm precision and remove useless code (#27627) | 2025-10-29 21:01:05 +08:00 |
| `mamba_mixer2.py` | [Chore] Clean up pytorch helper functions in `vllm.utils` (#26908) | 2025-10-18 09:48:22 -07:00 |
| `mamba_mixer.py` | [V1] [Hybrid] Mamba1 Automatic Prefix Caching (#26377) | 2025-11-02 04:16:23 -08:00 |
| `mamba_utils.py` | [Model] Introduce Kimi Linear to vLLM (#27809) | 2025-10-30 21:02:27 +08:00 |
| `short_conv.py` | [Chore] Clean up pytorch helper functions in `vllm.utils` (#26908) | 2025-10-18 09:48:22 -07:00 |