biondizzle/vllm
vllm/model_executor/layers/mamba (at commit 8ac3a4148796648d206a46144aa0dacea8977d55)
Latest commit d44e9df7d4 by Shanshan Shen: [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)
Signed-off-by: shen-shanshan <467638484@qq.com>
2025-11-19 16:24:55 +00:00
Name             Last commit                                                                                               Date
ops              [Hybrid] [Kernel] Fix chunk scan kernel when BLOCK_SIZE_DSTATE > 128 (#28295)                            2025-11-14 22:55:42 +00:00
__init__.py      [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (#7651)                               2024-08-28 15:06:52 -07:00
abstract.py      [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
linear_attn.py   [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
mamba_mixer2.py  [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
mamba_mixer.py   [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00
mamba_utils.py   [Model] Introduce Kimi Linear to vLLM (#27809)                                                            2025-10-30 21:02:27 +08:00
short_conv.py    [Model][Mamba] Add selector for mamba attention backend and make it pluggable for other device (#26487)  2025-11-19 16:24:55 +00:00