biondizzle/vllm
vllm/vllm/model_executor/layers/mamba at commit 030fc4491465d361e4bed626d76c184f8a7d8a07

Latest commit: 34916ae37f [Mamba] - Consolidate Mambas Attention Logic (#28133), Asaf Joseph Gardin, 2025-12-23 21:57:00 +01:00
Name              Last commit                                                                  Date
ops               Add SpecDec support to selective_state_update (#29488)                       2025-12-08 16:45:18 -05:00
__init__.py       [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (#7651)   2024-08-28 15:06:52 -07:00
abstract.py       [Attention] Update attention imports (#29540)                                2025-11-27 11:19:09 -05:00
linear_attn.py    [Attention] Remove imports from vllm/attention/__init__.py (#29342)          2025-11-26 10:53:15 -07:00
mamba_mixer2.py   [V0 deprecation] Remove more V0 references (#29088)                          2025-11-21 11:56:59 +00:00
mamba_mixer.py    [Attention][CUDAGraph] Remove CG padding from attention backends (#29352)    2025-12-02 13:48:08 -05:00
mamba_utils.py    [Model] Introduce Kimi Linear to vLLM (#27809)                               2025-10-30 21:02:27 +08:00
short_conv.py     [Mamba] - Consolidate Mambas Attention Logic (#28133)                        2025-12-23 21:57:00 +01:00