biondizzle/vllm
Directory: vllm/vllm/model_executor/layers/mamba at commit 00b31a36a2d0de6d197a473280b2304d482714af
Latest commit: 00b31a36a2 [V1] [Hybrid] Mamba1 Automatic Prefix Caching (#26377) by Asaf Joseph Gardin
Signed-off-by: asafg <39553475+Josephasafg@users.noreply.github.com>
2025-11-02 04:16:23 -08:00
Name             Last commit message                                                          Last commit date
ops              [V1] [Hybrid] Mamba1 Automatic Prefix Caching (#26377)                       2025-11-02 04:16:23 -08:00
__init__.py      [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (#7651)   2024-08-28 15:06:52 -07:00
abstract.py      [Misc] Refactor get_kv_cache_spec into AttentionLayerBase (#26587)           2025-10-18 13:51:21 +00:00
linear_attn.py   Fix MiniMax-M2 rmsnorm precision and remove useless code (#27627)            2025-10-29 21:01:05 +08:00
mamba_mixer2.py  [Chore] Clean up pytorch helper functions in vllm.utils (#26908)             2025-10-18 09:48:22 -07:00
mamba_mixer.py   [V1] [Hybrid] Mamba1 Automatic Prefix Caching (#26377)                       2025-11-02 04:16:23 -08:00
mamba_utils.py   [Model] Introduce Kimi Linear to vLLM (#27809)                               2025-10-30 21:02:27 +08:00
short_conv.py    [Chore] Clean up pytorch helper functions in vllm.utils (#26908)             2025-10-18 09:48:22 -07:00