biondizzle/vllm
vllm/vllm/model_executor/layers/mamba (at commit 7fcb705b80176fbdb92dd94f27f566c876d8b8a9)

Latest commit: ce9b3cd3e9 by whx: [PluggableLayer][3/N] Apply PluggableLayer to mamba layers. (#33660)
Signed-off-by: whx-sjtu <2952154980@qq.com>
2026-02-07 05:26:05 -08:00
Name             Last commit                                                                            Date
ops/             [Performance] Tune Mamba selective scan kernel for B200 (#32873)                       2026-01-26 05:56:54 -08:00
__init__.py      [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (#7651)             2024-08-28 15:06:52 -07:00
abstract.py      [V1][Hybrid] Mamba Prefix Caching with align mode (#30877)                             2026-01-23 09:56:48 -08:00
linear_attn.py   [1/N][Attention] Restructure attention: move files (#31916)                            2026-01-09 13:10:24 -08:00
mamba_mixer2.py  [PluggableLayer][3/N] Apply PluggableLayer to mamba layers. (#33660)                   2026-02-07 05:26:05 -08:00
mamba_mixer.py   [PluggableLayer][3/N] Apply PluggableLayer to mamba layers. (#33660)                   2026-02-07 05:26:05 -08:00
mamba_utils.py   [PERF] Change GDN Attention State Layout from [N, HV, K, V] to [N, HV, V, K] (#33291)  2026-02-04 11:20:52 +00:00
short_conv.py    [1/N][Attention] Restructure attention: move files (#31916)                            2026-01-09 13:10:24 -08:00