biondizzle/vllm
vllm/model_executor/layers/mamba at commit 092ace9e3a21f90c9f4aba8defe69ecff4bab628
History

Latest commit: 894843eb25 — replace `with torch.cuda.device` with `with torch.accelerator.device_index` (#36144)
Signed-off-by: Yan Ma <yan.ma@intel.com>
2026-03-11 23:12:57 -07:00
ops/             replace `with torch.cuda.device` with `with torch.accelerator.device_index` (#36144)   2026-03-11 23:12:57 -07:00
__init__.py      [Kernel/Model] Migrate mamba_ssm and causal_conv1d kernels to vLLM (#7651)              2024-08-28 15:06:52 -07:00
abstract.py      [Model][Spec Decode] Nemotron-H MTP and Mamba Speculative Decoding Support (#33726)     2026-02-24 09:49:56 -08:00
linear_attn.py   [Model] Ring 2.5 (#35102)                                                              2026-02-26 02:17:11 -08:00
mamba_mixer2.py  [Model][Spec Decode] Nemotron-H MTP and Mamba Speculative Decoding Support (#33726)     2026-02-24 09:49:56 -08:00
mamba_mixer.py   [Mamba1] - Kernel Level Chunk Alignment for Prefix Caching (#34798)                     2026-03-01 20:40:23 +08:00
mamba_utils.py   [Deprecation] Deprecate code in 0.17 as scheduled (#35441)                              2026-02-28 17:32:37 +00:00
short_conv.py    [Model][Spec Decode] Nemotron-H MTP and Mamba Speculative Decoding Support (#33726)     2026-02-24 09:49:56 -08:00