biondizzle/vllm
vllm/compilation/passes at commit 5e1a373d2e62c04ba464c88303600839d6973365

Latest commit: 7d6abdd022 by elvischenv
[Fix] Use torch.empty for output in attention+quant fusion (#31785)
Signed-off-by: elvischenv <219235043+elvischenv@users.noreply.github.com>
2026-03-10 21:26:14 -07:00
fusion/                 [Fix] Use torch.empty for output in attention+quant fusion (#31785)                        2026-03-10 21:26:14 -07:00
utility/                [ROCm]: fix aiter rope functionalization (#35533)                                          2026-02-27 22:42:30 +00:00
__init__.py             [torch.compile] Reorganize vllm/compilation and tests/compile (0/N for vLLM IR) (#33731)   2026-02-06 04:19:49 -08:00
fx_utils.py             [torch.compile] Reorganize vllm/compilation and tests/compile (0/N for vLLM IR) (#33731)   2026-02-06 04:19:49 -08:00
inductor_pass.py        [torch.compile] Reorganize vllm/compilation and tests/compile (0/N for vLLM IR) (#33731)   2026-02-06 04:19:49 -08:00
pass_manager.py         [ROCm] AITER fused RoPE+KVCache (#33443)                                                   2026-02-23 19:06:00 -08:00
vllm_inductor_pass.py   [torch.compile] Reorganize vllm/compilation and tests/compile (0/N for vLLM IR) (#33731)   2026-02-06 04:19:49 -08:00