Repository: biondizzle/vllm
Path: vllm/platforms
Tree: 4dff91c93da668f4cca3f80aa3a94622d21c34fc
Latest commit: 7caec10e7b by Kunshang Ji, 2025-08-16 05:16:34 +00:00
[XPU] avoid circular import during XPU init (#23017)
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | [TPU] Support Pathways in vLLM (#21417) | 2025-07-30 10:02:12 -07:00 |
| cpu.py | [gpt-oss] Enable gpt-oss on ampere (#22714) | 2025-08-12 03:21:44 -07:00 |
| cuda.py | [Core] Allow full cudagraph with separate attention routines and orthogonal to compilation, add support for FA2 and FlashInfer (#20059) | 2025-08-15 10:01:39 -04:00 |
| interface.py | [Core] Allow full cudagraph with separate attention routines and orthogonal to compilation, add support for FA2 and FlashInfer (#20059) | 2025-08-15 10:01:39 -04:00 |
| neuron.py | [Refactor] Abstract Platform Interface for Distributed Backend and Add xccl Support for Intel XPU (#19410) | 2025-07-07 04:32:32 +00:00 |
| rocm.py | [Core] Allow full cudagraph with separate attention routines and orthogonal to compilation, add support for FA2 and FlashInfer (#20059) | 2025-08-15 10:01:39 -04:00 |
| tpu.py | [Core] Allow full cudagraph with separate attention routines and orthogonal to compilation, add support for FA2 and FlashInfer (#20059) | 2025-08-15 10:01:39 -04:00 |
| xpu.py | [XPU] avoid circular import during XPU init (#23017) | 2025-08-16 05:16:34 +00:00 |