biondizzle/vllm
docs/serving at commit f1740006e47d580656668ba5a9253a4e4340e198
Latest commit: 061980c36a, [Feature][Frontend] add support for Cohere Embed v2 API (#37074), by Walter Beller-Morales (Signed-off-by: walterbm <walter.beller.morales@gmail.com>), 2026-03-16 19:55:53 -04:00
| Name | Last commit | Date |
| --- | --- | --- |
| integrations | [Frontend] Exclude anthropic billing header to avoid prefix cache miss (#36829) | 2026-03-12 01:20:34 +00:00 |
| context_parallel_deployment.md | [Doc]: fixing multiple typos in diverse files (#33256) | 2026-01-29 16:52:03 +08:00 |
| data_parallel_deployment.md | [Docs] Clarify Expert Parallel behavior for attention and MoE layers (#30615) | 2025-12-13 08:37:59 -09:00 |
| distributed_troubleshooting.md | [Docs] Replace all explicit anchors with real links (#27087) | 2025-10-17 02:22:06 -07:00 |
| expert_parallel_deployment.md | [Kernel] Add FlashInfer MoE A2A Kernel (#36022) | 2026-03-15 23:45:32 -07:00 |
| offline_inference.md | [Docs] Replace all explicit anchors with real links (#27087) | 2025-10-17 02:22:06 -07:00 |
| openai_compatible_server.md | [Feature][Frontend] add support for Cohere Embed v2 API (#37074) | 2026-03-16 19:55:53 -04:00 |
| parallelism_scaling.md | [Dependency] Remove default ray dependency (#36170) | 2026-03-08 20:06:22 -07:00 |