biondizzle/vllm
Commit: 4c16ba617f76b342dd0e62deba1f96ed6cee74fa
Path: vllm/vllm/v1/core/sched

Latest commit: 028599739d [BugFix] scheduler: Fix resuming of preempted requests after async load (#31583)
Author: Or Ozeri
Signed-off-by: Or Ozeri <oro@il.ibm.com>
Date: 2026-01-10 12:39:25 -08:00
File             | Last commit                                                                      | Date
__init__.py      | [V1] Scheduler Refactoring [1/N] - Add Scheduler Interface (#15250)              | 2025-03-20 17:50:43 -07:00
async_scheduler.py | [Perf] Async Scheduling + Speculative Decoding + Structured Outputs (#29821)   | 2026-01-06 18:50:37 +00:00
interface.py     | [Perf] Async Scheduling + Speculative Decoding + Structured Outputs (#29821)     | 2026-01-06 18:50:37 +00:00
output.py        | [Feature] Add iteration level logging and enhance nvtx marker (#31193)           | 2026-01-09 00:13:39 +00:00
request_queue.py | [Bugfix] fix --scheduling-policy=priority & n>1 crashes engine (#29764)          | 2025-12-02 22:42:28 +00:00
scheduler.py     | [BugFix] scheduler: Fix resuming of preempted requests after async load (#31583) | 2026-01-10 12:39:25 -08:00
utils.py         | [Scheduer] Simplify stop checking for pooling models (#30591)                    | 2025-12-13 09:45:26 +00:00