biondizzle/vllm: vllm/v1/core at commit bf6a3d0ff5a69e0a30567f2ad417530c002eaa4e
Latest commit: bf6a3d0ff5 [Misc] Add more scoping for improved trace (#28329), Signed-off-by: Wei Wei <wwei6@meta.com>, 2025-11-10 21:03:21 +00:00
| File | Last commit | Date |
| --- | --- | --- |
| sched/ | [Misc] Add more scoping for improved trace (#28329) | 2025-11-10 21:03:21 +00:00 |
| __init__.py | [V1] Implement vLLM V1 [1/N] (#9289) | 2024-10-22 01:24:07 -07:00 |
| block_pool.py | [BugFix][LoRA] use adapter_id instead of id field of lora_request (#27728) | 2025-11-03 10:08:08 +08:00 |
| encoder_cache_manager.py | [Misc] Simplify max tokens in multimodal registry (#27500) | 2025-10-24 23:56:01 -07:00 |
| kv_cache_coordinator.py | [Core] Reuse empty block lists whenever possible in KVCacheBlocks to mitigate GC costs (#24964) | 2025-10-14 12:58:43 -07:00 |
| kv_cache_manager.py | [Core][Perf] Only invoke save_new_computed_blocks when computed blocks are not empty (#27799) | 2025-10-30 19:47:30 +00:00 |
| kv_cache_utils.py | [Chore]: Extract math and argparse utilities to separate modules (#27188) | 2025-10-26 04:03:32 -07:00 |
| single_type_kv_cache_manager.py | [Core][Hybrid allocator + connector 2/n] Unify `remove_skipped_blocks` by `get_last_useful_token` (#25431) | 2025-11-06 00:12:00 +00:00 |
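The kv_cache_coordinator.py commit above (#24964) describes reusing empty block lists to mitigate garbage-collection costs. As a generic illustration of that pattern only, not vLLM's actual implementation (the `KVCacheBlocks` fields and `lookup_cached_blocks` helper here are hypothetical), a hot path can hand out one shared immutable "empty" object instead of allocating a fresh container on every cache miss:

```python
class KVCacheBlocks:
    """Hypothetical container of KV-cache block IDs; names are illustrative."""

    # Single shared instance reused for every "no blocks" result, so cache
    # misses on a hot path do not allocate (and later GC) a new object.
    _EMPTY: "KVCacheBlocks | None" = None

    def __init__(self, block_ids: tuple):
        # An immutable tuple makes the shared empty instance safe to alias.
        self.block_ids = block_ids

    @classmethod
    def empty(cls) -> "KVCacheBlocks":
        # Lazily create the shared empty instance once, then reuse it.
        if cls._EMPTY is None:
            cls._EMPTY = cls(())
        return cls._EMPTY


def lookup_cached_blocks(cache: dict, key: str) -> KVCacheBlocks:
    # Hot path: a miss returns the shared empty object, never a fresh one.
    hit = cache.get(key)
    return hit if hit is not None else KVCacheBlocks.empty()
```

The design point is that callers must treat the returned object as read-only; mutating the shared empty instance would corrupt every other caller that aliases it.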