Repository: biondizzle/vllm
Path: tests/v1/core at commit 82eb61dd4c4e306bda4f20edab063693396c4e1a
Latest commit: 4716377fbc by rongfu.leng <rongfu.leng@daocloud.io> (2025-04-08 19:12:51 -07:00)
  [Feature] Estimate max-model-len use available KV cache memory (#16168)
File                          Last commit                                                                          Date
test_kv_cache_utils.py        [Feature] Estimate max-model-len use available KV cache memory (#16168)              2025-04-08 19:12:51 -07:00
test_prefix_caching.py        [V1] Implement sliding window attention in kv_cache_manager (#14097)                 2025-04-01 00:33:17 -07:00
test_scheduler_e2e.py         [V1] Support long_prefill_token_threshold in v1 scheduler (#15419)                   2025-03-25 14:22:26 -07:00
test_scheduler.py             [V1] Add disable_chunked_mm_input arg to disable partial mm input prefill (#15837)   2025-04-07 23:24:07 -07:00
test_specialized_manager.py   [V1] Implement sliding window attention in kv_cache_manager (#14097)                 2025-04-01 00:33:17 -07:00