biondizzle/vllm
vllm/vllm/distributed at commit a698e8e7ad4bd06c0197bd79c9c200bef71be189

Latest commit: 744ef30484 by Fadi Arafeh
[CPU Backend] [Perf] Accelerate tensor-parallel/data-parallel inference across NUMA domains on Arm (#32792)
Signed-off-by: Fadi Arafeh <fadi.arafeh@arm.com>
2026-01-22 18:55:23 +00:00
device_communicators/
    [CPU Backend] [Perf] Accelerate tensor-parallel/data-parallel inference across NUMA domains on Arm (#32792)
    2026-01-22 18:55:23 +00:00

ec_transfer/
    [EC Connector] Optimize remote cache check in scheduler (#32585)
    2026-01-22 03:30:59 +00:00

eplb/
    [MoE Refactor] Separate Router into OO Classes (#30623)
    2026-01-18 11:40:49 -05:00

kv_transfer/
    Enable Cross layers KV cache layout at NIXL Connector (#30207)
    2026-01-22 10:12:58 +00:00

__init__.py
    [Misc] Add SPDX-FileCopyrightText (#19100)
    2025-06-03 11:20:17 -07:00

communication_op.py
    Update Optional[x] -> x | None and Union[x, y] to x | y (#26633)
    2025-10-12 09:51:31 -07:00

kv_events.py
    [Prefix Cache] Include lora_name in BlockStored event for deterministic KV-cache reconstruction (#27577)
    2025-12-30 00:17:16 +00:00

parallel_state.py
    [misc] Remove is_torch_equal_or_newer(2.4) cases (#32296)
    2026-01-13 23:22:07 -08:00

utils.py
    [UX] Suppress gloo log spam (#29250)
    2025-11-25 17:19:35 -08:00