biondizzle/vllm
vllm/tests/lora at commit 9474e89ba4ecae253b585eb6b3e1d85f4e108f01
Latest commit: fb96c1e98c Asynchronous tokenization (#2879) by Antoni Baum, 2024-03-15 23:37:01 +00:00
File                     Date                        Last commit
__init__.py              2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
conftest.py              2024-03-11 11:03:45 -07:00  Add distributed model executor abstraction (#3191)
test_gemma.py            2024-02-28 13:03:28 -08:00  Add LoRA support for Gemma (#3050)
test_layer_variation.py  2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_layers.py           2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_llama.py            2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_lora_manager.py     2024-02-14 00:55:45 +01:00  Add LoRA support for Mixtral (#2831)
test_lora.py             2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_mixtral.py          2024-03-10 19:49:14 -07:00  Re-enable the 80 char line width limit (#3305)
test_punica.py           2024-03-13 12:18:25 -07:00  Add missing kernel for CodeLlama-34B on A/H100 (no tensor parallelism) when using Multi-LoRA. (#3350)
test_tokenizer_group.py  2024-03-15 23:37:01 +00:00  Asynchronous tokenization (#2879)
test_utils.py            2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)
test_worker.py           2024-02-01 15:46:39 -08:00  Remove hardcoded device="cuda" to support more devices (#2503)
utils.py                 2024-01-23 15:26:37 -08:00  [Experimental] Add multi-LoRA support (#1804)