biondizzle/vllm
vllm/vllm/model_executor @ 852ef5b4f5481ce526c804ea234d1de0df91f48d
Latest commit: Zhuohan Li c957c741d9 "Enable safetensors loading for all models (#974)", 2023-09-07 15:49:52 -07:00
Name               Last commit                                        Last updated
layers/            [BugFix] Implement RoPE for GPT-J (#941)           2023-09-06 11:54:33 +09:00
models/            Enable safetensors loading for all models (#974)   2023-09-07 15:49:52 -07:00
parallel_utils/    Add Falcon support (new) (#592)                    2023-08-02 14:04:39 -07:00
__init__.py        [Quality] Add code formatter and linter (#326)     2023-07-03 11:31:55 -07:00
input_metadata.py  Add support for BLOOM (#331)                       2023-07-03 13:12:35 -07:00
model_loader.py    Enable safetensors loading for all models (#974)   2023-09-07 15:49:52 -07:00
utils.py           Change the name to vLLM (#150)                     2023-06-17 03:07:40 -07:00
weight_utils.py    Enable safetensors loading for all models (#974)   2023-09-07 15:49:52 -07:00