biondizzle/vllm
vllm/vllm at commit c894836108732d0cbb6fce15aeda8de1218a380d
Andre Slavescu c894836108 [Model] Add support for GPT-J (#226)
Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
2023-07-08 17:55:16 -07:00
core                 [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00
engine               Add trust-remote-code flag to handle remote tokenizers (#364)  2023-07-07 11:04:58 -07:00
entrypoints          Sort the outputs before return (#402)                          2023-07-08 14:48:18 -07:00
model_executor       [Model] Add support for GPT-J (#226)                           2023-07-08 17:55:16 -07:00
transformers_utils   Add trust_remote_code arg to get_config (#405)                 2023-07-08 15:24:17 -07:00
worker               [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00
__init__.py          Bump up the version (#300)                                     2023-07-04 21:41:53 -07:00
block.py             [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00
config.py            Add trust_remote_code arg to get_config (#405)                 2023-07-08 15:24:17 -07:00
logger.py            [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00
outputs.py           [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00
sampling_params.py   [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00
sequence.py          avoid python list copy in sequence initialization (#401)       2023-07-08 12:42:08 -07:00
utils.py             [Quality] Add code formatter and linter (#326)                 2023-07-03 11:31:55 -07:00