vllm / vllm / model_executor
Commit: 2b5bf20988edaab21621b78a9eb589edc93f2763

Latest commit by Yongzao: [torch.compile] Adding torch compile annotations to some models (#9876)
Signed-off-by: youkaichao <youkaichao@gmail.com>
Co-authored-by: youkaichao <youkaichao@gmail.com>
2024-11-01 00:25:47 -07:00
guided_decoding/       [Frontend] Bad words sampling parameter (#9717)                                         2024-10-26 16:29:38 +00:00
layers/                [Bugfix] Fix layer skip logic with bitsandbytes (#9887)                                 2024-11-01 13:12:44 +08:00
model_loader/          [Model] Support math-shepherd-mistral-7b-prm model (#9697)                              2024-10-30 09:33:42 -07:00
models/                [torch.compile] Adding torch compile annotations to some models (#9876)                 2024-11-01 00:25:47 -07:00
__init__.py            [Performance] Optimize e2e overheads: Reduce python allocations (#7162)                 2024-08-08 21:34:28 -07:00
custom_op.py           [torch.compile] rework compile control with piecewise cudagraph (#9715)                 2024-10-29 23:03:49 -07:00
parameter.py           [Kernel] (2/N) Machete - Integrate into CompressedTensorsWNA16 and GPTQMarlin (#7701)  2024-09-23 13:46:26 -04:00
pooling_metadata.py    [Model][Misc] Add e5-mistral-7b-instruct and Embedding API (#3734)                      2024-05-11 11:30:37 -07:00
sampling_metadata.py   [Spec Decode] (1/2) Remove batch expansion (#8839)                                      2024-10-01 16:04:42 -07:00
utils.py               [Hardware] using current_platform.seed_everything (#9785)                               2024-10-29 14:47:44 +00:00