ee93f4f92a · 2024-07-02 22:25:17 +00:00 · Qubitium-ModelCloud
[CORE] Quantized lm-head Framework (#4442)
Co-authored-by: Robert Shaw <rshaw@neuralmagic.com>
Co-authored-by: ZX <zx@lbx.dev>

c5832d2ae9 · 2024-07-02 10:58:08 -07:00 · Murali Andoorveedu
[Core] Pipeline Parallel Support (#4412)
Signed-off-by: Muralidhar Andoorveedu <muralidhar.andoorveedu@centml.ai>

767c727a81 · 2024-06-07 14:10:21 -07:00 · Calvinn Ng
fix DbrxFusedNormAttention missing cache_config (#5340)
Co-authored-by: team <calvinn.ng@ahrefs.com>

a3a73ab069 · 2024-05-22 13:28:20 -07:00 · Cody Yu
[Misc] Load FP8 kv-cache scaling factors from checkpoints (#4893)
The 2nd PR for #4532. This PR supports loading FP8 kv-cache scaling factors from an FP8 checkpoint (with a .kv_scale parameter).

0fca3cdcf2 · 2024-05-13 10:47:25 -07:00 · Woosuk Kwon
[Misc] Enhance attention selector (#4751)

a62aaf1df5 · 2024-04-26 16:41:14 -04:00 · Cody Yu
[Misc][Refactor] Generalize linear_method to be quant_method (#4373)

69e1d2fb69 · 2024-04-16 11:34:39 -07:00 · Antoni Baum
[Core] Refactor model loading code (#4097)

63e7176f26 · 2024-04-10 15:33:30 -07:00 · youkaichao
[Core][Refactor] move parallel_utils into vllm/distributed (#3950)
Original PR title: [WIP][Core][Refactor] move vllm/model_executor/parallel_utils into vllm/distributed and vllm/device_communicators (#3950)

e24336b5a7 · 2024-03-27 13:01:46 -07:00 · Megha Agarwal
[Model] Add support for DBRX (#3660)