Isotr0py | 2dd34371a6 | [Bugfix] Fix RMSNorm forward in InternViT attention qk_layernorm (#6992) | 2024-08-01 12:00:28 -07:00
Travis Johnson | 630dd9e0ae | [Bugfix][Model] Skip loading lm_head weights if using tie_word_embeddings (#6758) | 2024-07-31 19:49:11 -07:00
    Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
xuyi | 1d2e7fb73f | [Model] Pipeline parallel support for Qwen2 (#6924) | 2024-07-31 18:49:51 -07:00
Michael Goin | 460c1884e3 | [Bugfix] Support cpu offloading with fp8 quantization (#6960) | 2024-07-31 12:47:46 -07:00
Avshalom Manevich | 2ee8d3ba55 | [Model] use FusedMoE layer in Jamba (#6935) | 2024-07-31 12:00:24 -07:00
Cyrus Leung | daed30c4a9 | [Bugfix] Fix feature size calculation for LLaVA-NeXT (#6982) | 2024-07-31 23:46:17 +08:00
Alphi | 2f4e108f75 | [Bugfix] Clean up MiniCPM-V (#6939) | 2024-07-31 14:39:19 +00:00
    Co-authored-by: hezhihui <hzh7269@modelbest.cn>
    Co-authored-by: Cyrus Leung <cyrus.tl.leung@gmail.com>
Roger Wang | c66c7f86ac | [Bugfix] Fix PaliGemma MMP (#6930) | 2024-07-30 02:20:57 -07:00
Isotr0py | 7cbd9ec7a9 | [Model] Initialize support for InternVL2 series models (#6514) | 2024-07-29 10:16:30 +00:00
    Co-authored-by: Roger Wang <ywang@roblox.com>
Cyrus Leung | 1ad86acf17 | [Model] Initial support for BLIP-2 (#5920) | 2024-07-27 11:53:07 +00:00
    Co-authored-by: ywang96 <ywang@roblox.com>
tomeras91 | ed94e4f427 | [Bugfix][Model] Jamba assertions and no chunked prefill by default for Jamba (#6784) | 2024-07-26 20:45:31 -07:00
Michael Goin | 07278c37dd | [Model] Support Nemotron models (Nemotron-3, Nemotron-4, Minitron) (#6611) | 2024-07-26 14:33:42 -04:00
Alphi | 9e169a4c61 | [Model] Adding support for MiniCPM-V (#4087) | 2024-07-24 20:59:30 -07:00
Roger Wang | 0a740a11ba | [Bugfix] Fix token padding for chameleon (#6724) | 2024-07-24 01:05:09 -07:00
Roger Wang | 1bedf210e3 | Bump transformers version for Llama 3.1 hotfix and patch Chameleon (#6690) | 2024-07-23 13:47:48 -07:00
Travis Johnson | 507ef787d8 | [Model] Pipeline Parallel Support for DeepSeek v2 (#6519) | 2024-07-23 12:22:09 -07:00
    Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Roger Wang | 22fa2e35cb | [VLM][Model] Support image input for Chameleon (#6633) | 2024-07-22 23:50:48 -07:00
Michael Goin | 9e0b558a09 | [Misc] Support FP8 kv cache scales from compressed-tensors (#6528) | 2024-07-23 04:11:50 +00:00
Jae-Won Chung | 89c1c6a196 | [Bugfix] Fix vocab_size field access in llava_next.py (#6624) | 2024-07-22 05:02:51 +00:00
Roger Wang | c9eef37f32 | [Model] Initial Support for Chameleon (#5770) | 2024-07-21 17:37:51 -07:00
Isotr0py | 25e778aa16 | [Model] Refactor and decouple phi3v image embedding (#6621) | 2024-07-21 16:07:58 -07:00
Matt Wong | 06d6c5fe9f | [Bugfix][CI/Build][Hardware][AMD] Fix AMD tests, add HF cache, update CK FA, add partially supported model notes (#6543) | 2024-07-20 09:39:07 -07:00
Antoni Baum | 9ed82e7074 | [Misc] Small perf improvements (#6520) | 2024-07-19 12:10:56 -07:00
Robert Shaw | dbe5588554 | [ Misc ] non-uniform quantization via compressed-tensors for Llama (#6515) | 2024-07-18 22:39:18 -04:00
Michael Goin | 15c6a079b1 | [Model] Support Mistral-Nemo (#6548) | 2024-07-18 20:31:50 +00:00
youkaichao | 1c27d25fb5 | [core][model] yet another cpu offload implementation (#6496) | 2024-07-17 20:54:35 -07:00
    Co-authored-by: Michael Goin <michael@neuralmagic.com>
Cody Yu | b5af8c223c | [Model] Pipeline parallel support for Mixtral (#6516) | 2024-07-17 19:26:04 -07:00
Wushi Dong | 1d094fd7c0 | [Distributed][PP] only create embedding & lm head when necessary (#6455) | 2024-07-16 19:20:26 -07:00
    original title: [Distributed][Model] Rank-based Component Creation for Pipeline Parallelism Memory Optimization
Michael Goin | 978aed5300 | [Kernel][Attention] Separate Attention.kv_scale into k_scale and v_scale (#6081) | 2024-07-16 15:31:32 -07:00
Mor Zusman | 9ad32dacd9 | [BugFix][Model] Jamba - Handle aborted requests, Add tests and fix cleanup bug (#6425) | 2024-07-16 01:32:55 +00:00
    Co-authored-by: Mor Zusman <morz@ai21.com>
youkaichao | 4cf256ae7f | [misc][distributed] fix pp missing layer condition (#6446) | 2024-07-15 10:32:35 -07:00
Roger Wang | 6ae1597ddf | [VLM] Minor space optimization for ClipVisionModel (#6436) | 2024-07-15 17:29:51 +08:00
youkaichao | 69672f116c | [core][distributed] simplify code to support pipeline parallel (#6406) | 2024-07-14 21:20:51 -07:00
Isotr0py | 540c0368b1 | [Model] Initialize Fuyu-8B support (#3924) | 2024-07-14 05:27:14 +00:00
    Co-authored-by: Roger Wang <ywang@roblox.com>
Robert Shaw | fb6af8bc08 | [ Misc ] Apply MoE Refactor to Deepseekv2 To Support Fp8 (#6417) | 2024-07-13 20:03:58 -07:00
Cyrus Leung | 024ad87cdc | [Bugfix] Fix dtype mismatch in PaliGemma (#6367) | 2024-07-12 08:22:18 -07:00
xwjiang2010 | 1df43de9bb | [bug fix] Fix llava next feature size calculation. (#6339) | 2024-07-11 17:21:10 +00:00
    Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
Thomas Parnell | 8a1415cf77 | [Bugfix] GPTBigCodeForCausalLM: Remove lm_head from supported_lora_modules. (#6326) | 2024-07-11 07:05:59 -07:00
    Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
    Co-authored-by: Travis Johnson <tsjohnso@us.ibm.com>
Thomas Parnell | c38eba3046 | [Bugfix] MLPSpeculator: Use ParallelLMHead in tie_weights=False case. (#6303) | 2024-07-10 09:04:07 -04:00
    Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>
Woosuk Kwon | e72ae80b06 | [Bugfix] Support 2D input shape in MoE layer (#6287) | 2024-07-10 09:03:16 -04:00
Abhinav Goyal | 2416b26e11 | [Speculative Decoding] Medusa Implementation with Top-1 proposer (#4978) | 2024-07-09 18:34:02 -07:00
tomeras91 | ddc369fba1 | [Bugfix] Mamba cache Cuda Graph padding (#6214) | 2024-07-08 11:25:51 -07:00
Roger Wang | 6206dcb29e | [Model] Add PaliGemma (#5189) | 2024-07-07 09:25:50 +08:00
    Co-authored-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
Cyrus Leung | ea4b570483 | [VLM] Cleanup validation and update docs (#6149) | 2024-07-05 05:49:38 +00:00
Roger Wang | a41357e941 | [VLM] Improve consistency between feature size calculation and dummy data for profiling (#6146) | 2024-07-05 09:29:47 +08:00
Cyrus Leung | ae96ef8fbd | [VLM] Calculate maximum number of multi-modal tokens by model (#6121) | 2024-07-04 16:37:23 -07:00
Lily Liu | 69ec3ca14c | [Kernel][Model] logits_soft_cap for Gemma2 with flashinfer (#6051) | 2024-07-04 16:35:51 -07:00
    Co-authored-by: Simon Mo <simon.mo@hey.com>
xwjiang2010 | d9e98f42e4 | [vlm] Remove vision language config. (#6089) | 2024-07-03 22:14:16 +00:00
    Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
    Co-authored-by: Roger Wang <ywang@roblox.com>
Roger Wang | 7cd2ebb025 | [Bugfix] Fix compute_logits in Jamba (#6093) | 2024-07-03 00:32:35 -07:00
Cyrus Leung | 9831aec49f | [Core] Dynamic image size support for VLMs (#5276) | 2024-07-02 20:34:00 -07:00
    Signed-off-by: Xiaowei Jiang <xwjiang2010@gmail.com>
    Co-authored-by: Xiaowei Jiang <xwjiang2010@gmail.com>
    Co-authored-by: ywang96 <ywang@roblox.com>
    Co-authored-by: xwjiang2010 <87673679+xwjiang2010@users.noreply.github.com>
    Co-authored-by: Roger Wang <136131678+ywang96@users.noreply.github.com>