[Kernel] Expand MoE weight loading + Add Fused Marlin MoE Kernel (#7527)

Author: Dipika Sikka
Date: 2024-08-21 19:17:10 -04:00
Committed by: GitHub
Co-authored-by: ElizaWszola <eliza@neuralmagic.com>
Parent: 5844017285
Commit: 8678a69ab5
15 changed files with 2375 additions and 85 deletions

@@ -13,5 +13,7 @@ compressed-tensors, nm-testing/tinyllama-oneshot-w8a16-per-channel, main
 compressed-tensors, nm-testing/Meta-Llama-3-8B-FP8-compressed-tensors-test, main
 compressed-tensors, nm-testing/Phi-3-mini-128k-instruct-FP8, main
 compressed-tensors, neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16, main
+compressed-tensors, nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-quantized, main
+compressed-tensors, nm-testing/Mixtral-8x7B-Instruct-v0.1-W4A16-channel-quantized, main
 awq, casperhansen/mixtral-instruct-awq, main
 awq_marlin, casperhansen/mixtral-instruct-awq, main
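Each row in the test config above appears to follow a `quantization-method, model-stub, revision` layout. As a minimal sketch of how such rows could be consumed (the `ModelConfig` type and `parse_model_configs` helper are hypothetical, not part of the vLLM codebase):

```python
from typing import List, NamedTuple


class ModelConfig(NamedTuple):
    quantization: str  # e.g. "compressed-tensors", "awq", "awq_marlin"
    model_stub: str    # Hugging Face model id
    revision: str      # branch or revision, e.g. "main"


def parse_model_configs(text: str) -> List[ModelConfig]:
    """Parse 'quantization, model, revision' rows (hypothetical helper)."""
    configs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        quant, stub, rev = (field.strip() for field in line.split(","))
        configs.append(ModelConfig(quant, stub, rev))
    return configs


rows = parse_model_configs(
    "awq, casperhansen/mixtral-instruct-awq, main\n"
    "awq_marlin, casperhansen/mixtral-instruct-awq, main\n"
)
print(rows[1].quantization)  # -> awq_marlin
```

The test harness would then load each `(quantization, model_stub, revision)` triple and verify generation succeeds; the two added Mixtral rows exercise the new MoE weight-loading path.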