[BugFix][LoRA] TritonExperts is ModularMoEPath for FP8 models (#33393)

Signed-off-by: Danielle Robinson <dmmaddix@amazon.com>
Co-authored-by: Danielle Robinson <dmmaddix@amazon.com>
Danielle Robinson
2026-01-30 07:27:42 -08:00
committed by GitHub
parent 8f5d51203b
commit 74898a7015


@@ -143,9 +143,7 @@ class FusedMoEWithLoRA(BaseLayerWithLoRA):
                 m_fused_moe_fn.fused_experts, (MarlinExperts, UnfusedOAITritonExperts)
             )
         else:
-            assert isinstance(
-                m_fused_moe_fn.fused_experts, (MarlinExperts, TritonExperts)
-            )
+            assert isinstance(m_fused_moe_fn.fused_experts, TritonExperts)
 
         def fwd_decorator(layer, func):
             def wrapper(*args, **kwargs):
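
The change above narrows the `isinstance` check in the non-OAI branch from a tuple of classes to `TritonExperts` alone, since FP8 models take the modular path where the fused experts object is a `TritonExperts` instance. A minimal sketch of the effect, using stand-in stub classes (only `TritonExperts` and `MarlinExperts` are names from the diff; everything else here is hypothetical for illustration):

```python
# Stand-in stubs; the real classes live in vLLM's fused-MoE kernels.
class TritonExperts:
    pass


class MarlinExperts:
    pass


def check_fused_experts_old(fused_experts):
    # Before the fix: MarlinExperts also passed the assertion,
    # even though this branch expects the Triton modular path.
    assert isinstance(fused_experts, (MarlinExperts, TritonExperts))
    return True


def check_fused_experts_new(fused_experts):
    # After the fix: only TritonExperts (and subclasses) pass,
    # so a MarlinExperts instance fails fast with AssertionError.
    assert isinstance(fused_experts, TritonExperts)
    return True
```

With the old check, `check_fused_experts_old(MarlinExperts())` would silently succeed; the new check turns that into an immediate `AssertionError`, surfacing the misconfiguration instead of failing later inside the LoRA wrapper.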