[Hardware] Replace torch.cuda.empty_cache with torch.accelerator.empty_cache (#30681)
Signed-off-by: Kunshang Ji <kunshang.ji@intel.com>
Signed-off-by: Kunshang Ji <jikunshang95@gmail.com>
Co-authored-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
@@ -769,7 +769,7 @@ def test_mixtral_moe(
         requires_grad=False,
     )
     torch.cuda.synchronize()
-    torch.cuda.empty_cache()
+    torch.accelerator.empty_cache()
 
     # FIXME (zyongye) fix this after we move self.kernel
     # assignment in FusedMoE.__init__
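For context, a minimal sketch of the device-agnostic pattern this commit adopts. It assumes a recent PyTorch (roughly 2.8+, where torch.accelerator.empty_cache() is available); the helper name free_cached_memory is hypothetical and not part of vLLM or PyTorch:

```python
# Sketch only: illustrates torch.accelerator as a backend-neutral
# replacement for torch.cuda calls. Assumes PyTorch >= 2.8, where
# torch.accelerator.empty_cache() exists. `free_cached_memory` is a
# hypothetical helper name, not an API from vLLM or PyTorch.
import torch


def free_cached_memory() -> None:
    """Release cached allocator blocks on whichever accelerator is active."""
    if torch.accelerator.is_available():
        # Wait for pending kernels, then return unused cached blocks to the
        # device allocator (CUDA, XPU, ...) without touching live tensors.
        torch.accelerator.synchronize()
        torch.accelerator.empty_cache()


if __name__ == "__main__":
    if torch.accelerator.is_available():
        device = torch.accelerator.current_accelerator()
        x = torch.randn(1024, 1024, device=device)
        del x  # the tensor is freed, but its block stays in the allocator cache
        free_cached_memory()
```

The point of the change is that torch.accelerator dispatches to the active backend, so the same test code frees cached memory on Intel XPU and other accelerators, not just CUDA.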