[Bugfix][Model] Skip loading lm_head weights if using tie_word_embeddings (#6758)

Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Author: Travis Johnson
Date: 2024-07-31 20:49:11 -06:00
Committed by: GitHub
parent 23993a7997
commit 630dd9e0ae
4 changed files with 22 additions and 1 deletion
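
Background: tie_word_embeddings is a standard Hugging Face config flag; when it is true, the model reuses the input embedding matrix as the output (lm_head) projection. A quick way to inspect the flag for a given checkpoint (the model id below is only an example):

from transformers import AutoConfig

# Illustrative check; any Hugging Face model id can be substituted here.
config = AutoConfig.from_pretrained("allenai/OLMo-1B-hf")
print(config.tie_word_embeddings)  # True for models with tied embeddings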

@@ -343,6 +343,11 @@ class OlmoForCausalLM(nn.Module):
                 # Models trained using ColossalAI may include these tensors in
                 # the checkpoint. Skip them.
                 continue
+            # With tie_word_embeddings, we can skip lm_head.weight
+            # The weight might appear unnecessarily in the files if the model is
+            # processed with quantization, LoRA, fine-tuning, etc.
+            if self.config.tie_word_embeddings and "lm_head.weight" in name:
+                continue
             for (param_name, weight_name, shard_id) in stacked_params_mapping:
                 if weight_name not in name:
                     continue
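
A note on why the skip is safe: with tied embeddings, lm_head shares its weight tensor with the input embedding, so a lm_head.weight entry in the checkpoint carries no new information, and the model's parameter dict contains no matching entry to load it into (the tied tensor is registered only once). The sketch below illustrates this with a toy module; TiedLM and its load_weights are hypothetical stand-ins for illustration, not vLLM's OlmoForCausalLM.

import torch
import torch.nn as nn

class TiedLM(nn.Module):
    """Toy model whose output projection is tied to the input embedding."""

    def __init__(self, vocab_size: int = 8, hidden_size: int = 4):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden_size)
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)
        # Tie the weights: both modules now share one Parameter object.
        self.lm_head.weight = self.embed_tokens.weight

    def load_weights(self, weights):
        # named_parameters() deduplicates tied tensors, so "lm_head.weight"
        # is absent from this dict and indexing it would raise KeyError.
        params_dict = dict(self.named_parameters())
        for name, loaded_weight in weights:
            if "lm_head.weight" in name:
                continue  # redundant tied weight; skip instead of crashing
            params_dict[name].data.copy_(loaded_weight)

model = TiedLM()
checkpoint = [
    ("embed_tokens.weight", torch.randn(8, 4)),
    # A stray copy of the tied weight, e.g. left behind by fine-tuning:
    ("lm_head.weight", torch.randn(8, 4)),
]
model.load_weights(checkpoint)
# The tie is intact: both names still refer to the same tensor.
assert model.lm_head.weight is model.embed_tokens.weight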