[Bugfix] support tie_word_embeddings for all models (#5724)
@@ -313,7 +313,7 @@ class LlavaForConditionalGeneration(nn.Module, SupportsMultiModal):
278, 2793, 310, 278, 1967, 29973, 13, 22933, 9047, 13566, 29901]`.

To reserve space in KV cache, we have to insert placeholder tokens
before they are inputted to the model, so the input processor prepends
additional image tokens (denoted as `32000`), resulting in:
`[1, 3148, 1001, 29901, 29871, 32000, ..., 32000, 29871, 13, 5618,
29915, 29879, 278, 2793, 310, 278, 1967, 29973, 13, 22933, 9047, 13566,
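The placeholder expansion described above can be sketched as follows. This is a hypothetical illustration, not vLLM's actual input processor: the function name `expand_image_tokens` and the placeholder count are assumptions (the real count depends on the vision encoder's patch configuration).

```python
def expand_image_tokens(input_ids, image_token_id=32000, num_placeholders=576):
    """Replace each image token with `num_placeholders` copies of itself,
    reserving one KV-cache slot per image feature (sketch only)."""
    expanded = []
    for tok in input_ids:
        if tok == image_token_id:
            # Reserve space for the image features in the KV cache.
            expanded.extend([image_token_id] * num_placeholders)
        else:
            expanded.append(tok)
    return expanded

# A single `32000` in the prompt becomes a run of placeholder tokens:
prompt = [1, 3148, 1001, 29901, 29871, 32000, 29871, 13]
out = expand_image_tokens(prompt, num_placeholders=4)
```

With `num_placeholders=4`, the single image token expands in place while all surrounding text tokens are left untouched.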
@@ -331,7 +331,7 @@ class LlavaForConditionalGeneration(nn.Module, SupportsMultiModal):
input_ids: Flattened (concatenated) input_ids corresponding to a
    batch.
pixel_values: The pixels in each input image.

See also:
    :class:`LlavaImageInputs`
"""