[Core]: Option To Use Prompt Token Ids Inside Logits Processor (#4985)

Author: Elisei Smirnov
Co-authored-by: Elisei Smirnov <el.smirnov@innopolis.university>
Date: 2024-05-24 01:04:24 +03:00 (committed by GitHub)
Parent: a1242324c9
Commit: e3470f8753
2 changed files with 24 additions and 8 deletions

@@ -18,10 +18,14 @@ class SamplingType(IntEnum):
     BEAM = 3
-LogitsProcessor = Callable[[List[int], torch.Tensor], torch.Tensor]
-"""LogitsProcessor is a function that takes a list of previously generated
-tokens and a tensor of the logits for the next token, and returns a modified
-tensor of logits to sample from."""
+LogitsProcessor = Union[Callable[[List[int], torch.Tensor], torch.Tensor],
+                        Callable[[List[int], List[int], torch.Tensor],
+                                 torch.Tensor]]
+"""LogitsProcessor is a function that takes a list
+of previously generated tokens, the logits tensor
+for the next token and, optionally, prompt tokens as a
+first argument, and returns a modified tensor of logits
+to sample from."""
 class SamplingParams:
@@ -95,7 +99,8 @@ class SamplingParams:
     spaces_between_special_tokens: Whether to add spaces between special
         tokens in the output. Defaults to True.
     logits_processors: List of functions that modify logits based on
-        previously generated tokens.
+        previously generated tokens, and optionally prompt tokens as
+        a first argument.
     truncate_prompt_tokens: If set to an integer k, will use only the last k
         tokens from the prompt (i.e., left truncation). Defaults to None
         (i.e., no truncation).
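
With the widened `Union` type, a logits processor may take either `(generated_token_ids, logits)` or, after this commit, `(prompt_token_ids, generated_token_ids, logits)`. A minimal sketch of the three-argument form is below; the function name and the masking rule are illustrative examples, not part of the commit:

```python
from typing import List

import torch


def ban_prompt_repetition(prompt_token_ids: List[int],
                          generated_token_ids: List[int],
                          logits: torch.Tensor) -> torch.Tensor:
    """Illustrative three-argument processor: forbid sampling any token
    that already appeared in the prompt by setting its logit to -inf."""
    for token_id in set(prompt_token_ids):
        logits[token_id] = float("-inf")
    return logits
```

A processor like this would be passed via `SamplingParams(logits_processors=[ban_prompt_repetition])`; since it declares three positional arguments, the sampler can now supply the prompt token ids as the first one.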