[Kernel][Backend][Model] Blocksparse flash attention kernel and Phi-3-Small model (#4799)

Co-authored-by: beagleski <yunanzhang@microsoft.com>
Co-authored-by: bapatra <bapatra@microsoft.com>
Co-authored-by: Barun Patra <codedecde@users.noreply.github.com>
Co-authored-by: Michael Goin <michael@neuralmagic.com>
Author: Eric Xihui Lin
Date: 2024-05-25 01:00:52 -04:00
Committed by: GitHub
Parent: e64fde4b01
Commit: 8e192ff967
23 changed files with 2445 additions and 87 deletions


@@ -100,6 +100,7 @@ class OpenAIServing:
                 token_logprob = step_top_logprobs[token_id].logprob
                 token = step_top_logprobs[token_id].decoded_token
                 logprobs.tokens.append(token)
+                token_logprob = max(token_logprob, -9999.0)
                 logprobs.token_logprobs.append(token_logprob)
                 if num_output_top_logprobs:
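
The added line clamps each token logprob to a finite floor before it is appended to the response. A minimal sketch of why this matters (the helper name `clamp_logprob` is hypothetical, not from the diff): a logprob of `float("-inf")` cannot be serialized as standard JSON, so flooring it at -9999.0 keeps the OpenAI-compatible response valid.

```python
import json

def clamp_logprob(logprob: float, floor: float = -9999.0) -> float:
    # Mirrors the diff's `max(token_logprob, -9999.0)`: replace -inf
    # (or anything below the floor) with a large negative finite value.
    return max(logprob, floor)

# -inf would make json.dumps emit the non-standard token "-Infinity";
# after clamping, the payload is plain JSON.
payload = {"token_logprobs": [clamp_logprob(float("-inf")), clamp_logprob(-0.25)]}
print(json.dumps(payload))
```

With the clamp applied, `json.dumps` produces `{"token_logprobs": [-9999.0, -0.25]}`, which any strict JSON parser accepts.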