biondizzle/vllm
vllm/cacheflow/models/llama.py (at commit 7a7929abe8e2fd6a4688487c471a1ee1fde0edd2)
Woosuk Kwon, 88c0268a18: Implement custom kernel for LLaMA rotary embedding (#14), 2023-03-30 11:04:21 -07:00
11 KiB
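The commit above refers to a custom kernel for LLaMA's rotary position embedding (RoPE). The kernel itself is not shown on this page; as background, here is a minimal NumPy sketch of the rotary embedding math it accelerates, using the interleaved-pair formulation. The function name, shapes, and default base are illustrative assumptions, not taken from this repository:

```python
import numpy as np

def rotary_embedding(x: np.ndarray, positions: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding (RoPE) to query or key vectors.

    x: (seq_len, head_dim) array; head_dim must be even.
    positions: (seq_len,) integer token positions.
    """
    head_dim = x.shape[-1]
    # One inverse frequency per dimension pair: base^(-2i/head_dim).
    inv_freq = 1.0 / (base ** (np.arange(0, head_dim, 2) / head_dim))
    # Rotation angle for each (position, pair): theta[p, i] = pos * inv_freq[i].
    theta = positions[:, None].astype(np.float64) * inv_freq[None, :]
    cos, sin = np.cos(theta), np.sin(theta)
    # Rotate each interleaved (even, odd) coordinate pair by its angle.
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out
```

Because each pair is rotated by a position-dependent angle, position 0 leaves the vector unchanged and every position preserves the vector's norm; a fused kernel computes the same rotation in place instead of materializing the intermediate cos/sin tables.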