[Doc][CI/Build] Update docs and tests to use vllm serve (#6431)
@@ -1,9 +1,7 @@
 """An example showing how to use vLLM to serve VLMs.

 Launch the vLLM server with the following command:
-python -m vllm.entrypoints.openai.api_server \
-    --model llava-hf/llava-1.5-7b-hf \
-    --chat-template template_llava.jinja
+vllm serve llava-hf/llava-1.5-7b-hf --chat-template template_llava.jinja
 """
 import base64
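The docstring change only swaps the launch command; the `import base64` context line hints at how the rest of the example talks to the server. As a hedged sketch (the helper name `build_image_message` and the exact payload shape are assumptions based on the OpenAI-compatible vision chat format, not part of this commit), a client might embed a local image as a base64 data URL like this:

```python
import base64

# Hypothetical helper: build an OpenAI-style chat message that embeds a
# local image as a base64 data URL, the shape an OpenAI-compatible
# vision endpoint typically accepts for models like LLaVA.
def build_image_message(image_bytes: bytes, prompt: str) -> dict:
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
            },
        ],
    }

# Example use with stand-in JPEG bytes; a real script would read a file.
msg = build_image_message(b"\xff\xd8\xff\xe0", "What is in this image?")
```

This message would then be posted to the server started with `vllm serve`, e.g. via an OpenAI-compatible client pointed at the local endpoint.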