[Misc][Doc] Add Example of using OpenAI Server with VLM (#5832)

This commit is contained in:
Roger Wang
2024-06-25 20:34:25 -07:00
committed by GitHub
parent dda4811591
commit 3aa7b6cf66
3 changed files with 101 additions and 3 deletions

@@ -130,6 +130,8 @@ To consume the server, you can use the OpenAI client like in the example below:
)
print("Chat response:", chat_response)
A full code example can be found in `examples/openai_vision_api_client.py <https://github.com/vllm-project/vllm/blob/main/examples/openai_vision_api_client.py>`_.
.. note::
    By default, the timeout for fetching images through an HTTP URL is ``5`` seconds. You can override this by setting the environment variable:
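Where the fetch timeout is a concern (e.g. slow or unreliable image hosts), one alternative is to embed the image directly in the request as a base64 data URL instead of a remote HTTP URL. A minimal sketch of building such a URL is below; the helper name `image_to_data_url` is illustrative, not part of vLLM or the OpenAI client:

```python
import base64


def image_to_data_url(path: str, mime: str = "image/jpeg") -> str:
    """Read a local image file and return it as a base64 data URL.

    Illustrative helper: the resulting string can be passed wherever
    the chat API accepts an ``image_url`` value.
    """
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"
```

The returned string can then be used as the ``"url"`` field of an ``image_url`` content part in the chat completion request, avoiding any server-side image fetch.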