[Docs] Use 1-2-3 list for deploy steps in deployment/frameworks/ (#24633)

Signed-off-by: windsonsea <haifeng.yao@daocloud.io>
This commit is contained in:
Michael Yao
2025-09-11 16:50:12 +08:00
committed by GitHub
parent ba6011027d
commit d14c4ebf08
6 changed files with 98 additions and 90 deletions


@@ -6,27 +6,31 @@ It allows you to deploy a large language model (LLM) server with vLLM as the bac
## Prerequisites
Set up the vLLM environment:
```bash
pip install vllm
```
## Deploy
1. Start the vLLM server with a supported chat completion model, e.g.:
    ```bash
    vllm serve qwen/Qwen1.5-0.5B-Chat
    ```
1. Download and install [Chatbox desktop](https://chatboxai.app/en#download).
1. On the bottom left of the settings page, click `Add Custom Provider` and fill in:
    - API Mode: `OpenAI API Compatible`
    - Name: `vllm`
    - API Host: `http://{vllm server host}:{vllm server port}/v1`
    - API Path: `/chat/completions`
    - Model: `qwen/Qwen1.5-0.5B-Chat`
    ![](../../assets/deployment/chatbox-settings.png)
1. Go to `Just chat`, and start chatting:
    ![](../../assets/deployment/chatbox-chat.png)
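The `API Host` and `API Path` settings above combine into the single endpoint URL that Chatbox calls. A minimal shell sketch, assuming the server runs locally on vLLM's default port 8000 (`localhost` and `8000` are placeholder assumptions, not values from this guide):

```shell
# Sketch of how Chatbox's custom-provider settings combine into one URL.
# Host and port are assumptions; substitute your own deployment's values.
VLLM_HOST=localhost
VLLM_PORT=8000
API_HOST="http://${VLLM_HOST}:${VLLM_PORT}/v1"
API_PATH="/chat/completions"
# Full endpoint Chatbox will call for chat completions:
echo "${API_HOST}${API_PATH}"
```

Before configuring Chatbox, you can sanity-check that the vLLM server answers at this URL (for example with `curl`), which rules out host/port typos as a source of connection errors.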