Make distinct code and console admonitions so readers are less likely to miss them (#20585)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Harry Mellor
2025-07-08 03:55:28 +01:00
committed by GitHub
parent 31c5d0a1b7
commit af107d5a0e
52 changed files with 192 additions and 162 deletions

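For context, MkDocs Material renders `??? <type> "<title>"` as a collapsible admonition. The old `??? Yaml` form used the word `Yaml` as the admonition type itself, so code and console blocks all got the same generic look and were easy to skim past; the new form uses dedicated `code` and `console` types with an explicit title, presumably styled distinctly via the project's extra CSS. A minimal sketch of the new syntax:

```markdown
??? code "Yaml"

    ```yaml
    # collapsible block, visually marked as code
    resources:
      ports: 8081
    ```

??? console "Commands"

    ```bash
    # collapsible block, visually marked as console commands
    sky serve status vllm
    ```
```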

@@ -24,7 +24,7 @@ sky check
 See the vLLM SkyPilot YAML for serving, [serving.yaml](https://github.com/skypilot-org/skypilot/blob/master/llm/vllm/serve.yaml).
-??? Yaml
+??? code "Yaml"
 ```yaml
 resources:
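The hunk above is truncated right at the serving YAML's `resources` block; a hedged sketch of what a SkyPilot `resources` section of this kind typically contains (the accelerator set and port are illustrative, not the exact contents of serve.yaml):

```yaml
resources:
  accelerators: {L4: 1, A10G: 1, A100: 1}  # any one of these GPU types satisfies the request (illustrative)
  ports: 8081                              # port the vLLM OpenAI-compatible server listens on (assumed)
```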
@@ -95,7 +95,7 @@ HF_TOKEN="your-huggingface-token" \
 SkyPilot can scale the service up to multiple replicas with built-in autoscaling, load balancing, and fault tolerance. You can do this by adding a `service` section to the YAML file.
-??? Yaml
+??? code "Yaml"
 ```yaml
 service:
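A hedged sketch of the kind of `service` section the hunk truncates, using SkyPilot's `replicas` and `readiness_probe` fields (values are illustrative):

```yaml
service:
  replicas: 2                  # number of vLLM replicas to load-balance across
  readiness_probe: /v1/models  # path SkyPilot polls before marking a replica READY
```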
@@ -111,7 +111,7 @@ SkyPilot can scale up the service to multiple service replicas with built-in aut
 max_completion_tokens: 1
 ```
-??? Yaml
+??? code "Yaml"
 ```yaml
 service:
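The `max_completion_tokens: 1` context line belongs to a readiness probe that exercises the model with a real but deliberately tiny completion request; a hedged sketch of that pattern (field values are illustrative):

```yaml
service:
  readiness_probe:
    path: /v1/chat/completions  # probe with an actual inference call, not just a health ping
    post_data:
      model: $MODEL_NAME        # assumed to be substituted from an env variable
      messages:
        - role: user
          content: Hello! What is your name?
      max_completion_tokens: 1  # cap the generation so the probe stays cheap
```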
@@ -186,7 +186,7 @@ vllm 2 1 xx.yy.zz.245 18 mins ago 1x GCP([Spot]{'L4': 1}) R
 After the service is READY, you can find a single endpoint for the service and access it through that endpoint:
-??? Commands
+??? console "Commands"
 ```bash
 ENDPOINT=$(sky serve status --endpoint 8081 vllm)
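Since vLLM exposes an OpenAI-compatible API, requests can be sent straight to that endpoint once it is resolved; a minimal sketch (the model name is illustrative):

```bash
# resolve the single load-balanced endpoint for the service
ENDPOINT=$(sky serve status --endpoint 8081 vllm)

# send a chat completion request through the load balancer
curl -L "http://$ENDPOINT/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Meta-Llama-3-8B-Instruct",
        "messages": [{"role": "user", "content": "Who are you?"}]
      }'
```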
@@ -220,7 +220,7 @@ service:
 This will scale the service up when the QPS exceeds 2 for each replica.
-??? Yaml
+??? code "Yaml"
 ```yaml
 service:
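A hedged sketch of the autoscaling variant of the `service` section, using SkyPilot's `replica_policy` with `target_qps_per_replica` to express the QPS rule described above (the min/max bounds are illustrative):

```yaml
service:
  replica_policy:
    min_replicas: 2            # never scale below this
    max_replicas: 4            # upper bound for the autoscaler
    target_qps_per_replica: 2  # add a replica once per-replica QPS exceeds 2
```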
@@ -285,7 +285,7 @@ sky serve down vllm
 It is also possible to access the Llama-3 service with a separate GUI frontend, so that user requests sent to the GUI are load-balanced across replicas.
-??? Yaml
+??? code "Yaml"
 ```yaml
 envs:
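The GUI YAML's `envs` block typically points the frontend at the single serve endpoint rather than at any individual replica, which is what makes the load balancing transparent to users; a hedged sketch (both values are placeholders):

```yaml
envs:
  MODEL_NAME: meta-llama/Meta-Llama-3-8B-Instruct  # model the GUI advertises (illustrative)
  ENDPOINT: x.x.x.x:3031                           # address returned by `sky serve status --endpoint` (placeholder)
```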