---
title: Modal
---
[](){ #deployment-modal }
vLLM can be run on cloud GPUs with [Modal](https://modal.com), a serverless computing platform designed for fast auto-scaling.
For details on how to deploy vLLM on Modal, see [this tutorial in the Modal documentation](https://modal.com/docs/examples/vllm_inference).