Stop using title frontmatter and fix doc that can only be reached by search (#20623)

Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Author: Harry Mellor
Date: 2025-07-08 11:27:40 +01:00
Committed by: GitHub
Parent: b4bab81660
Commit: b942c094e3
81 changed files with 82 additions and 238 deletions


@@ -1,6 +1,4 @@
----
-title: KServe
----
+# KServe
 vLLM can be deployed with [KServe](https://github.com/kserve/kserve) on Kubernetes for highly scalable distributed model serving.


@@ -1,6 +1,4 @@
----
-title: KubeAI
----
+# KubeAI
 [KubeAI](https://github.com/substratusai/kubeai) is a Kubernetes operator that enables you to deploy and manage AI models on Kubernetes. It provides a simple and scalable way to deploy vLLM in production. Functionality such as scale-from-zero, load-based autoscaling, model caching, and much more is provided out of the box with zero external dependencies.


@@ -1,6 +1,4 @@
----
-title: Llama Stack
----
+# Llama Stack
 vLLM is also available via [Llama Stack](https://github.com/meta-llama/llama-stack).


@@ -1,6 +1,4 @@
----
-title: llmaz
----
+# llmaz
 [llmaz](https://github.com/InftyAI/llmaz) is an easy-to-use and advanced inference platform for large language models on Kubernetes, aimed at production use. It uses vLLM as the default model serving backend.


@@ -1,6 +1,4 @@
----
-title: Production stack
----
+# Production stack
 Deploying vLLM on Kubernetes is a scalable and efficient way to serve machine learning models. This guide walks you through deploying vLLM using the [vLLM production stack](https://github.com/vllm-project/production-stack). Born out of a Berkeley-UChicago collaboration, the vLLM production stack is an officially released, production-optimized codebase under the [vLLM project](https://github.com/vllm-project), designed for LLM deployment with:
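Every hunk in this commit applies the same mechanical change: delete the leading `title:` frontmatter block and replace it with an equivalent top-level Markdown heading. A minimal sketch of how such a migration could be scripted (a hypothetical helper, not part of this commit):

```python
import re

def strip_title_frontmatter(text: str) -> str:
    """Replace a leading `title:` frontmatter block with a `#` heading.

    Files without a leading `title:` frontmatter block are returned unchanged.
    """
    match = re.match(r"^---\ntitle: (.+)\n---\n", text)
    if match is None:
        return text  # no title frontmatter; leave the file as-is
    # Emit the heading, then keep everything after the closing `---`.
    return f"# {match.group(1)}\n" + text[match.end():]
```

Run over each Markdown file's contents, this reproduces the 3-lines-removed, 1-line-added shape seen in the `@@ -1,6 +1,4 @@` hunks above.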