42 changes: 42 additions & 0 deletions examples/basics/kubernetes/shared_frontend/README.md
@@ -0,0 +1,42 @@
# Shared Dynamo Frontend
This folder contains Kubernetes manifests that deploy the Dynamo frontend component as a standalone DynamoGraphDeployment (DGD) alongside two models.
The frontend is shared across both models. It is deployed to the Dynamo namespace `dynamo`, a reserved namespace name that lets the frontend observe deployed models across all Dynamo namespaces.
A shared PVC is configured to store model checkpoint weights fetched from Hugging Face.

1. Install the Dynamo Kubernetes platform Helm chart
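A minimal sketch of this step is shown below. The chart reference is a placeholder and the platform CRDs may need to be installed separately; follow the Dynamo Kubernetes installation guide for the exact commands for your release.
```sh
# Illustrative only: the chart reference below is a placeholder; see the
# Dynamo Kubernetes installation docs for the authoritative install commands.
export NAMESPACE=dynamo-demo   # Kubernetes namespace used throughout this guide
kubectl create namespace ${NAMESPACE}
helm install dynamo-platform <dynamo-platform-chart> --namespace ${NAMESPACE}
```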
2. Create a Kubernetes secret with your Hugging Face token, then apply the manifests
```sh
export HF_TOKEN=YOUR_HF_TOKEN
kubectl create secret generic hf-token-secret \
--from-literal=HF_TOKEN=${HF_TOKEN} \
--namespace ${NAMESPACE}
kubectl apply -f shared_frontend.yaml --namespace ${NAMESPACE}
```
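You can watch the pods come up before testing; the DGD resource name used below is an assumption (the lowercase plural of the CRD kind):
```sh
# Wait until the frontend and worker pods are Running and Ready.
kubectl get pods --namespace ${NAMESPACE} --watch
# List the DynamoGraphDeployment resources themselves (resource name assumed).
kubectl get dynamographdeployments --namespace ${NAMESPACE}
```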
3. Test the deployment and run benchmarks
After deployment, forward the frontend service to access the API:
```sh
kubectl port-forward svc/frontend-frontend 8000:8000 -n ${NAMESPACE}
```
Confirm that both deployed models are present in the model listing:
```sh
curl localhost:8000/v1/models
{"object":"list","data":[{"id":"Qwen/Qwen3-0.6B","object":"object","created":1759458713,"owned_by":"nvidia"},{"id":"Qwen/Qwen2.5-VL-7B-Instruct","object":"object","created":1759458718,"owned_by":"nvidia"}]}
```
Then use the following request to test one of the deployed models:
```sh
curl localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "Qwen/Qwen3-0.6B",
"messages": [
{
"role": "user",
"content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
}
],
"stream": false,
"max_tokens": 30
}'
```
You can also benchmark the endpoint's performance with [GenAI-Perf](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/perf_analyzer/genai-perf/README.html).
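For example, a sketch of a GenAI-Perf run against the port-forwarded frontend is shown below; the exact flags vary between GenAI-Perf releases, so treat them as assumptions and check `genai-perf profile --help` for your version.
```sh
# Illustrative GenAI-Perf invocation (flags may differ between versions).
genai-perf profile \
  -m Qwen/Qwen3-0.6B \
  --endpoint-type chat \
  --url http://localhost:8000 \
  --streaming \
  --concurrency 2 \
  --request-count 50
```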
125 changes: 125 additions & 0 deletions examples/basics/kubernetes/shared_frontend/shared_frontend.yaml
@@ -0,0 +1,125 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
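# Shared model cache: all workers below mount this PVC at the Hugging Face
# cache path so checkpoint weights are downloaded once and reused.
# Note: ReadWriteOnce only allows pods on a single node to mount the volume;
# use ReadWriteMany (if your storage class supports it) when workers may be
# scheduled across nodes.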
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dynamo-model-cache
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Gi
---
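# Standalone frontend DGD. It is deployed to the reserved Dynamo namespace
# `dynamo`, which lets it discover and serve models from all Dynamo namespaces.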
apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
name: frontend
spec:
services:
Frontend:
componentType: frontend
dynamoNamespace: dynamo
replicas: 1
extraPodSpec:
mainContainer:
image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
---
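# First model: Qwen/Qwen3-0.6B, served by a single aggregated vLLM decode
# worker in the Dynamo namespace `vllm-agg`.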
apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
name: vllm-agg
spec:
services:
VllmDecodeWorker:
pvc:
create: false
name: dynamo-model-cache
mountPoint: /root/.cache/huggingface
envFromSecret: hf-token-secret
dynamoNamespace: vllm-agg
componentType: worker
replicas: 1
resources:
limits:
gpu: "1"
extraPodSpec:
mainContainer:
image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
workingDir: /workspace/components/backends/vllm
command:
- /bin/sh
- -c
args:
- python3 -m dynamo.vllm --model Qwen/Qwen3-0.6B
---
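# Second model: the multimodal Qwen/Qwen2.5-VL-7B-Instruct, split into an
# encode worker, a VLM worker, and a processor in the Dynamo namespace `agg-qwen`.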
apiVersion: nvidia.com/v1alpha1
kind: DynamoGraphDeployment
metadata:
name: agg-qwen
spec:
backendFramework: vllm
services:
EncodeWorker:
pvc:
create: false
name: dynamo-model-cache
mountPoint: /root/.cache/huggingface
envFromSecret: hf-token-secret
dynamoNamespace: agg-qwen
componentType: worker
replicas: 1
resources:
limits:
gpu: "1"
extraPodSpec:
mainContainer:
image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
workingDir: /workspace/examples/multimodal
command:
- /bin/sh
- -c
args:
- python3 components/encode_worker.py --model Qwen/Qwen2.5-VL-7B-Instruct
VLMWorker:
pvc:
create: false
name: dynamo-model-cache
mountPoint: /root/.cache/huggingface
envFromSecret: hf-token-secret
dynamoNamespace: agg-qwen
componentType: worker
replicas: 1
resources:
limits:
gpu: "1"
extraPodSpec:
mainContainer:
image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
workingDir: /workspace/examples/multimodal
command:
- /bin/sh
- -c
args:
- python3 components/worker.py --model Qwen/Qwen2.5-VL-7B-Instruct --worker-type prefill
Processor:
pvc:
create: false
name: dynamo-model-cache
mountPoint: /root/.cache/huggingface
envFromSecret: hf-token-secret
dynamoNamespace: agg-qwen
componentType: worker
replicas: 1
resources:
limits:
gpu: "1"
extraPodSpec:
mainContainer:
image: nvcr.io/nvidia/ai-dynamo/vllm-runtime:0.5.0
workingDir: /workspace/examples/multimodal
command:
- /bin/sh
- -c
args:
- 'python3 components/processor.py --model Qwen/Qwen2.5-VL-7B-Instruct --prompt-template "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|><prompt><|im_end|>\n<|im_start|>assistant\n"'