From f46827d037f5657827d219c3c7d4419b5c901764 Mon Sep 17 00:00:00 2001 From: Anish <80174047+athreesh@users.noreply.github.com> Date: Sun, 27 Jul 2025 22:02:02 -0700 Subject: [PATCH 01/11] Update to README.md reordered info, cleaned up instructions, added context Signed-off-by: Anish <80174047+athreesh@users.noreply.github.com> --- README.md | 97 ++++++++++++++++++++++++++++++------------------------- 1 file changed, 53 insertions(+), 44 deletions(-) diff --git a/README.md b/README.md index cb58211d115..1b8b3d98bd0 100644 --- a/README.md +++ b/README.md @@ -21,24 +21,21 @@ limitations under the License. [![Discord](https://dcbadge.limes.pink/api/server/D92uqZRjCZ?style=flat)](https://discord.gg/D92uqZRjCZ) [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ai-dynamo/dynamo) -| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** | +| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/dynamo/tree/main/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** | -### The Era of Multi-Node, Multi-GPU +# NVIDIA Dynamo -![GPU Evolution](./docs/images/frontpage-gpu-evolution.png) +High-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. +## The Era of Multi-GPU, Multi-Node -Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close. +![GPU Evolution](./docs/images/frontpage-gpu-evolution.png) ![Multi Node Multi-GPU topology](./docs/images/frontpage-gpu-vertical.png) +Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close. - -### Introducing NVIDIA Dynamo - -NVIDIA Dynamo is a high-throughput low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang or others) and captures LLM-specific capabilities such as: - -![Dynamo architecture](./docs/images/frontpage-architecture.png) +Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang or others) and captures LLM-specific capabilities such as: - **Disaggregated prefill & decode inference** – Maximizes GPU throughput and facilitates trade off between throughput and latency. 
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand @@ -46,30 +43,46 @@ NVIDIA Dynamo is a high-throughput low-latency inference framework designed for - **Accelerated data transfer** – Reduces inference response time using NIXL. - **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput -Built in Rust for performance and in Python for extensibility, Dynamo is fully open-source and driven by a transparent, OSS (Open Source Software) first development approach. +## Framework Support Matrix + +| Feature | vLLM | SGLang | TensorRT-LLM | +|---------|----------------------|----------------------------|----------------------------------------| +| [**Disaggregated Serving**](../../docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ | +| [**Conditional Disaggregation**](../../docs/architecture/disagg_serving.md#conditional-disaggregation) | ✅ | 🚧 | 🚧 | +| [**KV-Aware Routing**](../../docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ | +| [**SLA-Based Planner**](../../docs/architecture/sla_planner.md) | ✅ | ❌ | ❌ | +| [**Load Based Planner**](../../docs/architecture/load_planner.md) | ✅ | ❌ | ❌ | +| [**KVBM**](../../docs/architecture/kvbm_architecture.md) | 🚧 | ❌ | ❌ | +To learn more about each framework and their capabilities, check out each framework's README! +- **[vLLM](components/backends/vllm/README.md)** +- **[SGLang](components/backends/sglang/README.md)** +- **[TensorRT-LLM](components/backends/trtllm/README.md)** +Built in Rust for performance and in Python for extensibility, Dynamo is fully open-source and driven by a transparent, OSS (Open Source Software) first development approach. -### Installation +# Installation The following examples require a few system level packages. Recommended to use Ubuntu 24.04 with a x86_64 CPU. See [docs/support_matrix.md](docs/support_matrix.md) -1. Install etcd and nats - -To co-ordinate across the data center Dynamo relies on an etcd and nats cluster. To run locally these need to be available. +## 1. Initial setup -- [etcd](https://etcd.io/) can be run directly as `./etcd`. -- [nats](https://nats.io/) needs jetstream enabled: `nats-server -js`. - -The Dynamo team recommend the `uv` Python package manager, although anyway works. Install uv: +The Dynamo team recommends the `uv` Python package manager, although any way works. Install uv: ``` curl -LsSf https://astral.sh/uv/install.sh | sh ``` -2. Select an engine +### Install etcd and NATS (required) + +To coordinate across a data center, Dynamo relies on etcd and NATS. To run Dynamo locally, these need to be available. + +- [etcd](https://etcd.io/) can be run directly as `./etcd`. +- [nats](https://nats.io/) needs jetstream enabled: `nats-server -js`. + +## 2. Select an engine -We publish Python wheels specialized for each of our supported engines: vllm, sglang, llama.cpp and trtllm. The examples that follow use sglang, read on for other engines. +We publish Python wheels specialized for each of our supported engines: vllm, sglang, trtllm, and llama.cpp. The examples that follow use SGLang; continue reading for other engines. ``` uv venv venv @@ -77,13 +90,12 @@ source venv/bin/activate uv pip install pip # Choose one -uv pip install "ai-dynamo[sglang]" -uv pip install "ai-dynamo[vllm]" -uv pip install "ai-dynamo[trtllm]" -uv pip install "ai-dynamo[llama_cpp]" # CPU, see later for GPU +uv pip install "ai-dynamo[sglang]" #replace with [vllm], [trtllm], etc. ``` -### Running and Interacting with an LLM Locally +## 3. 
Run Dynamo + +### Running and Interacting with a LLM Locally You can run a model and interact with it locally using commands below. @@ -99,7 +111,7 @@ python -m dynamo.sglang.worker Qwen/Qwen3-4B Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ... ``` -If the model is not available locally it will be downloaded from HuggingFace and cached. +If the model is not available locally, it will be downloaded from HuggingFace and cached. You can also pass a local path: `python -m dynamo.sglang.worker --model-path ~/llms/Qwen3-0.6B` @@ -138,13 +150,15 @@ curl localhost:8080/v1/chat/completions -H "Content-Type: application/json" Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them. -### Engines +### Deploying Dynamo on Kubernetes + +Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy to Kubernetes. -In the introduction we installed the `sglang` engine. There are other options. +# Engines -All of these requires nats and etcd, as well as a frontend (`python -m dynamo.frontend [--interactive]`). +Dynamo is designed to be inference engine agnostic. To use any engine with Dynamo, NATS and etcd need to be installed, along with a Dynamo frontend (`python -m dynamo.frontend [--interactive]`). -# vllm +## vLLM ``` uv pip install ai-dynamo[vllm] @@ -155,11 +169,11 @@ Run the backend/worker like this: python -m dynamo.vllm --help ``` -vllm attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory pass `--context-length `. +vLLM attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory pass `--context-length `. To specify which GPUs to use set environment variable `CUDA_VISIBLE_DEVICES`. -# sglang +## SGLang ``` uv pip install ai-dynamo[sglang] @@ -172,9 +186,9 @@ python -m dynamo.sglang.worker --help You can pass any sglang flags directly to this worker, see https://docs.sglang.ai/backend/server_arguments.html . See there to use multiple GPUs. -# TRT-LLM +## TensorRT-LLM -It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for running TensorRT-LLM engine. +It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for running the TensorRT-LLM engine. > [!Note] > Ensure that you select a PyTorch container image version that matches the version of TensorRT-LLM you are using. @@ -184,7 +198,7 @@ It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/ > [!Important] > Launch container with the following additional settings `--shm-size=1g --ulimit memlock=-1` -## Install prerequites +### Install prerequites ``` # Optional step: Only required for Blackwell and Grace Hopper pip3 install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128 @@ -195,7 +209,7 @@ sudo apt-get -y install libopenmpi-dev > [!Tip] > You can learn more about these prequisites and known issues with TensorRT-LLM pip based installation [here](https://nvidia.github.io/TensorRT-LLM/installation/linux.html). 
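+To double-check that the CUDA-enabled PyTorch wheel from the optional step above is the one active in your environment, a quick informal check (not part of the official TensorRT-LLM instructions) is:
+```
+python3 -c "import torch; print(torch.__version__, torch.version.cuda)"
+```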
-## Install dynamo +### Now install Dynamo ``` uv pip install --upgrade pip setuptools && uv pip install ai-dynamo[trtllm] ``` @@ -207,7 +221,7 @@ python -m dynamo.trtllm --help To specify which GPUs to use set environment variable `CUDA_VISIBLE_DEVICES`. -# llama.cpp +## llama.cpp To install llama.cpp for CPU inference: ``` @@ -231,7 +245,7 @@ python -m dynamo.llama_cpp --model-path ~/llms/Qwen3-0.6B-Q8_0.gguf If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to dynamo-run to enable it. -### Local Development +# Developing Locally 1. Install libraries @@ -302,8 +316,3 @@ Remember that nats and etcd must be running (see earlier). Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`. If you use vscode or cursor, we have a .devcontainer folder built on [Microsofts Extension](https://code.visualstudio.com/docs/devcontainers/containers). For instructions see the [ReadMe](.devcontainer/README.md) for more details. - -### Deployment to Kubernetes - -Follow the [Quickstart Guide](docs/guides/dynamo_deploy/quickstart.md) to deploy to Kubernetes. - From 33be4f1944b237f41fd02de45e6dce443758c22f Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 09:55:29 -0700 Subject: [PATCH 02/11] added NATS/etcd to ReadME + created ReadME for components --- README.md | 7 ++- components/README.md | 114 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 120 insertions(+), 1 deletion(-) create mode 100644 components/README.md diff --git a/README.md b/README.md index 1b8b3d98bd0..ebad71cb288 100644 --- a/README.md +++ b/README.md @@ -79,6 +79,11 @@ To coordinate across a data center, Dynamo relies on etcd and NATS. To run Dynam - [etcd](https://etcd.io/) can be run directly as `./etcd`. - [nats](https://nats.io/) needs jetstream enabled: `nats-server -js`. + +To quickly setup etcd & NATS, you can also run: +``` +docker compose -f ./deploy/metrics/docker-compose.yml up +``` ## 2. Select an engine @@ -198,7 +203,7 @@ It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/ > [!Important] > Launch container with the following additional settings `--shm-size=1g --ulimit memlock=-1` -### Install prerequites +### Install prerequisites ``` # Optional step: Only required for Blackwell and Grace Hopper pip3 install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128 diff --git a/components/README.md b/components/README.md new file mode 100644 index 00000000000..5ad9000aefb --- /dev/null +++ b/components/README.md @@ -0,0 +1,114 @@ + + +# Dynamo Components + +This directory contains the core components that make up the Dynamo inference framework. Each component serves a specific role in the distributed LLM serving architecture, enabling high-throughput, low-latency inference across multiple nodes and GPUs. 
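+At a glance, the directory is organized roughly as follows (a simplified sketch; the per-component READMEs linked below are the authoritative reference):
+
+```
+components/
+├── backends/   # inference engine integrations (vLLM, SGLang, TensorRT-LLM)
+├── frontend/   # OpenAI-compatible HTTP server, pre-processor, auto-discovery
+├── router/     # KV-aware request routing
+├── planner/    # dynamic worker scaling
+└── metrics/    # metrics collection and visualization
+```
+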
+ +## Supported Inference Engines + +Dynamo supports multiple inference engines (with a focus on SGLang, vLLM, and TensorRT-LLM), each with their own deployment configurations and capabilities: + +- **[vLLM](backends/vllm/README.md)** - High-performance LLM inference with native KV cache events and NIXL-based transfer mechanisms +- **[SGLang](backends/sglang/README.md)** - Structured generation language framework with ZMQ-based communication +- **[TensorRT-LLM](backends/trtllm/README.md)** - NVIDIA's optimized LLM inference engine with TensorRT acceleration + +Each engine provides launch scripts for different deployment patterns in their respective `/launch` & `/deploy` directories: +- Aggregated serving +- Aggregated serving with KV routing +- Disaggregated serving +- Disaggregated serving with KV routing + +## Core Components + +### [Backends](backends/) + +The backends directory contains inference engine integrations and implementations, with a key focus on: + +- **vLLM** - Full-featured vLLM integration with disaggregated serving, KV-aware routing, and SLA-based planning +- **SGLang** - SGLang engine integration supporting disaggregated serving and KV-aware routing +- **TensorRT-LLM** - TensorRT-LLM integration with disaggregated serving capabilities + + +### [Frontend](frontend/) + +The frontend component provides the HTTP API layer and request processing: + +- **OpenAI-compatible HTTP server** - RESTful API endpoint for LLM inference requests +- **Pre-processor** - Handles request preprocessing and validation +- **Router** - Routes requests to appropriate workers based on load and KV cache state +- **Auto-discovery** - Automatically discovers and registers available workers + +### [Router](router/) + +A high-performance request router written in Rust that: + +- Routes incoming requests to optimal workers based on KV cache state +- Implements KV-aware routing to minimize cache misses +- Provides load balancing across multiple worker instances +- Supports both aggregated and disaggregated serving patterns + +### [Planner](planner/) + +The planner component monitors system state and dynamically adjusts worker allocation: + +- **Dynamic scaling** - Scales prefill/decode workers up and down based on metrics +- **Multiple backends** - Supports local (circus-based) and Kubernetes scaling +- **SLA-based planning** - Ensures performance targets are met +- **Load-based planning** - Optimizes resource utilization based on demand + +### [Metrics](metrics/) + +The metrics component collects, aggregates, and exposes system metrics: + +- **Prometheus-compatible endpoint** - Exposes metrics in standard Prometheus format +- **Real-time monitoring** - Collects statistics from workers and components +- **Visualization support** - Integrates with Grafana for dashboard creation +- **Push/Pull modes** - Supports both push and pull-based metric collection + +## Component Architecture + +``` +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Frontend │ │ Router │ │ Backends │ +│ │ │ │ │ │ +│ • HTTP Server │◄──►│ • KV Routing │◄──►│ • vLLM │ +│ • Pre-processor │ │ • Load Balance │ │ • SGLang │ +│ • Auto-discovery│ │ • Cache Aware │ │ • TensorRT-LLM │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ + │ │ │ + ▼ ▼ ▼ +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Planner │ │ Metrics │ │ Workers │ +│ │ │ │ │ │ +│ • Auto-scaling │ │ • Collection │ │ • Prefill │ +│ • SLA Planning │ │ • Aggregation │ │ • Decode │ +│ • Load Planning │ │ • Visualization │ │ • KV Cache │ 
+└─────────────────┘ └─────────────────┘ └─────────────────┘ +``` + +## Getting Started + +To get started with Dynamo components: + +1. **Choose an inference engine** from the supported backends +2. **Set up required services** (etcd and NATS) using Docker Compose +3. **Configure** your chosen engine using Python wheels or building an image +4. **Run deployment scripts** from the engine's launch directory +5. **Monitor performance** using the metrics component + +For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/). \ No newline at end of file From 51609784b6c6f987a37cece72c46993497da3365 Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 10:01:46 -0700 Subject: [PATCH 03/11] changes to root ReadME + added in ReadME for components --- README.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index ebad71cb288..ef931687043 100644 --- a/README.md +++ b/README.md @@ -155,9 +155,11 @@ curl localhost:8080/v1/chat/completions -H "Content-Type: application/json" Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them. -### Deploying Dynamo on Kubernetes +### Deploying Dynamo -Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy to Kubernetes. +Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy on Kubernetes. +Check out [Backends](components/backends) to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.) +Run some [Examples](examples) to learn about building components in Dynamo and exploring various integrations. # Engines @@ -248,7 +250,7 @@ Download a GGUF and run the engine like this: python -m dynamo.llama_cpp --model-path ~/llms/Qwen3-0.6B-Q8_0.gguf ``` -If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to dynamo-run to enable it. +If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to enable it. # Developing Locally From c4df262e920f5e0e0388783405f1dd3556ebd8f7 Mon Sep 17 00:00:00 2001 From: Anish <80174047+athreesh@users.noreply.github.com> Date: Mon, 28 Jul 2025 10:03:24 -0700 Subject: [PATCH 04/11] Update components README.md Signed-off-by: Anish <80174047+athreesh@users.noreply.github.com> --- components/README.md | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/components/README.md b/components/README.md index 5ad9000aefb..17b540938e2 100644 --- a/components/README.md +++ b/components/README.md @@ -27,11 +27,7 @@ Dynamo supports multiple inference engines (with a focus on SGLang, vLLM, and Te - **[SGLang](backends/sglang/README.md)** - Structured generation language framework with ZMQ-based communication - **[TensorRT-LLM](backends/trtllm/README.md)** - NVIDIA's optimized LLM inference engine with TensorRT acceleration -Each engine provides launch scripts for different deployment patterns in their respective `/launch` & `/deploy` directories: -- Aggregated serving -- Aggregated serving with KV routing -- Disaggregated serving -- Disaggregated serving with KV routing +Each engine provides launch scripts for different deployment patterns in their respective `/launch` & `/deploy` directories. ## Core Components @@ -111,4 +107,4 @@ To get started with Dynamo components: 4. 
**Run deployment scripts** from the engine's launch directory 5. **Monitor performance** using the metrics component -For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/). \ No newline at end of file +For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/). From 254e7111e3879fc1e00856e49148d3145108a940 Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 12:51:54 -0700 Subject: [PATCH 05/11] fix docker compose NATS command --- README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index ef931687043..6d8fa68e7e7 100644 --- a/README.md +++ b/README.md @@ -82,7 +82,8 @@ To coordinate across a data center, Dynamo relies on etcd and NATS. To run Dynam To quickly setup etcd & NATS, you can also run: ``` -docker compose -f ./deploy/metrics/docker-compose.yml up +# At the root of the repository: +docker compose -f deploy/docker-compose.yml up -d ``` ## 2. Select an engine From ce18586fcb9016bf463821ce75d2d6d6c7ce23d5 Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 13:15:15 -0700 Subject: [PATCH 06/11] address feedback from Itay, smaller images + updates to progress --- README.md | 82 ++++++++++++++----------------------------------------- 1 file changed, 21 insertions(+), 61 deletions(-) diff --git a/README.md b/README.md index 6d8fa68e7e7..a987aaa6c3a 100644 --- a/README.md +++ b/README.md @@ -29,9 +29,9 @@ High-throughput, low-latency inference framework designed for serving generative ## The Era of Multi-GPU, Multi-Node -![GPU Evolution](./docs/images/frontpage-gpu-evolution.png) - -![Multi Node Multi-GPU topology](./docs/images/frontpage-gpu-vertical.png) +

+<div align="center">
+  <img src="./docs/images/frontpage-gpu-vertical.png" alt="Multi Node Multi-GPU topology" />
+</div>
+
Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close. @@ -43,6 +43,10 @@ Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLa - **Accelerated data transfer** – Reduces inference response time using NIXL. - **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput +

+<div align="center">
+  <img src="./docs/images/frontpage-architecture.png" alt="Dynamo architecture" />
+</div>
+ ## Framework Support Matrix | Feature | vLLM | SGLang | TensorRT-LLM | @@ -50,9 +54,9 @@ Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLa | [**Disaggregated Serving**](../../docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ | | [**Conditional Disaggregation**](../../docs/architecture/disagg_serving.md#conditional-disaggregation) | ✅ | 🚧 | 🚧 | | [**KV-Aware Routing**](../../docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ | -| [**SLA-Based Planner**](../../docs/architecture/sla_planner.md) | ✅ | ❌ | ❌ | -| [**Load Based Planner**](../../docs/architecture/load_planner.md) | ✅ | ❌ | ❌ | -| [**KVBM**](../../docs/architecture/kvbm_architecture.md) | 🚧 | ❌ | ❌ | +| [**SLA-Based Planner**](../../docs/architecture/sla_planner.md) | ✅ | 🚧 | 🚧 | +| [**Load Based Planner**](../../docs/architecture/load_planner.md) | ✅ | 🚧 | 🚧 | +| [**KVBM**](../../docs/architecture/kvbm_architecture.md) | 🚧 | 🚧 | 🚧 | To learn more about each framework and their capabilities, check out each framework's README! - **[vLLM](components/backends/vllm/README.md)** @@ -101,26 +105,6 @@ uv pip install "ai-dynamo[sglang]" #replace with [vllm], [trtllm], etc. ## 3. Run Dynamo -### Running and Interacting with a LLM Locally - -You can run a model and interact with it locally using commands below. - -#### Example Commands - -``` -python -m dynamo.frontend --interactive -python -m dynamo.sglang.worker Qwen/Qwen3-4B -``` - -``` -✔ User · Hello, how are you? -Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ... -``` - -If the model is not available locally, it will be downloaded from HuggingFace and cached. - -You can also pass a local path: `python -m dynamo.sglang.worker --model-path ~/llms/Qwen3-0.6B` - ### Running an LLM API server Dynamo provides a simple way to spin up a local set of inference components including: @@ -133,7 +117,7 @@ Dynamo provides a simple way to spin up a local set of inference components incl # Start an OpenAI compatible HTTP server, a pre-processor (prompt templating and tokenization) and a router: python -m dynamo.frontend [--http-port 8080] -# Start the vllm engine, connecting to nats and etcd to receive requests. You can run several of these, +# Start the SGLang engine, connecting to NATS and etcd to receive requests. You can run several of these, # both for the same model and for multiple models. The frontend node will discover them. python -m dynamo.sglang.worker deepseek-ai/DeepSeek-R1-Distill-Llama-8B ``` @@ -158,9 +142,9 @@ Rerun with `curl -N` and change `stream` in the request to `true` to get the res ### Deploying Dynamo -Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy on Kubernetes. -Check out [Backends](components/backends) to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.) -Run some [Examples](examples) to learn about building components in Dynamo and exploring various integrations. +- Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy on Kubernetes. +- Check out [Backends](components/backends) to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.) +- Run some [Examples](examples) to learn about building components in Dynamo and exploring various integrations. 
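+Whichever route you take, a quick way to confirm the frontend is reachable is to list the registered models. This assumes the OpenAI-compatible server from the steps above is listening on localhost:8080 and exposes the standard `/v1/models` endpoint:
+
+```
+curl localhost:8080/v1/models
+```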
# Engines @@ -189,7 +173,7 @@ uv pip install ai-dynamo[sglang] Run the backend/worker like this: ``` -python -m dynamo.sglang.worker --help +python -m dynamo.sglang.worker --help #Note the '.worker' in the module path for SGLang ``` You can pass any sglang flags directly to this worker, see https://docs.sglang.ai/backend/server_arguments.html . See there to use multiple GPUs. @@ -229,33 +213,9 @@ python -m dynamo.trtllm --help To specify which GPUs to use set environment variable `CUDA_VISIBLE_DEVICES`. -## llama.cpp - -To install llama.cpp for CPU inference: -``` -uv pip install ai-dynamo[llama_cpp] -``` - -To build llama.cpp for CUDA: -``` -pip install llama-cpp-python -C cmake.args="-DGGML_CUDA=on" -uv pip install uvloop ai-dynamo -``` - -At time of writing the `uv pip` version does not support that syntax, so use `pip` directly inside the venv. - -To build llama.cpp for other accelerators see https://pypi.org/project/llama-cpp-python/ . - -Download a GGUF and run the engine like this: -``` -python -m dynamo.llama_cpp --model-path ~/llms/Qwen3-0.6B-Q8_0.gguf -``` - -If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to enable it. - # Developing Locally -1. Install libraries +## 1. Install libraries **Ubuntu:** ``` @@ -279,21 +239,21 @@ xcrun -sdk macosx metal If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly. -2. Install Rust +## 2. Install Rust ``` curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh source $HOME/.cargo/env ``` -3. Create a Python virtual env: +## 3. Create a Python virtual env: ``` uv venv dynamo source dynamo/bin/activate ``` -4. Install build tools +## 4. Install build tools ``` uv pip install pip maturin @@ -301,14 +261,14 @@ uv pip install pip maturin [Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool. -5. Build the Rust bindings +## 5. Build the Rust bindings ``` cd lib/bindings/python maturin develop --uv ``` -6. Install the wheel +## 6. 
Install the wheel ``` cd $PROJECT_ROOT From 5bf38d05b0ce7916439d81511f993e1540d061e5 Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 16:32:16 -0700 Subject: [PATCH 07/11] neal adjustments --- README.md | 2 +- components/README.md | 21 --------------------- 2 files changed, 1 insertion(+), 22 deletions(-) diff --git a/README.md b/README.md index a987aaa6c3a..5aad4c3489a 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLa | Feature | vLLM | SGLang | TensorRT-LLM | |---------|----------------------|----------------------------|----------------------------------------| | [**Disaggregated Serving**](../../docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ | -| [**Conditional Disaggregation**](../../docs/architecture/disagg_serving.md#conditional-disaggregation) | ✅ | 🚧 | 🚧 | +| [**Conditional Disaggregation**](../../docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 | | [**KV-Aware Routing**](../../docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ | | [**SLA-Based Planner**](../../docs/architecture/sla_planner.md) | ✅ | 🚧 | 🚧 | | [**Load Based Planner**](../../docs/architecture/load_planner.md) | ✅ | 🚧 | 🚧 | diff --git a/components/README.md b/components/README.md index 17b540938e2..95565f637a0 100644 --- a/components/README.md +++ b/components/README.md @@ -76,27 +76,6 @@ The metrics component collects, aggregates, and exposes system metrics: - **Visualization support** - Integrates with Grafana for dashboard creation - **Push/Pull modes** - Supports both push and pull-based metric collection -## Component Architecture - -``` -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Frontend │ │ Router │ │ Backends │ -│ │ │ │ │ │ -│ • HTTP Server │◄──►│ • KV Routing │◄──►│ • vLLM │ -│ • Pre-processor │ │ • Load Balance │ │ • SGLang │ -│ • Auto-discovery│ │ • Cache Aware │ │ • TensorRT-LLM │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ - │ │ │ - ▼ ▼ ▼ -┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ -│ Planner │ │ Metrics │ │ Workers │ -│ │ │ │ │ │ -│ • Auto-scaling │ │ • Collection │ │ • Prefill │ -│ • SLA Planning │ │ • Aggregation │ │ • Decode │ -│ • Load Planning │ │ • Visualization │ │ • KV Cache │ -└─────────────────┘ └─────────────────┘ └─────────────────┘ -``` - ## Getting Started To get started with Dynamo components: From 2f63f1fdef2a54e417e9d9b6be29bc043417e594 Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 20:34:16 -0700 Subject: [PATCH 08/11] removing metrics highlight per ryan suggestion --- components/README.md | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/components/README.md b/components/README.md index 95565f637a0..dc13b05cd37 100644 --- a/components/README.md +++ b/components/README.md @@ -64,18 +64,9 @@ The planner component monitors system state and dynamically adjusts worker alloc - **Dynamic scaling** - Scales prefill/decode workers up and down based on metrics - **Multiple backends** - Supports local (circus-based) and Kubernetes scaling -- **SLA-based planning** - Ensures performance targets are met +- **SLA-based planning** - Ensures inference performance targets are met - **Load-based planning** - Optimizes resource utilization based on demand -### [Metrics](metrics/) - -The metrics component collects, aggregates, and exposes system metrics: - -- **Prometheus-compatible endpoint** - Exposes metrics in standard Prometheus format -- **Real-time monitoring** - 
Collects statistics from workers and components -- **Visualization support** - Integrates with Grafana for dashboard creation -- **Push/Pull modes** - Supports both push and pull-based metric collection - ## Getting Started To get started with Dynamo components: From 54957c86c3739861025b7f20b7acb7235df319c0 Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 20:35:21 -0700 Subject: [PATCH 09/11] ryan suggestion on header --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 5aad4c3489a..afbc4027927 100644 --- a/README.md +++ b/README.md @@ -201,7 +201,7 @@ sudo apt-get -y install libopenmpi-dev > [!Tip] > You can learn more about these prequisites and known issues with TensorRT-LLM pip based installation [here](https://nvidia.github.io/TensorRT-LLM/installation/linux.html). -### Now install Dynamo +### After installing the pre-requisites above, install Dynamo ``` uv pip install --upgrade pip setuptools && uv pip install ai-dynamo[trtllm] ``` From f828878f321f46c7a28e26ca9ac3f638d028d1fe Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 20:37:09 -0700 Subject: [PATCH 10/11] fix framework matrix hyperlinks --- README.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index afbc4027927..f0e872e3433 100644 --- a/README.md +++ b/README.md @@ -51,12 +51,12 @@ Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLa | Feature | vLLM | SGLang | TensorRT-LLM | |---------|----------------------|----------------------------|----------------------------------------| -| [**Disaggregated Serving**](../../docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ | -| [**Conditional Disaggregation**](../../docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 | -| [**KV-Aware Routing**](../../docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ | -| [**SLA-Based Planner**](../../docs/architecture/sla_planner.md) | ✅ | 🚧 | 🚧 | -| [**Load Based Planner**](../../docs/architecture/load_planner.md) | ✅ | 🚧 | 🚧 | -| [**KVBM**](../../docs/architecture/kvbm_architecture.md) | 🚧 | 🚧 | 🚧 | +| [**Disaggregated Serving**](/docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ | +| [**Conditional Disaggregation**](/docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 | +| [**KV-Aware Routing**](/docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ | +| [**SLA-Based Planner**](/docs/architecture/sla_planner.md) | ✅ | 🚧 | 🚧 | +| [**Load Based Planner**](/docs/architecture/load_planner.md) | ✅ | 🚧 | 🚧 | +| [**KVBM**](/docs/architecture/kvbm_architecture.md) | 🚧 | 🚧 | 🚧 | To learn more about each framework and their capabilities, check out each framework's README! - **[vLLM](components/backends/vllm/README.md)** From 4b4ea9c4f113cc867a1f612bfc063baa5221b4ce Mon Sep 17 00:00:00 2001 From: athreesh Date: Mon, 28 Jul 2025 21:08:21 -0700 Subject: [PATCH 11/11] fix: Remove trailing whitespace (pre-commit hook) --- README.md | 2 +- components/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index f0e872e3433..389221f3ae0 100644 --- a/README.md +++ b/README.md @@ -89,7 +89,7 @@ To quickly setup etcd & NATS, you can also run: # At the root of the repository: docker compose -f deploy/docker-compose.yml up -d ``` - + ## 2. Select an engine We publish Python wheels specialized for each of our supported engines: vllm, sglang, trtllm, and llama.cpp. 
The examples that follow use SGLang; continue reading for other engines. diff --git a/components/README.md b/components/README.md index dc13b05cd37..2c5677eae75 100644 --- a/components/README.md +++ b/components/README.md @@ -77,4 +77,4 @@ To get started with Dynamo components: 4. **Run deployment scripts** from the engine's launch directory 5. **Monitor performance** using the metrics component -For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/). +For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/).