161 changes: 69 additions & 92 deletions README.md
[![Discord](https://dcbadge.limes.pink/api/server/D92uqZRjCZ?style=flat)](https://discord.gg/D92uqZRjCZ)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ai-dynamo/dynamo)

| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Documentation](https://docs.nvidia.com/dynamo/latest/index.html)** | **[Examples](https://github.com/ai-dynamo/dynamo/tree/main/examples)** | **[Design Proposals](https://github.com/ai-dynamo/enhancements)** |

# NVIDIA Dynamo

High-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments.

## The Era of Multi-GPU, Multi-Node


<p align="center">
<img src="./docs/images/frontpage-gpu-vertical.png" alt="Multi Node Multi-GPU topology" width="600" />
</p>

Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.

### Introducing NVIDIA Dynamo

Dynamo is designed to be inference engine agnostic (supports TRT-LLM, vLLM, SGLang or others) and captures LLM-specific capabilities such as:

- **Disaggregated prefill & decode inference** – Maximizes GPU throughput and facilitates trade-offs between throughput and latency.
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand.
- **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation.
- **Accelerated data transfer** – Reduces inference response time using NIXL.
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput.

<p align="center">
<img src="./docs/images/frontpage-architecture.png" alt="Dynamo architecture" width="600" />
</p>

## Framework Support Matrix

| Feature | vLLM | SGLang | TensorRT-LLM |
|---------|----------------------|----------------------------|----------------------------------------|
| [**Disaggregated Serving**](/docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ |
| [**Conditional Disaggregation**](/docs/architecture/disagg_serving.md#conditional-disaggregation) | 🚧 | 🚧 | 🚧 |
| [**KV-Aware Routing**](/docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ |
| [**SLA-Based Planner**](/docs/architecture/sla_planner.md) | ✅ | 🚧 | 🚧 |
| [**Load Based Planner**](/docs/architecture/load_planner.md) | ✅ | 🚧 | 🚧 |
| [**KVBM**](/docs/architecture/kvbm_architecture.md) | 🚧 | 🚧 | 🚧 |

To learn more about each framework and its capabilities, check out each framework's README!
- **[vLLM](components/backends/vllm/README.md)**
- **[SGLang](components/backends/sglang/README.md)**
- **[TensorRT-LLM](components/backends/trtllm/README.md)**

Built in Rust for performance and in Python for extensibility, Dynamo is fully open-source and driven by a transparent, OSS (Open Source Software) first development approach.

# Installation

The following examples require a few system-level packages.
We recommend Ubuntu 24.04 with an x86_64 CPU. See [docs/support_matrix.md](docs/support_matrix.md)

## 1. Initial setup

The Dynamo team recommends the `uv` Python package manager, although any Python package manager will work. Install uv:
```
curl -LsSf https://astral.sh/uv/install.sh | sh
```

### Install etcd and NATS (required)

To coordinate across a data center, Dynamo relies on etcd and NATS. To run Dynamo locally, these need to be available.

- [etcd](https://etcd.io/) can be run directly as `./etcd`.
- [NATS](https://nats.io/) needs JetStream enabled: `nats-server -js`.

To quickly set up etcd and NATS, you can also run:
```
# At the root of the repository:
docker compose -f deploy/docker-compose.yml up -d
```
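If you want to confirm that both services are up before continuing, a quick check along these lines works (this assumes the compose file exposes the default ports: 2379 for the etcd client API and 8222 for NATS monitoring):
```
# Expect {"health":"true",...} from etcd and a 200 response from NATS monitoring:
curl -s http://localhost:2379/health
curl -s http://localhost:8222/healthz
```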

## 2. Select an engine

We publish Python wheels specialized for each of our supported engines: vllm, sglang, trtllm, and llama.cpp. The examples that follow use SGLang; continue reading for other engines.

```
uv venv venv
source venv/bin/activate
uv pip install pip

# Choose one
uv pip install "ai-dynamo[sglang]"
uv pip install "ai-dynamo[vllm]"
uv pip install "ai-dynamo[trtllm]"
uv pip install "ai-dynamo[llama_cpp]" # CPU, see later for GPU
```

### Running and Interacting with an LLM Locally

You can run a model and interact with it locally using the commands below.

#### Example Commands

uv pip install "ai-dynamo[sglang]" #replace with [vllm], [trtllm], etc.
```
python -m dynamo.frontend --interactive
python -m dynamo.sglang.worker Qwen/Qwen3-4B
```

```
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```

If the model is not available locally, it will be downloaded from Hugging Face and cached.

You can also pass a local path: `python -m dynamo.sglang.worker --model-path ~/llms/Qwen3-0.6B`

## 3. Run Dynamo

### Running an LLM API server

Dynamo provides a simple way to spin up a local set of inference components including:

```
# Start an OpenAI compatible HTTP server, a pre-processor (prompt templating and tokenization) and a router:
python -m dynamo.frontend [--http-port 8080]

# Start the SGLang engine, connecting to NATS and etcd to receive requests. You can run several of these,
# both for the same model and for multiple models. The frontend node will discover them.
python -m dynamo.sglang.worker deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```
Once the frontend and a worker are running, send requests to the OpenAI compatible endpoint at `localhost:8080/v1/chat/completions` with `Content-Type: application/json`.
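For example, a minimal non-streaming request might look like the sketch below; the JSON body follows the standard OpenAI chat completions schema, and the model name must match the worker you started:
```
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": false,
    "max_tokens": 300
  }'
```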

Rerun with `curl -N` and change `stream` in the request to `true` to get the responses as soon as the engine issues them.

### Deploying Dynamo

- Follow the [Quickstart Guide](docs/guides/dynamo_deploy/README.md) to deploy on Kubernetes.
- Check out [Backends](components/backends) to deploy various workflow configurations (e.g. SGLang with router, vLLM with disaggregated serving, etc.)
- Run some [Examples](examples) to learn about building components in Dynamo and explore various integrations.

# Engines

Dynamo is designed to be inference engine agnostic. To use any engine with Dynamo, NATS and etcd need to be installed, along with a Dynamo frontend (`python -m dynamo.frontend [--interactive]`).

## vLLM

```
uv pip install ai-dynamo[vllm]
```

Run the backend/worker like this:
```
python -m dynamo.vllm --help
```

vLLM attempts to allocate enough KV cache for the full context length at startup. If that does not fit in your available memory, pass `--context-length <value>`.

To specify which GPUs to use, set the environment variable `CUDA_VISIBLE_DEVICES`.
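Putting those two notes together, a launch might look like this sketch (the placeholder stands for your model and any other vLLM flags; check `python -m dynamo.vllm --help` for the exact argument names):
```
# Pin the worker to GPU 0 and cap KV cache allocation to a 16k context window:
CUDA_VISIBLE_DEVICES=0 python -m dynamo.vllm --context-length 16384 <model-and-other-flags>
```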

## SGLang

```
uv pip install ai-dynamo[sglang]
```

Run the backend/worker like this:
```
python -m dynamo.sglang.worker --help  # Note the '.worker' in the module path for SGLang
```

You can pass any SGLang flags directly to this worker; see https://docs.sglang.ai/backend/server_arguments.html for the full list, including the flags for using multiple GPUs.
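For example, a two-GPU tensor-parallel launch might look like this sketch (`--tp` is SGLang's tensor-parallel flag; the model name is just an example):
```
# Shard the model across two GPUs via SGLang's tensor-parallel flag:
python -m dynamo.sglang.worker Qwen/Qwen3-4B --tp 2
```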

## TensorRT-LLM

It is recommended to use [NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch) for running the TensorRT-LLM engine.

> [!Note]
> Ensure that you select a PyTorch container image version that matches the version of TensorRT-LLM you are using.

> [!Important]
> Launch the container with the following additional settings: `--shm-size=1g --ulimit memlock=-1`

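A container launch might look like the following sketch (the `xx.yy` tag is a placeholder; pick the release that matches your TensorRT-LLM version, as noted above). `--network host` is simply a convenient way to let the worker reach the locally running NATS and etcd:
```
docker run --rm -it --gpus all --network host \
  --shm-size=1g --ulimit memlock=-1 \
  nvcr.io/nvidia/pytorch:xx.yy-py3
```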
### Install prerequisites
```
# Optional step: Only required for Blackwell and Grace Hopper
pip3 install torch==2.7.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
sudo apt-get -y install libopenmpi-dev
```
> [!Tip]
> You can learn more about these prerequisites and known issues with the pip-based installation of TensorRT-LLM [here](https://nvidia.github.io/TensorRT-LLM/installation/linux.html).

### After installing the prerequisites above, install Dynamo
```
uv pip install --upgrade pip setuptools && uv pip install ai-dynamo[trtllm]
```
Expand All @@ -207,33 +213,9 @@ python -m dynamo.trtllm --help

To specify which GPUs to use set environment variable `CUDA_VISIBLE_DEVICES`.

## llama.cpp

To install llama.cpp for CPU inference:
```
uv pip install ai-dynamo[llama_cpp]
```

To build llama.cpp for CUDA:
```
pip install llama-cpp-python -C cmake.args="-DGGML_CUDA=on"
uv pip install uvloop ai-dynamo
```

At the time of writing, the `uv pip` version does not support that syntax, so use `pip` directly inside the venv.

To build llama.cpp for other accelerators see https://pypi.org/project/llama-cpp-python/ .

Download a GGUF and run the engine like this:
```
python -m dynamo.llama_cpp --model-path ~/llms/Qwen3-0.6B-Q8_0.gguf
```
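If you need a GGUF file to test with, one way to fetch it is via the Hugging Face CLI. The repository and file names below are illustrative; any GGUF you have works:
```
# Illustrative: download a quantized GGUF from the Hugging Face Hub
pip install -U "huggingface_hub[cli]"
huggingface-cli download Qwen/Qwen3-0.6B-GGUF Qwen3-0.6B-Q8_0.gguf --local-dir ~/llms
```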

If you have multiple GPUs, llama.cpp does automatic tensor parallelism. You do not need to pass any extra flags to the worker to enable it.

# Developing Locally

## 1. Install libraries

**Ubuntu:** install the required system-level packages (see [docs/support_matrix.md](docs/support_matrix.md)).

**macOS:** check that the Metal toolchain is available:

```
xcrun -sdk macosx metal
```
If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.


## 2. Install Rust

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```

## 3. Create a Python virtual env:

```
uv venv dynamo
source dynamo/bin/activate
```

## 4. Install build tools

```
uv pip install pip maturin
```

[Maturin](https://github.com/PyO3/maturin) is the Rust<->Python bindings build tool.

## 5. Build the Rust bindings

```
cd lib/bindings/python
maturin develop --uv
```

## 6. Install the wheel

```
cd $PROJECT_ROOT
```

Remember that NATS and etcd must be running (see earlier).
Set the environment variable `DYN_LOG` to adjust the logging level; for example, `export DYN_LOG=debug`. It has the same syntax as `RUST_LOG`.
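Because the syntax matches `RUST_LOG`, per-target filtering should work too; a sketch (the target name here is illustrative):
```
# Global "info" level, with verbose output for a single target:
export DYN_LOG=info,dynamo_runtime=trace
```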

If you use VS Code or Cursor, we have a `.devcontainer` folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [README](.devcontainer/README.md) for instructions.


80 changes: 80 additions & 0 deletions components/README.md
<!--
SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

https://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Dynamo Components

This directory contains the core components that make up the Dynamo inference framework. Each component serves a specific role in the distributed LLM serving architecture, enabling high-throughput, low-latency inference across multiple nodes and GPUs.

## Supported Inference Engines

Dynamo supports multiple inference engines (with a focus on SGLang, vLLM, and TensorRT-LLM), each with its own deployment configurations and capabilities:

- **[vLLM](backends/vllm/README.md)** - High-performance LLM inference with native KV cache events and NIXL-based transfer mechanisms
- **[SGLang](backends/sglang/README.md)** - Structured generation language framework with ZMQ-based communication
- **[TensorRT-LLM](backends/trtllm/README.md)** - NVIDIA's optimized LLM inference engine with TensorRT acceleration

Each engine provides launch scripts for different deployment patterns in its respective `/launch` & `/deploy` directories.

## Core Components

### [Backends](backends/)

The backends directory contains inference engine integrations and implementations, with a key focus on:

- **vLLM** - Full-featured vLLM integration with disaggregated serving, KV-aware routing, and SLA-based planning
- **SGLang** - SGLang engine integration supporting disaggregated serving and KV-aware routing
- **TensorRT-LLM** - TensorRT-LLM integration with disaggregated serving capabilities


### [Frontend](frontend/)

The frontend component provides the HTTP API layer and request processing:

- **OpenAI-compatible HTTP server** - RESTful API endpoint for LLM inference requests (see the sketch after this list)
- **Pre-processor** - Handles request preprocessing and validation
- **Router** - Routes requests to appropriate workers based on load and KV cache state
- **Auto-discovery** - Automatically discovers and registers available workers

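As a quick way to see the frontend and auto-discovery working together, something along the following lines should work once a worker is attached. This assumes the frontend exposes the standard OpenAI-compatible `/v1/models` route on the port used in the main README:
```
# Terminal 1: start the HTTP frontend
python -m dynamo.frontend --http-port 8080

# Terminal 2: ask the frontend which models workers have registered
curl localhost:8080/v1/models
```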
### [Router](router/)

A high-performance request router written in Rust that:

- Routes incoming requests to optimal workers based on KV cache state
- Implements KV-aware routing to minimize cache misses
- Provides load balancing across multiple worker instances
- Supports both aggregated and disaggregated serving patterns

### [Planner](planner/)

The planner component monitors system state and dynamically adjusts worker allocation:

- **Dynamic scaling** - Scales prefill/decode workers up and down based on metrics
- **Multiple backends** - Supports local (circus-based) and Kubernetes scaling
- **SLA-based planning** - Ensures inference performance targets are met
- **Load-based planning** - Optimizes resource utilization based on demand

## Getting Started

To get started with Dynamo components:

1. **Choose an inference engine** from the supported backends
2. **Set up required services** (etcd and NATS) using Docker Compose (see the sketch below)
3. **Configure** your chosen engine using Python wheels or building an image
4. **Run deployment scripts** from the engine's launch directory
5. **Monitor performance** using the metrics component

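As a concrete starting point, steps 2 and 3 look roughly like this when run from the repository root. The compose file path and the `ai-dynamo[sglang]` extra mirror the main README; substitute the engine you chose:
```
# Step 2: start etcd and NATS
docker compose -f deploy/docker-compose.yml up -d

# Step 3: install the wheel for your chosen engine
uv pip install "ai-dynamo[sglang]"
```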
For detailed instructions, see the README files in each component directory and the main [Dynamo documentation](../../docs/).