[![Discord](https://dcbadge.limes.pink/api/server/D92uqZRjCZ?style=flat)](https://discord.gg/D92uqZRjCZ)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/ai-dynamo/dynamo)

<p align="center">
<a href="https://github.com/ai-dynamo/dynamo/issues/762"><b>Roadmap</b></a> &nbsp;|&nbsp;
<a href="https://docs.nvidia.com/dynamo/latest/index.html"><b>Documentation</b></a> &nbsp;|&nbsp;
<a href="https://github.com/ai-dynamo/examples"><b>Examples</b></a> &nbsp;|&nbsp;
<a href="https://github.com/ai-dynamo/enhancements"><b>Design Proposals</b></a>
</p>

## NVIDIA Dynamo

**High-throughput, low-latency inference framework for serving generative AI and reasoning models in multi-node distributed environments.**

Large language models are quickly outgrowing the memory and compute budget of any single GPU. Tensor-parallelism solves the capacity problem by spreading each layer across many GPUs—and sometimes many servers—but it creates a new one: how do you coordinate those shards, route requests, and share KV cache fast enough to feel like one accelerator? This orchestration gap is exactly what NVIDIA Dynamo is built to close.

![Multi Node Multi-GPU topology](./docs/images/frontpage-gpu-vertical.png)



<p align="center">
<img src="./docs/images/frontpage-architecture.png" alt="Dynamo architecture" width="600"/>
</p>

NVIDIA Dynamo is designed to be inference engine agnostic and captures LLM-specific capabilities such as:

- **Disaggregated prefill & decode inference** – Maximizes GPU throughput and enables trading off throughput against latency
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand
- **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation
- **Accelerated data transfer** – Reduces inference response time using NIXL
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput

Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.

## Framework Support Matrix

| Feature | vLLM | SGLang | TensorRT-LLM |
|---------|------|--------|--------------|
| [**Disaggregated Serving**](docs/architecture/disagg_serving.md) | ✅ | ✅ | ✅ |
| [**Conditional Disaggregation**](docs/architecture/disagg_serving.md#conditional-disaggregation) | ✅ | 🚧 | 🚧 |
| [**KV-Aware Routing**](docs/architecture/kv_cache_routing.md) | ✅ | ✅ | ✅ |
| [**SLA-Based Planner**](docs/architecture/sla_planner.md) | ✅ | ❌ | ❌ |
| [**Load Based Planner**](docs/architecture/load_planner.md) | ✅ | ❌ | ❌ |
| [**KVBM**](docs/architecture/kvbm_architecture.md) | 🚧 | ❌ | ❌ |
| **Kubernetes Deployment** | ✅ | 🚧 | 🚧 |

To learn more about each framework and its capabilities, check out the framework READMEs:

- **[vLLM](examples/llm/README.md)**
- **[SGLang](examples/sglang/README.md)**
- **[TensorRT-LLM](examples/tensorrt_llm/README.md)**

## Deployment Architectures

### Aggregated Serving
Single-instance deployment where both prefill and decode are handled by the same worker.

```
+------+      +-----------+      +------------------+
| HTTP |----->| processor |----->|      Worker      |
|      |<-----|           |<-----|    (Prefill +    |
+------+      +-----------+      |     Decode)      |
                                 +------------------+
```


**Best for:** Small to medium workloads, simple deployment
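
For a quick smoke test, you can stand up an aggregated deployment from one of the pre-configured examples and query its OpenAI-compatible endpoint. The sketch below uses the SGLang example from this repository; the model name is illustrative, so adjust it to whatever your launch script actually serves:

```bash
# Start the Dynamo distributed runtime services used by the examples
docker compose -f deploy/metrics/docker-compose.yml up -d

# Launch an aggregated (prefill + decode in one worker) example
cd examples/sglang
./launch/agg.sh

# From another terminal, send an OpenAI-style chat completion request
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": false,
    "max_tokens": 300
  }' | jq
```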

### Disaggregated Serving
Distributed deployment where prefill and decode are handled by separate, independently scalable workers.

```
+------+      +-----------+      +------------------+     +---------------+
| HTTP |----->| processor |----->|  Decode Worker   |<--->|    Prefill    |
|      |<-----|           |<-----|                  |     |    Worker     |
+------+      +-----------+      +------------------+     +---------------+
                                          |
                                          v
                                 +------------------+
                                 |  Prefill Queue   |
                                 +------------------+
```

**Best for:** High throughput, independent scaling, optimized hardware utilization
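
A minimal sketch of launching this mode, assuming a `disagg.sh` script analogous to the `agg.sh` used above exists in the example directory (check the framework example's README for the exact script name):

```bash
# Launch prefill and decode as separate, independently scalable workers.
# NOTE: launch/disagg.sh is an assumed name, mirroring launch/agg.sh;
# see the example's README for the script your framework provides.
cd examples/sglang
./launch/disagg.sh
```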

### KV-Aware Routing
Intelligent request routing based on KV cache hit rates across workers.

```
+------+      +-----------+      +------------------+     +------------------+
| HTTP |----->| processor |----->|    KV Router     |---->|     Worker 1     |
|      |<-----|           |<-----|                  |---->|     Worker 2     |
+------+      +-----------+      +------------------+     |       ...        |
                                          |               +------------------+
                                          v
                                 +------------------+
                                 |    KV Indexer    |
                                 +------------------+
```

**Best for:** High cache hit rates, shared context workloads
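
Below is a minimal sketch of serving with the KV-aware router in front of a set of workers. The `agg_router` graph and config names are assumptions for illustration only; see [docs/architecture/kv_cache_routing.md](docs/architecture/kv_cache_routing.md) for how routing is actually configured.

```bash
# Hypothetical example: front a set of workers with the KV-aware router.
# The graph and config names below are illustrative, not guaranteed to exist;
# the real pre-configured router graphs live under the example directories.
cd examples/llm
dynamo serve graphs.agg_router:Frontend -f configs/agg_router.yaml
```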

## Installation

### Using pip
Using `pip` is our recommended way to install Dynamo.

```bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
python3 -m venv venv
source venv/bin/activate

pip install "ai-dynamo[all]"
```
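
After installing, you can talk to a model locally with `dynamo run`, which supports several backends including `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`. For example, with a Hugging Face model:

```bash
# Download and serve a Hugging Face model locally with the vLLM backend
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```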

### Using conda
```bash
git clone https://github.com/ai-dynamo/dynamo.git
cd dynamo
conda activate <ENV_NAME>
pip install nixl # Or install https://github.com/ai-dynamo/nixl from source

# To install ai-dynamo-runtime from source
cargo build --release
cd lib/bindings/python
pip install .
cd ../../../
pip install ".[all]"

# To test
docker compose -f deploy/metrics/docker-compose.yml up -d
cd examples/sglang
./launch/agg.sh
```

## Local Development

> [!NOTE]
> If you use VS Code or Cursor, check out our [.devcontainer setup](.devcontainer/README.md). Otherwise, to develop locally, we recommend working inside the container.

```bash
# This builds the vllm container by default. You can change the framework by passing the --framework flag.
./container/build.sh
# This will mount your current working dynamo directory inside of the container
./container/run.sh -it --mount-workspace

# Setup dynamo
cargo build --release
mkdir -p /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/http /workspace/deploy/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/dynamo-run /workspace/deploy/sdk/src/dynamo/sdk/cli

uv pip install -e .
export PYTHONPATH=$PYTHONPATH:/workspace/deploy/sdk/src:/workspace/components/planner/src
```

