26 commits
86c79ba
some prelim cleanups
PeaBrane May 30, 2025
6bee243
router can route to dp ranks
PeaBrane May 30, 2025
dab052c
make the bunny hoppy
PeaBrane May 30, 2025
be6900e
Merge remote-tracking branch 'origin/main' into rupei/router-general
PeaBrane May 30, 2025
25e1291
Merge remote-tracking branch 'origin/main' into rupei/router-general
PeaBrane May 30, 2025
34e5c5b
new struct combining worker_id with dp_rank, dirty commit, breaks bin…
PeaBrane May 30, 2025
2cef74c
binding works
PeaBrane May 30, 2025
10d3326
dummy c binding note
PeaBrane May 30, 2025
4483c68
add_class WorkerWithDpRank
PeaBrane May 30, 2025
263c12d
renames + comments + fmt
PeaBrane May 31, 2025
65ea6b5
allow suffix for dp_rank identification
PeaBrane Jun 3, 2025
a2ef896
WIP: fix fn dp_rank, add TODO's
alec-flowers Jun 3, 2025
e80d66c
refactor: fix bugs, kv publishing working
alec-flowers Jun 3, 2025
7a733bd
fix panicing metric thread issue
alec-flowers Jun 4, 2025
1bddc8e
remove verbose log
alec-flowers Jun 4, 2025
ee283cc
update v1 worker
alec-flowers Jun 4, 2025
183a8fe
put dp_rank in PreprocessedRequest
PeaBrane Jun 4, 2025
be7f951
new agg config
PeaBrane Jun 4, 2025
e1011d8
updated comments
PeaBrane Jun 4, 2025
5bf4fae
update v1 example
alec-flowers Jun 4, 2025
d6ded6c
final touches for it working with dp
alec-flowers Jun 4, 2025
61b94ac
Merge branch 'main' into rupei/router-general
alec-flowers Jun 4, 2025
9335efe
fix cost function trace
PeaBrane Jun 4, 2025
931b837
fmt
PeaBrane Jun 4, 2025
2a72271
Merge branch 'main' into rupei/router-general
PeaBrane Jun 4, 2025
eb7bb10
WIP document current work steps
alec-flowers Jun 5, 2025
WIP document current work steps
alec-flowers committed Jun 5, 2025
commit eb7bb101f575e751d2f68af71775a3295040b433
91 changes: 29 additions & 62 deletions examples/vllm_v1/README.md
@@ -17,16 +17,15 @@ limitations under the License.

# vLLM Deployment Examples

This directory contains examples for deploying vLLM models in both aggregated and disaggregated configurations.
This directory contains examples for deploying vLLM models in an aggregated configuration with data parallelism (DP).

## Prerequisites

1. Install vLLM:
```bash
# Note: Currently requires installation from main branch
# From vLLM 0.8.6 onwards, you can install directly from wheel
git clone https://github.com/vllm-project/vllm.git
VLLM_USE_PRECOMPILED=1 uv pip install --editable ./vllm/
cd vllm && git checkout d459fae0a2c464e28680bc6d564c1de1b295029e
VLLM_USE_PRECOMPILED=1 uv pip install --editable .
```

2. Start required services:
@@ -36,78 +35,46 @@ docker compose -f deploy/metrics/docker-compose.yml up -d

## Running the Server

### Aggregated Deployment
### Aggregated Deployment with Multiple Disconnected DP Engines

Serves the leader AsyncLLM engine plus the number of DP ranks you specify:
```bash
cd examples/vllm_v1
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```

### Disaggregated Deployment
```bash
cd examples/vllm_v1
dynamo serve graphs.disagg:Frontend -f configs/disagg.yaml
To run the remaining DP ranks headless, on the same node or on other nodes, run:

```
VLLM_LOGGING_LEVEL=DEBUG CUDA_VISIBLE_DEVICES=1 VLLM_USE_V1=1 vllm serve Qwen/Qwen3-0.6B -dp 1 -dpr 1 --data-parallel-address 127.0.0.1 --data-parallel-rpc-port 62300 --data-parallel-size-local 1 --enforce-eager --headless --kv-events-config '{"enable_kv_cache_events": true, "publisher": "zmq"}' --enable-prefix-caching
```

Contributor comment: V1 and prefix caching are enabled by default
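The `-dp` and `-dpr` shorthands appear to correspond to vLLM's data-parallel size and rank flags (an assumption based on the long-form flags alongside them); the `--data-parallel-address` and `--data-parallel-rpc-port` values must match the leader's config in `configs/agg.yaml` for the headless rank to join the same DP group.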

## Testing the API
To test, run the curl request below. KV routing will keep sending an identical prompt to the same worker, so vary the prompt to see requests land on different DP workers.

Send a test request using curl:
```bash
curl localhost:8000/v1/completions \
```
curl localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"prompt": "In the heart of Eldoria...",
"stream": false,
"model": "Qwen/Qwen3-0.6B",
"messages": [
{
"role": "user",
"content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at ests that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost familt clue is hidden."
}
],
"stream":false,
"max_tokens": 30
}'
```
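For convenience, here is a minimal sketch of that "vary the prompt" step, assuming the frontend is reachable at `localhost:8000` and the `requests` package is installed (the prompts themselves are arbitrary):

```python
import requests  # assumes the requests package is available

# Send several distinct prompts so the KV router has no shared prefix to
# match; with identical prompts it would keep picking the same worker.
for i in range(4):
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "Qwen/Qwen3-0.6B",
            "messages": [
                {"role": "user", "content": f"Tell me fact number {i} about a different city."}
            ],
            "stream": False,
            "max_tokens": 30,
        },
        timeout=60,
    )
    print(i, resp.json()["choices"][0]["message"]["content"])
```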

For more detailed explanations, refer to the main [LLM examples README](../llm/README.md).



## DeepSeek R1

To run the DeepSeek R1 (DSR1) model, first follow the Ray setup from the [multinode documentation](../../docs/examples/multinode.md).

### Aggregated Deployment

```bash
cd examples/vllm_v1
dynamo serve graphs.agg:Frontend -f configs/deepseek_r1/agg.yaml
```


### Disaggregated Deployment
```

To create frontend with a single decode worker:
```bash
cd examples/vllm_v1
dynamo serve graphs.agg:Frontend -f configs/deepseek_r1/disagg.yaml
```

To create a single decode worker:
```bash
cd examples/vllm_v1
dynamo serve components.worker:VllmDecodeWorker -f configs/deepseek_r1/disagg.yaml
TODO:
- Currently, if you run more than one instance or worker on the same node, this will fail because the ZmqKvPublishers bind to overlapping ports; some port offsetting is needed to manage that (see the sketch below).
```
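A minimal sketch of the port-offsetting idea mentioned above; the base port and helper below are hypothetical illustrations, not the current dynamo API:

```python
# Hypothetical sketch: derive a per-rank ZMQ endpoint so that multiple
# KV-event publishers on one node do not collide on the same port.
BASE_KV_EVENT_PORT = 5557  # assumed base port, not an actual default


def kv_event_endpoint(dp_rank: int, base_port: int = BASE_KV_EVENT_PORT) -> str:
    """Offset the publisher port by data-parallel rank."""
    return f"tcp://127.0.0.1:{base_port + dp_rank}"


# Rank 0 publishes on 5557, rank 1 on 5558, and so on.
assert kv_event_endpoint(0) == "tcp://127.0.0.1:5557"
assert kv_event_endpoint(1) == "tcp://127.0.0.1:5558"
```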

To create a single prefill worker:
```bash
cd examples/vllm_v1
dynamo serve components.worker:VllmPrefillWorker -f configs/deepseek_r1/disagg.yaml
ServiceArgs:
workers: 1 # 2 workers not supported
```
- It would be best to distill the vLLM serve step into a VllmHeadlessWorker using run_headless(self.engine_args). This is relatively simple; the main difficulty is that if you want to add the ZmqKvEventPublisher to these nodes (which would be easier for multi-node, because then you only need to set up NATS and not worry about ports), they will have a different lease_id than the leader worker. This is a problem because we don't actually route requests to these dp_ranks directly, yet the KV Router and KV Indexer would see their KVEvents as coming from a separate "worker". We still need to route the KVEvents through the leader AsyncLLM engine, and that engine will take care of routing to the DP ranks.
- To address this, we could create a concept of worker groups, i.e. components whose lease_ids are tied to a single leader worker (see the sketch below).
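To make the worker-group idea concrete, here is a hypothetical sketch (none of these names exist in dynamo today) of a mapping layer that re-attributes events from follower lease_ids to the leader before they reach the indexer:

```python
from dataclasses import dataclass


# Hypothetical sketch of a "worker group": KV events published by headless
# DP ranks are re-attributed to the leader worker's lease_id so the KV
# router and indexer see one logical worker per group.
@dataclass(frozen=True)
class WorkerGroup:
    leader_lease_id: int
    member_lease_ids: frozenset

    def resolve(self, lease_id: int) -> int:
        """Map a member's lease_id onto the leader's."""
        if lease_id in self.member_lease_ids:
            return self.leader_lease_id
        return lease_id


group = WorkerGroup(leader_lease_id=111, member_lease_ids=frozenset({222, 333}))
assert group.resolve(222) == 111  # follower events credited to the leader
assert group.resolve(999) == 999  # non-members pass through unchanged
```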

## Testing

Send a test request using curl:
```bash
curl localhost:8000/v1/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-ai/DeepSeek-R1",
"prompt": "In the heart of Eldoria...",
"stream": false,
"max_tokens": 30
}'
```
For more detailed explanations, refer to the main [LLM examples README](../llm/README.md).
100 changes: 100 additions & 0 deletions examples/vllm_v1/components/headless_worker.py
@@ -0,0 +1,100 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# Work in progress: this is not currently usable.

import asyncio
import logging
import os
import signal
import socket
from typing import Optional

from utils.args import parse_vllm_args
from vllm import run_headless
from vllm.distributed.kv_events import KVEventsConfig

from dynamo.sdk import service

logger = logging.getLogger(__name__)

BLOCK_SIZE = 16


@service(
    dynamo={
        "enabled": True,
        "namespace": "dynamo",
    },
    resources={"gpu": 1, "cpu": "10", "memory": "20Gi"},
    workers=1,
)
class VllmHeadlessWorker:
    def __init__(self):
        class_name = self.__class__.__name__
        self.engine_args = parse_vllm_args(class_name, "")
        self.engine_args.kv_events_config = KVEventsConfig(
            enable_kv_cache_events=True, publisher="zmq"
        )
        if not self.engine_args.block_size:
            logger.info(f"block_size not set, default to {BLOCK_SIZE}")
            self.engine_args.block_size = BLOCK_SIZE

        os.environ["VLLM_NO_USAGE_STATS"] = "1"  # Avoid internal HTTP requests

        model_config = self.engine_args.create_model_config()
        self.default_sampling_params = model_config.get_diff_sampling_param()

        self.kv_publishers = []

        signal.signal(signal.SIGTERM, self.shutdown_vllm_engine)
        signal.signal(signal.SIGINT, self.shutdown_vllm_engine)

        self.set_side_channel_host_and_port()

    async def async_init(self):
        # NOTE: run_headless is a blocking call; as written it will stall the
        # event loop, so it likely needs its own thread or process.
        run_headless(self.engine_args)

    def shutdown_vllm_engine(self, signum, frame):
        """Shutdown the background loop"""
        logger.info(f"Received signal {signum}, shutting down")
        loop = asyncio.get_event_loop()
        try:
            # NOTE: engine_client is never assigned in this WIP class, so this
            # will currently raise AttributeError on shutdown.
            self.engine_client.shutdown()
            for publisher in self.kv_publishers:
                publisher.shutdown()
            logger.info("VllmWorker shutdown complete")
        except Exception as e:
            logger.error(f"Error during shutdown: {e}")
        finally:
            loop.stop()

    def set_side_channel_host_and_port(
        self, hostname: Optional[str] = None, port: Optional[int] = None
    ):
        """vLLM V1 NixlConnector creates a side channel to exchange metadata with other NIXL connectors.
        This sets the port number for the side channel.
        """
        if hostname is None:
            hostname = socket.gethostname()
        if port is None:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("", 0))  # Bind to a free port provided by the host.
                port = s.getsockname()[1]  # Get the port number assigned.
        logger.debug("Setting VLLM_NIXL_SIDE_CHANNEL_HOST to %s", hostname)
        os.environ["VLLM_NIXL_SIDE_CHANNEL_HOST"] = hostname
        logger.debug("Setting VLLM_NIXL_SIDE_CHANNEL_PORT to %s", port)
        os.environ["VLLM_NIXL_SIDE_CHANNEL_PORT"] = str(port)
19 changes: 10 additions & 9 deletions examples/vllm_v1/configs/agg.yaml
@@ -1,8 +1,4 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
@@ -14,7 +10,7 @@
# limitations under the License.
Common:
model: Qwen/Qwen3-0.6B
data-parallel-size: 2

block-size: 16
Contributor comment: why do we need to set that?
max-model-len: 16384
Contributor comment: why do we need to set that?

served_model_name: Qwen/Qwen3-0.6B
@@ -27,9 +23,14 @@ VllmDecodeWorker:
enforce-eager: true
Contributor comment: why do we need to set that?

Author reply: Ah sorry. I think we pushed the changes to our config file for our local dev. We were hijacking your simple load balancer to do kv routing 😆, but those changes were not pushed. For now, we are still working on cleaning the python bits up.

max-num-batched-tokens: 16384
Contributor comment: why do we need to set that?

enable-prefix-caching: true
Contributor comment: It is enabled by default in V1

data-parallel-address: 127.0.0.1
data-parallel-rpc-port: 62300
data-parallel-size: 2
data-parallel-size-local: 1
# api-server-count: 2

ServiceArgs:
workers: 1 # 2 workers
resources:
gpu: 2 # 2 dp ranks
common-configs: [model, served_model_name, block-size, data-parallel-size, max-model-len]

gpu: 1 # 2 dp ranks
common-configs: [model, served_model_name, block-size, max-model-len]
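Note the split this config implies: with data-parallel-size: 2 but data-parallel-size-local: 1 (and one GPU per worker), the leader hosts a single DP rank locally and expects the second rank to join headless over data-parallel-address/data-parallel-rpc-port, matching the vllm serve --headless command shown in the README above.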
6 changes: 3 additions & 3 deletions lib/llm/src/kv_router/indexer.rs
@@ -259,7 +259,7 @@ impl<T: WorkerGeneral> RadixTree<T> {
pub fn apply_event(&mut self, event: RouterEvent<T>) {
let (worker_id, event) = (event.worker, event.event);
let (id, op) = (event.event_id, event.data);
tracing::trace!(id, "Store operation: {:?}", op);
tracing::trace!(worker_id = ?worker_id, id=?id, "Store operation: {:?}", op);

let worker_lookup = self.lookup.entry(worker_id.clone()).or_default();

@@ -278,7 +278,7 @@ impl<T: WorkerGeneral> RadixTree<T> {
None => {
tracing::warn!(
worker_id = ?worker_id,
id,
id = ?id,
parent_hash = ?op.parent_hash,
"Failed to find parent block; skipping store operation"
);
@@ -332,7 +332,7 @@ impl<T: WorkerGeneral> RadixTree<T> {
None => {
tracing::warn!(
worker_id = ?worker_id,
id,
id = ?id,
"Failed to find block to remove; skipping remove operation"
);
continue;
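(The switch from id to id = ?id presumably reflects that, after the generalization over T, the event id field no longer implements tracing's Value directly, so Debug formatting is required.)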