diff --git a/examples/llm/README.md b/examples/llm/README.md
index d17aa94e48..af3415ad44 100644
--- a/examples/llm/README.md
+++ b/examples/llm/README.md
@@ -225,7 +225,8 @@ dynamo deployment create $DYNAMO_TAG -n $DEPLOYMENT_NAME -f ./configs/agg.yaml
 
 ### Testing the Deployment
 
-Once the deployment is complete, you can test it using:
+Once the deployment is complete, you can test it. If you have ingress available for your deployment, you can directly call the URL returned
+by `dynamo deployment get ${DEPLOYMENT_NAME}` and skip the steps below to find and port-forward the frontend pod.
 
 ```bash
 # Find your frontend pod
diff --git a/examples/llm/configs/disagg.yaml b/examples/llm/configs/disagg.yaml
index 77f405a9f9..e746143316 100644
--- a/examples/llm/configs/disagg.yaml
+++ b/examples/llm/configs/disagg.yaml
@@ -26,6 +26,7 @@ Frontend:
 Processor:
   router: round-robin
   common-configs: [model, block-size]
+  prompt-template: "USER: <image>\n<prompt> ASSISTANT:"
 
 VllmWorker:
   remote-prefill: true
diff --git a/examples/multimodal/README.md b/examples/multimodal/README.md
index be2ce56f97..76f5e843ce 100644
--- a/examples/multimodal/README.md
+++ b/examples/multimodal/README.md
@@ -28,16 +28,16 @@ The examples are based on the [llava-1.5-7b-hf](https://huggingface.co/llava-hf/
 - processor: Tokenizes the prompt and passes it to the decode worker.
 - frontend: HTTP endpoint to handle incoming requests.
 
-### Deployment
+### Graph
 
-In this deployment, we have two workers, [encode_worker](components/encode_worker.py) and [decode_worker](components/decode_worker.py).
+In this graph, we have two workers, [encode_worker](components/encode_worker.py) and [decode_worker](components/decode_worker.py).
 The encode worker is responsible for encoding the image and passing the embeddings to the decode worker via a combination of NATS and RDMA.
 The work complete event is sent via NATS, while the embeddings tensor is transferred via RDMA through the NIXL interface.
 Its decode worker then prefills and decodes the prompt, just like the [LLM aggregated serving](../llm/README.md) example.
 By separating the encode from the prefill and decode stages, we can have a more flexible deployment and scale the encode worker independently from the prefill and decode workers if needed.
 
-This figure shows the flow of the deployment:
+This figure shows the flow of the graph:
 ```mermaid
 flowchart LR
   HTTP --> processor
@@ -89,7 +89,7 @@ You should see a response similar to this:
 {"id": "c37b946e-9e58-4d54-88c8-2dbd92c47b0c", "object": "chat.completion", "created": 1747725277, "model": "llava-hf/llava-1.5-7b-hf", "choices": [{"index": 0, "message": {"role": "assistant", "content": " In the image, there is a city bus parked on a street, with a street sign nearby on the right side. The bus appears to be stopped out of service. The setting is in a foggy city, giving it a slightly moody atmosphere."}, "finish_reason": "stop"}]}
 ```
 
-## Multimodal Disaggregated serving
+## Multimodal Disaggregated Serving
 
 ### Components
 
@@ -97,16 +97,16 @@ You should see a response similar to this:
 - processor: Tokenizes the prompt and passes it to the decode worker.
 - frontend: HTTP endpoint to handle incoming requests.
 
-### Deployment
+### Graph
 
-In this deployment, we have three workers, [encode_worker](components/encode_worker.py), [decode_worker](components/decode_worker.py), and [prefill_worker](components/prefill_worker.py).
+In this graph, we have three workers, [encode_worker](components/encode_worker.py), [decode_worker](components/decode_worker.py), and [prefill_worker](components/prefill_worker.py).
 For the Llava model, embeddings are only required during the prefill stage. As such, the encode worker is connected directly to the prefill worker.
 The encode worker is responsible for encoding the image and passing the embeddings to the prefill worker via a combination of NATS and RDMA.
 Its work complete event is sent via NATS, while the embeddings tensor is transferred via RDMA through the NIXL interface.
 The prefill worker performs the prefilling step and forwards the KV cache to the decode worker for decoding.
 For more details on the roles of the prefill and decode workers, refer to the [LLM disaggregated serving](../llm/README.md) example.
 
-This figure shows the flow of the deployment:
+This figure shows the flow of the graph:
 ```mermaid
 flowchart LR
   HTTP --> processor
@@ -158,3 +158,82 @@ You should see a response similar to this:
 ```json
 {"id": "c1774d61-3299-4aa3-bea1-a0af6c055ba8", "object": "chat.completion", "created": 1747725645, "model": "llava-hf/llava-1.5-7b-hf", "choices": [{"index": 0, "message": {"role": "assistant", "content": " This image shows a passenger bus traveling down the road near power lines and trees. The bus displays a sign that says \"OUT OF SERVICE\" on its front."}, "finish_reason": "stop"}]}
 ```
+
+## Deployment with Dynamo Operator
+
+These multimodal examples can be deployed to a Kubernetes cluster using [Dynamo Cloud](../../docs/guides/dynamo_deploy/dynamo_cloud.md) and the Dynamo CLI.
+
+### Prerequisites
+
+You must first follow the instructions in [deploy/cloud/helm/README.md](../../deploy/cloud/helm/README.md) to install Dynamo Cloud on your Kubernetes cluster.
+
+**Note**: The `KUBE_NS` variable in the following steps must match the Kubernetes namespace where you installed Dynamo Cloud. You must also expose the `dynamo-store` service externally. This will be the endpoint the CLI uses to interface with Dynamo Cloud.
+
+### Deployment Steps
+
+For detailed deployment instructions, please refer to the [Operator Deployment Guide](../../docs/guides/dynamo_deploy/operator_deployment.md). The following are the specific commands for the multimodal examples:
+
+```bash
+# Set your project root directory
+export PROJECT_ROOT=$(pwd)
+
+# Configure environment variables (see operator_deployment.md for details)
+export KUBE_NS=dynamo-cloud
+export DYNAMO_CLOUD=http://localhost:8080 # If using port-forward
+# OR
+# export DYNAMO_CLOUD=https://dynamo-cloud.nvidia.com # If using Ingress/VirtualService
+
+# Build the Dynamo base image (see operator_deployment.md for details)
+export DYNAMO_IMAGE=<your-registry>/<your-image-name>:<your-image-tag>
+
+# Build the service
+cd $PROJECT_ROOT/examples/multimodal
+DYNAMO_TAG=$(dynamo build graphs.agg:Frontend | grep "Successfully built" | awk '{ print $NF }' | sed 's/\.$//')
+# For disaggregated serving:
+# DYNAMO_TAG=$(dynamo build graphs.disagg:Frontend | grep "Successfully built" | awk '{ print $NF }' | sed 's/\.$//')
+
+# Deploy to Kubernetes
+export DEPLOYMENT_NAME=multimodal-agg
+# For aggregated serving:
+dynamo deploy $DYNAMO_TAG -n $DEPLOYMENT_NAME -f ./configs/agg.yaml
+# For disaggregated serving:
+# export DEPLOYMENT_NAME=multimodal-disagg
+# dynamo deploy $DYNAMO_TAG -n $DEPLOYMENT_NAME -f ./configs/disagg.yaml
+```
+
+**Note**: To avoid rate limiting from unauthenticated requests to HuggingFace (HF), you can provide your `HF_TOKEN` as a secret in your deployment.
+See the [operator deployment guide](../../docs/guides/dynamo_deploy/operator_deployment.md#referencing-secrets-in-your-deployment) for instructions on referencing secrets like `HF_TOKEN` in your deployment configuration.
+
+**Note**: Optionally add `--Planner.no-operation=false` at the end of the deployment command to enable the planner component to take scaling actions on your deployment.
+
+### Testing the Deployment
+
+Once the deployment is complete, you can test it. If you have ingress available for your deployment, you can directly call the URL returned
+by `dynamo deployment get ${DEPLOYMENT_NAME}` and skip the steps below to find and port-forward the frontend pod.
+
+```bash
+# Find your frontend pod
+export FRONTEND_POD=$(kubectl get pods -n ${KUBE_NS} | grep "${DEPLOYMENT_NAME}-frontend" | sort -k1 | tail -n1 | awk '{print $1}')
+
+# Forward the pod's port to localhost
+kubectl port-forward pod/$FRONTEND_POD 8000:8000 -n ${KUBE_NS}
+
+# Test the API endpoint
+curl localhost:8000/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "llava-hf/llava-1.5-7b-hf",
+    "messages": [
+      {
+        "role": "user",
+        "content": [
+          { "type": "text", "text": "What is in this image?" },
+          { "type": "image_url", "image_url": { "url": "http://images.cocodataset.org/test2017/000000155781.jpg" } }
+        ]
+      }
+    ],
+    "max_tokens": 300,
+    "stream": false
+  }'
+```
+
+For more details on managing deployments, testing, and troubleshooting, please refer to the [Operator Deployment Guide](../../docs/guides/dynamo_deploy/operator_deployment.md).
diff --git a/examples/multimodal/components/decode_worker.py b/examples/multimodal/components/decode_worker.py
index 1f97e0c7b2..59ac84a162 100644
--- a/examples/multimodal/components/decode_worker.py
+++ b/examples/multimodal/components/decode_worker.py
@@ -135,8 +135,10 @@ async def async_init(self):
             self.disaggregated_router = None
 
         model = LlavaForConditionalGeneration.from_pretrained(
-            self.engine_args.model
-        )
+            self.engine_args.model,
+            device_map="auto",
+            torch_dtype=torch.bfloat16,
+        ).eval()
         vision_tower = model.vision_tower
         self.embedding_size = (
             vision_tower.vision_model.embeddings.position_embedding.num_embeddings
diff --git a/examples/multimodal/components/prefill_worker.py b/examples/multimodal/components/prefill_worker.py
index b0f5b45f66..f1f34c9499 100644
--- a/examples/multimodal/components/prefill_worker.py
+++ b/examples/multimodal/components/prefill_worker.py
@@ -246,8 +246,8 @@ async def generate(self, request: RemotePrefillRequest):
                 self._loaded_metadata.add(engine_id)
 
         # To make sure the decode worker can pre-allocate the memory with the correct size for the prefill worker to transfer the kv cache,
-        # some placeholder dummy tokens were inserted based on the embedding size in the worker.py.
-        # The structure of the prompt is "\nUSER: \n\nASSISTANT:", need to remove the dummy tokens after the image token.
+        # some placeholder dummy tokens are inserted based on the embedding size in worker.py.
+        # TODO: make this more flexible/model-dependent
         IMAGE_TOKEN_ID = 32000
         embedding_size = embeddings.shape[1]
         padding_size = embedding_size - 1
diff --git a/examples/multimodal/components/processor.py b/examples/multimodal/components/processor.py
index 89ffb786a1..b1628a63a4 100644
--- a/examples/multimodal/components/processor.py
+++ b/examples/multimodal/components/processor.py
@@ -188,11 +188,12 @@ async def _generate_responses(
     # The generate endpoint will be used by the frontend to handle incoming requests.
     @endpoint()
     async def generate(self, raw_request: MultiModalRequest):
+        prompt = str(self.engine_args.prompt_template).replace(
+            "<prompt>", raw_request.messages[0].content[0].text
+        )
         msg = {
             "role": "user",
-            "content": "USER: <image>\nQuestion:"
-            + raw_request.messages[0].content[0].text
-            + " Answer:",
+            "content": prompt,
         }
 
         chat_request = ChatCompletionRequest(
diff --git a/examples/multimodal/configs/agg.yaml b/examples/multimodal/configs/agg.yaml
index b1b2620056..344a6e46c1 100644
--- a/examples/multimodal/configs/agg.yaml
+++ b/examples/multimodal/configs/agg.yaml
@@ -19,6 +19,7 @@ Common:
 
 Processor:
   router: round-robin
+  prompt-template: "USER: <image>\n<prompt> ASSISTANT:"
   common-configs: [model, block-size, max-model-len]
 
 VllmDecodeWorker:
@@ -30,7 +31,7 @@ VllmDecodeWorker:
   ServiceArgs:
     workers: 1
     resources:
-      gpu: 1
+      gpu: '1'
   common-configs: [model, block-size, max-model-len]
 
 VllmEncodeWorker:
@@ -39,5 +40,5 @@ VllmEncodeWorker:
   ServiceArgs:
     workers: 1
     resources:
-      gpu: 1
+      gpu: '1'
   common-configs: [model]
diff --git a/examples/multimodal/configs/disagg.yaml b/examples/multimodal/configs/disagg.yaml
index e6dcdb11b6..6c6fbbb200 100644
--- a/examples/multimodal/configs/disagg.yaml
+++ b/examples/multimodal/configs/disagg.yaml
@@ -20,6 +20,7 @@ Common:
 
 Processor:
   router: round-robin
+  prompt-template: "USER: <image>\n<prompt> ASSISTANT:"
   common-configs: [model, block-size]
 
 VllmDecodeWorker:
@@ -30,7 +31,7 @@ VllmDecodeWorker:
   ServiceArgs:
     workers: 1
     resources:
-      gpu: 1
+      gpu: '1'
   common-configs: [model, block-size, max-model-len, kv-transfer-config]
 
 VllmPrefillWorker:
@@ -38,7 +39,7 @@ VllmPrefillWorker:
   ServiceArgs:
     workers: 1
     resources:
-      gpu: 1
+      gpu: '1'
   common-configs: [model, block-size, max-model-len, kv-transfer-config]
 
 VllmEncodeWorker:
@@ -47,5 +48,5 @@ VllmEncodeWorker:
   ServiceArgs:
     workers: 1
     resources:
-      gpu: 1
+      gpu: '1'
   common-configs: [model]
diff --git a/examples/multimodal/utils/vllm.py b/examples/multimodal/utils/vllm.py
index bbb489757f..7b6b1d888c 100644
--- a/examples/multimodal/utils/vllm.py
+++ b/examples/multimodal/utils/vllm.py
@@ -51,6 +51,12 @@ def parse_vllm_args(service_name, prefix) -> AsyncEngineArgs:
         default=3,
         help="Maximum queue size for remote prefill. If the prefill queue size is greater than this value, prefill phase of the incoming request will be executed locally.",
     )
+    parser.add_argument(
+        "--prompt-template",
+        type=str,
+        default="",
+        help="Prompt template to use for the model",
+    )
     parser = AsyncEngineArgs.add_cli_args(parser)
     args = parser.parse_args(vllm_args)
     engine_args = AsyncEngineArgs.from_cli_args(args)
@@ -59,4 +65,5 @@ def parse_vllm_args(service_name, prefix) -> AsyncEngineArgs:
     engine_args.conditional_disagg = args.conditional_disagg
     engine_args.max_local_prefill_length = args.max_local_prefill_length
     engine_args.max_prefill_queue_size = args.max_prefill_queue_size
+    engine_args.prompt_template = args.prompt_template
    return engine_args
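
As a minimal illustration of how the new `prompt-template` plumbing fits together, the sketch below approximates the substitution step: `--prompt-template` is parsed in `utils/vllm.py`, carried on `engine_args.prompt_template`, and `Processor.generate` substitutes the user's text into the template. It assumes the `<image>`/`<prompt>` placeholder names shown in the config templates above and is a standalone approximation, not the component code from this patch.

```python
# Standalone sketch of the prompt-template substitution added in this patch.
# Assumption: <prompt> marks where the user's text goes, while <image> is left
# literal for the multimodal pipeline to expand into image embeddings.

DEFAULT_TEMPLATE = "USER: <image>\n<prompt> ASSISTANT:"


def build_prompt(template: str, user_text: str) -> str:
    """Approximates what Processor.generate does with engine_args.prompt_template."""
    return str(template).replace("<prompt>", user_text)


if __name__ == "__main__":
    # e.g. the request body sent in the curl example above
    print(build_prompt(DEFAULT_TEMPLATE, "What is in this image?"))
    # USER: <image>
    # What is in this image? ASSISTANT:
```

Making the template a config value, instead of the previously hard-coded `"USER: <image>\nQuestion: ... Answer:"` string in the processor, lets each deployment adapt the prompt format to its model without touching component code.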