
Conversation


@richardhuo-nv richardhuo-nv commented Jun 6, 2025

Overview:

Add an example showing how to turn on MTP with DeepSeek R1 in aggregated serving.

Details:

Add an example showing how to turn on MTP with DeepSeek R1 in aggregated serving. Serving remains stable even when concurrency reaches 256. There appears to be roughly a 10% TPS gain, but I learned that MTP needs to be benchmarked with specialized datasets. That's our next step.
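
For context, the switch that turns MTP on lives in the engine config shipped with this example, and the example is launched with `dynamo serve graphs.agg:Frontend -f configs/deepseek_r1/mtp/mtp_agg.yaml` (the exact command appears in the README diff reviewed below). Here is a minimal sketch of the MTP knob, assuming TensorRT LLM's speculative decoding options; the draft-layer count is illustrative, not the value used in this PR:

```yaml
# Minimal sketch of the MTP portion of a TensorRT LLM engine config.
# Field names follow TensorRT LLM's speculative decoding options;
# num_nextn_predict_layers is an assumed, illustrative value.
speculative_config:
  decoding_type: MTP
  num_nextn_predict_layers: 3
```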

We are still working with the TensorRT LLM team to get a stable main build that can serve the disaggregated cases.

Summary by CodeRabbit

  • New Features

    • Added a new example for aggregated serving with Multi-Token Prediction (MTP) and DeepSeek R1, including detailed setup instructions and notes in the documentation.
    • Introduced configuration files for running DeepSeek-R1-FP4 with MTP, supporting advanced parallelism and GPU settings.
  • Documentation

    • Updated the README with a new section on aggregated serving, usage instructions, and important operational notes.


copy-pr-bot bot commented Jun 6, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai bot commented Jun 6, 2025

Walkthrough

A new example for aggregated serving using Multi-Token Prediction (MTP) with the DeepSeek R1 model has been added. This includes documentation updates and two new YAML configuration files specifying the serving architecture and runtime parameters for deploying the DeepSeek-R1-FP4 model with TensorRT LLM and MTP decoding.

Changes

| File(s) | Change Summary |
| --- | --- |
| examples/tensorrt_llm/README.md | Added documentation for aggregated serving with MTP and DeepSeek R1, including usage instructions and notes. |
| examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg.yaml | Introduced new service configuration YAML for aggregated serving with DeepSeek-R1-FP4 and MTP. |
| examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg_llm_api_config.yaml | Added model and runtime parameter YAML for DeepSeek-R1-FP4 with TensorRT LLM and MTP decoding. |
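
To make the table concrete, below is a hypothetical sketch of what a service config like mtp_agg.yaml might contain. Only the `engine_args` path and the round-robin routing are confirmed by the review comments further down; the `Frontend` keys and values are illustrative assumptions:

```yaml
# Hypothetical sketch of configs/deepseek_r1/mtp/mtp_agg.yaml.
# Only engine_args and round-robin routing are confirmed by the review below;
# the Frontend keys and values are illustrative assumptions.
Frontend:
  served_model_name: nvidia/DeepSeek-R1-FP4    # assumed model identifier
  endpoint: dynamo.TensorRTLLMWorker.generate  # assumed endpoint name
  port: 8000                                   # assumed port
  router: round-robin                          # matches the sequence diagram below

TensorRTLLMWorker:
  # Path taken verbatim from the review comment on this file
  engine_args: "configs/deepseek_r1/mtp/mtp_agg_llm_api_config.yaml"
```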

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant TensorRTLLMWorker

    User->>Frontend: Send inference request
    Frontend->>TensorRTLLMWorker: Route request (round robin)
    TensorRTLLMWorker->>TensorRTLLMWorker: Run DeepSeek-R1-FP4 with MTP decoding
    TensorRTLLMWorker->>Frontend: Return prediction
    Frontend->>User: Return response
```

Poem

In the meadow of models, a new path we pave,
With DeepSeek and MTP, our tokens behave.
YAMLs now guide us, configs in a row,
Aggregated serving, let the predictions flow!
🐇✨
The rabbits rejoice—another leap in AI's show!


@richardhuo-nv richardhuo-nv force-pushed the rihuo/add_agg_mtp_example branch from 7ceafe7 to 7f41cdb on June 6, 2025 at 23:33

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
examples/tensorrt_llm/README.md (2)

121-125: Ensure consistency in formatting and path references

  • Add a space before the parentheses in the heading: Multi-Token Prediction (MTP)
  • Prefix the config path with ./ to match other examples
-#### Aggregated serving with Multi-Token Prediction(MTP) and DeepSeek R1
+#### Aggregated serving with Multi-Token Prediction (MTP) and DeepSeek R1

-```bash
-dynamo serve graphs.agg:Frontend -f configs/deepseek_r1/mtp/mtp_agg.yaml
-```
+```bash
+dynamo serve graphs.agg:Frontend -f ./configs/deepseek_r1/mtp/mtp_agg.yaml
+```

126-131: Enhance notes with performance insights and formatting consistency

  • Add a bullet on observed stability and throughput gains:
    - Aggregated MTP serving remains stable up to 256 concurrency and yields ~10% TPS improvement.
  • Wrap the `cuda_graph_padding_enabled: false` setting in backticks for clarity.
examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg.yaml (1)

22-30: Use consistent path notation for engine_args
Other examples use ./configs/... relative paths. Consider updating to:

-TensorRTLLMWorker:
-  engine_args: "configs/deepseek_r1/mtp/mtp_agg_llm_api_config.yaml"
+TensorRTLLMWorker:
+  engine_args: "./configs/deepseek_r1/mtp/mtp_agg_llm_api_config.yaml"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2019a7d and 4f115f4.

📒 Files selected for processing (3)
  • examples/tensorrt_llm/README.md (1 hunks)
  • examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg.yaml (1 hunks)
  • examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg_llm_api_config.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: Build and Test - vllm
🔇 Additional comments (5)
examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg.yaml (2)

1-15: License header is in order
The Apache-2.0 license block is correctly applied.


16-21: Frontend configuration looks good
The served model name, endpoint, port, and router settings align with other examples.

examples/tensorrt_llm/configs/deepseek_r1/mtp/mtp_agg_llm_api_config.yaml (3)

1-15: License header is correct
The Apache-2.0 block and SPDX tags are properly included.


16-32: Engine hyperparameters are appropriately set
Model, parallelism, batch size, and KV cache configurations align with FP4 and MTP requirements.


33-53: MTP decoding and CUDA graph settings are properly configured
Speculative decoding is enabled, and the workaround for the known cuda_graph_padding_enabled bug is documented.
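
Assembling the reviewed pieces, the engine config might look roughly like the sketch below. Only `cuda_graph_padding_enabled: false` and MTP speculative decoding are confirmed by the comments above; every other key and numeric value is an illustrative assumption, not the content merged in this PR:

```yaml
# Hypothetical sketch of mtp_agg_llm_api_config.yaml based on the review notes.
# Only cuda_graph_padding_enabled and MTP speculative decoding are confirmed;
# all other keys and numeric values are illustrative assumptions.
model: nvidia/DeepSeek-R1-FP4      # assumed model identifier
tensor_parallel_size: 8            # assumed parallelism for an 8-GPU node
max_batch_size: 256                # assumed; serving stayed stable at 256 concurrency
kv_cache_config:
  free_gpu_memory_fraction: 0.85   # assumed KV cache sizing
speculative_config:                # enables MTP (see sketch in the PR description)
  decoding_type: MTP
  num_nextn_predict_layers: 3      # assumed draft-layer count
cuda_graph_padding_enabled: false  # documented workaround for the known padding bug
```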

@richardhuo-nv richardhuo-nv merged commit 4de7f44 into main Jun 7, 2025
8 checks passed
@richardhuo-nv richardhuo-nv deleted the rihuo/add_agg_mtp_example branch June 7, 2025 00:34
