
Conversation


@QiJune QiJune commented Aug 11, 2025

Summary by CodeRabbit

  • New Features

    • Added CUDA graph–based execution with automatic batch-size padding to boost throughput and reduce latency, with safe fallback to standard execution when ineligible.
    • Improved warmup flow and resource cleanup for more consistent performance.
  • Refactor

    • Centralized CUDA graph handling into a dedicated engine, simplifying execution paths and improving reliability during inference.
  • Chores

    • Updated a third-party dependency.

QiJune added 5 commits July 30, 2025 17:32
Signed-off-by: junq <[email protected]>
Signed-off-by: junq <[email protected]>
Signed-off-by: junq <[email protected]>
Signed-off-by: junq <[email protected]>
@QiJune QiJune requested a review from a team as a code owner August 11, 2025 00:52
@QiJune QiJune requested a review from achartier August 11, 2025 00:52

coderabbitai bot commented Aug 11, 2025

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Introduces a new CUDA graph execution engine for PyTorch LLMs, integrates it into the existing model engine, updates warmup/forward/cleanup paths to delegate CUDA-graph handling to the engine, removes prior bespoke graph logic, and updates a third-party submodule pointer.

Changes

  • Submodule update — 3rdparty/xgrammar
    Updated the submodule reference to a new commit; no in-repo code changes.
  • New CUDA graph engine — tensorrt_llm/_torch/pyexecutor/cuda_graph_model_engine.py
    Added CUDAGraphModelEngine with per-batch-size capture/replay, a padding helper, resource cleanup, global capture toggles, and constants. Exposes execute, pad_batch (context manager), clear, and capture-state APIs.
  • Model engine integration — tensorrt_llm/_torch/pyexecutor/model_engine.py
    Replaced the old CUDA-graph runner with CUDAGraphModelEngine; updated warmup, the forward path, padding, and cleanup to delegate to the new engine; removed the prior CUDA-graph helper methods.
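The capture/replay dispatch and batch padding described above can be sketched in pure Python (no GPU required). This is an illustrative stand-in only: the real engine records and replays torch.cuda.CUDAGraph objects, and the names here (supported_batch_sizes, _graphs, the None-means-ineligible convention) are assumptions for the sketch, not the actual API.

```python
from contextlib import contextmanager

class CudaGraphEngineSketch:
    """Control-flow sketch only; real code captures/replays torch.cuda.CUDAGraph."""

    def __init__(self, supported_batch_sizes=(1, 2, 4, 8)):
        self.supported_batch_sizes = sorted(supported_batch_sizes)
        self._graphs = {}  # batch_size -> "captured" callable (stand-in for a graph)

    @contextmanager
    def pad_batch(self, batch):
        """Pad the batch up to the next supported size; drop padding on exit."""
        original = len(batch)
        target = next((s for s in self.supported_batch_sizes if s >= original), None)
        if target is None:  # too large for any graph: caller will run eagerly
            yield batch
            return
        padded = batch + [None] * (target - original)
        try:
            yield padded
        finally:
            del padded[original:]  # restore the original length

    def execute(self, batch, forward_fn):
        """Replay the graph for this batch size, capturing it on first use.
        Returns None when the batch is ineligible, signalling an eager fallback."""
        size = len(batch)
        if size not in self.supported_batch_sizes:
            return None
        if size not in self._graphs:
            # First run at this size: "capture" (real code records a CUDA graph here).
            self._graphs[size] = forward_fn
        return self._graphs[size](batch)

    def clear(self):
        """Release all captured graphs (resource cleanup)."""
        self._graphs.clear()
```

A caller pads inside the context manager, then calls execute with its forward function; a None result means the batch was ineligible and the caller should fall back to the standard execution path.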

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant PyTorchModelEngine as ModelEngine
  participant CUDAGraphModelEngine as CudaGraphEngine
  participant CUDA as CUDA Graph

  Client->>ModelEngine: forward(batch, inputs)
  ModelEngine->>CudaGraphEngine: pad_batch(scheduled_requests)
  CudaGraphEngine-->>ModelEngine: padded batch (context)
  ModelEngine->>CudaGraphEngine: execute(batch, inputs, forward_fn)
  alt first run or missing graph
    CudaGraphEngine->>CUDA: capture(warmup + forward)
    CUDA-->>CudaGraphEngine: graph handle + outputs ref
  else replay
    CudaGraphEngine->>CUDA: replay()
    CUDA-->>CudaGraphEngine: outputs
  end
  CudaGraphEngine-->>ModelEngine: graph_output or None
  alt got graph_output
    ModelEngine-->>Client: graph_output
  else fallback
    ModelEngine->>ModelEngine: eager forward_fn()
    ModelEngine-->>Client: eager output
  end
sequenceDiagram
  participant Control as Control Flow
  participant ModelEngine
  participant CudaGraphEngine

  Control->>ModelEngine: warmup()
  ModelEngine->>CudaGraphEngine: execute(warmup batch, inputs, forward_fn)
  CudaGraphEngine->>CudaGraphEngine: capture graph for batch size
  CudaGraphEngine-->>ModelEngine: output (ignored/validated)
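Both diagrams follow the same "try graph, else eager" pattern on the model-engine side. A minimal self-contained sketch of that dispatch, with GraphEngineStub standing in for the real CUDAGraphModelEngine (all names hypothetical):

```python
from contextlib import nullcontext

class GraphEngineStub:
    """Stand-in for CUDAGraphModelEngine; here every batch is ineligible."""

    def pad_batch(self, batch):
        return nullcontext(batch)  # real engine pads to a supported batch size

    def execute(self, batch, forward_fn):
        return None  # real engine captures/replays a graph when eligible

def forward(graph_engine, batch, forward_fn):
    # Try CUDA-graph execution first; fall back to eager when ineligible.
    with graph_engine.pad_batch(batch) as padded:
        graph_output = graph_engine.execute(padded, forward_fn)
    if graph_output is not None:
        return graph_output      # replayed (or freshly captured) graph output
    return forward_fn(batch)     # safe fallback to standard execution
```

Because ineligibility is signalled by a None return rather than an exception, the fallback stays on the hot path with no extra control flow.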

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~35 minutes

Suggested reviewers

  • pcastonguay
  • HuiGao-NV
  • nv-guomingz
  • mikeiovine
  • leslie-fang25



📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60073a7 and 3841576.

📒 Files selected for processing (3)
  • 3rdparty/xgrammar (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/cuda_graph_model_engine.py (1 hunks)
  • tensorrt_llm/_torch/pyexecutor/model_engine.py (5 hunks)

@QiJune QiJune closed this Aug 11, 2025