
Conversation


@sorenwu sorenwu commented Aug 12, 2025

Summary by CodeRabbit

  • New Features

    • Optional chat-template preprocessing via CLI flag in the advanced quickstart.
    • Support for dynamic RoPE scaling with alpha; expanded RoPE configuration.
    • QK normalization options (none/pre/post RoPE) in attention.
    • New models available: HunYuanMoEV1ForCausalLM and Gemma3Model.
    • Tokenizer helper to retrieve chat templates.
  • Bug Fixes

    • More robust gating for Flash-MLA enablement to avoid None-related errors.
    • Safer MLA detection without assertions on missing fields.
    • Tokenizer loading now logs failures and degrades gracefully.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
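
For example, the following comment would rerun a single stage with fail-fast disabled (the stage name is illustrative, taken from the examples above):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast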

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of care and validation can break the top of tree.


coderabbitai bot commented Aug 12, 2025

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Adds a CLI flag to optionally template prompts as chat, introduces QK normalization controls in attention, extends RoPE with alpha and dynamic scaling, tightens MLA gating conditions, exposes a new HunYuan MoE causal LM model, updates model exports, and adds tokenizer chat-template access with improved error handling.

Changes

CLI prompt templating (examples/llm-api/quickstart_advanced.py)
Adds an --apply_chat_template flag; when set, prompts are converted to chat format with tokenizer.apply_chat_template before generation. Default behavior is unchanged.
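
A minimal sketch of how the flag could be wired up, assuming an argparse-based script and Hugging Face's standard apply_chat_template API; the flag name comes from the PR, while the parser setup, model path, and prompt list are illustrative:

```python
import argparse

from transformers import AutoTokenizer

parser = argparse.ArgumentParser()
parser.add_argument("--apply_chat_template", action="store_true",
                    help="Wrap each prompt in the tokenizer's chat template.")
args = parser.parse_args()

prompts = ["What is the capital of France?"]  # illustrative prompt list
if args.apply_chat_template:
    tokenizer = AutoTokenizer.from_pretrained("path/to/model")  # hypothetical path
    prompts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": p}],
            tokenize=False,              # return a templated string, not token ids
            add_generation_prompt=True,  # append the assistant-turn marker
        )
        for p in prompts
    ]
# prompts are then passed to llm.generate(...) exactly as before
```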
Tokenizer chat template API (tensorrt_llm/llmapi/tokenizer.py)
Adds a TransformersTokenizer.get_chat_template wrapper; improves HF tokenizer loading by logging exceptions and returning None on failure.
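
A sketch of the two tokenizer changes. The class and method names come from the summary above; the method bodies and the loader helper's name are assumptions, not the actual implementation:

```python
import logging

from transformers import AutoTokenizer

logger = logging.getLogger(__name__)


class TransformersTokenizer:
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def get_chat_template(self):
        # Thin accessor over the underlying HF tokenizer's chat template;
        # returns None when the model ships without one.
        return getattr(self.tokenizer, "chat_template", None)


def load_hf_tokenizer(model_dir, **kwargs):
    # Hypothetical loader: log the failure and degrade gracefully
    # instead of propagating the exception.
    try:
        return AutoTokenizer.from_pretrained(model_dir, **kwargs)
    except Exception as e:
        logger.warning(f"Failed to load tokenizer from {model_dir}: {e}")
        return None
```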
Attention and RoPE updates (tensorrt_llm/_torch/attention_backend/interface.py, tensorrt_llm/_torch/modules/attention.py, tensorrt_llm/functional.py)
Adds RopeParams.alpha and propagates it; introduces a QkNormType enum and qk_norm_type handling (pre/post-RoPE) with an apply_qk_norm hook; adds dynamic RoPE scaling, which requires alpha, in create_sinusoidal_positions_for_attention_plugin.
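
A sketch of the two mechanisms in this cohort. The enum values mirror the summary (none / pre-RoPE / post-RoPE); the base-adjustment formula is the common dynamic-NTK alpha scaling and is an assumption about the exact implementation:

```python
from enum import Enum


class QkNormType(Enum):
    none = 0       # no q/k normalization
    pre_rope = 1   # normalize q and k before applying rotary embeddings
    post_rope = 2  # normalize q and k after applying rotary embeddings


def dynamic_ntk_base(base: float, dim: int, alpha: float) -> float:
    # Dynamic NTK-style scaling: inflate the rotary base by alpha so the
    # usable context window grows without retraining. This is why dynamic
    # scaling requires alpha to be set.
    return base * alpha ** (dim / (dim - 2))
```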
MLA gating robustness (tensorrt_llm/_torch/model_config.py, tensorrt_llm/_torch/pyexecutor/config_utils.py)
Tightens the enable_flash_mla/is_mla checks to require non-None kv_lora_rank and qk_rope_head_dim; removes the assertion path.
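
A sketch of the tightened gating; the attribute names come from the summary, while the surrounding config object is illustrative:

```python
def is_mla(config) -> bool:
    # Require both MLA fields to be present and non-None instead of
    # asserting on them, so non-MLA configs fall through cleanly.
    return (getattr(config, "kv_lora_rank", None) is not None
            and getattr(config, "qk_rope_head_dim", None) is not None)


# Flash-MLA can then be gated on is_mla(config) plus any backend-specific
# conditions, avoiding the previous None-related errors.
```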
Public model exports (tensorrt_llm/_torch/models/__init__.py)
Exposes HunYuanMoEV1ForCausalLM and Gemma3Model in __all__; imports HunYuanMoEV1ForCausalLM.
New HunYuan MoE model (tensorrt_llm/_torch/models/modeling_hunyuan_moe.py)
Adds HunYuanMoE, HunYuanAttention, HunYuanDecoderLayer, HunYuanModel, and HunYuanMoEV1ForCausalLM with weight-loading logic and MoE/attention integrations; registers the auto model.
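
A rough structural skeleton of the new model file, matching the class names in the summary; constructor arguments and internals are deliberately omitted, as they are not described here:

```python
import torch.nn as nn


class HunYuanMoE(nn.Module):
    """Mixture-of-experts feed-forward block."""


class HunYuanAttention(nn.Module):
    """Attention with RoPE and optional QK normalization (see the cohort above)."""


class HunYuanDecoderLayer(nn.Module):
    """One transformer block: HunYuanAttention followed by HunYuanMoE (or a dense MLP)."""


class HunYuanModel(nn.Module):
    """Embeddings, a stack of HunYuanDecoderLayer, and a final norm."""


class HunYuanMoEV1ForCausalLM(nn.Module):
    """HunYuanModel plus LM head and checkpoint weight-loading logic."""
```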

Sequence Diagram(s)

sequenceDiagram
  participant User
  participant CLI as quickstart_advanced.py
  participant LLM
  participant Tok as Tokenizer
  User->>CLI: Run with/without --apply_chat_template
  CLI->>CLI: Build prompts list
  alt apply_chat_template=True
    CLI->>Tok: apply_chat_template(prompts)
    Tok-->>CLI: templated_prompts
    CLI->>LLM: generate(templated_prompts)
  else
    CLI->>LLM: generate(prompts)
  end
  LLM-->>User: outputs
sequenceDiagram
  participant Attn as Attention.forward
  participant Rope as RoPE
  participant QKN as QK Norm
  Attn->>Attn: split qkv
  alt qk_norm_type == pre_rope
    Attn->>QKN: apply_qk_norm(q,k) pre-RoPE
  end
  alt rope not fused and position_ids provided
    Attn->>Rope: apply_rope(q,k,v)
    alt qk_norm_type == post_rope
      Attn->>QKN: apply_qk_norm(q,k) post-RoPE
    end
  end
  Attn-->>Attn: proceed with attention scores
sequenceDiagram
  participant App
  participant HY as HunYuanMoEV1ForCausalLM
  participant Core as HunYuanModel
  participant Layer as DecoderLayer(MoE/MLP, Attn)
  App->>HY: forward(input_ids, position_ids, attn_metadata)
  HY->>Core: forward(...)
  loop for each layer
    Core->>Layer: attention + MLP/MoE
    Layer-->>Core: hidden_states
  end
  Core-->>HY: final hidden_states
  HY-->>App: logits / context_logits

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested labels

Community want to contribute

Suggested reviewers

  • shaharmor98
  • nv-yilinf
  • byshiue
  • chuangz0
  • Superjomn

📜 Recent review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 27fc351 and 0ac773c.

📒 Files selected for processing (9)
  • examples/llm-api/quickstart_advanced.py (2 hunks)
  • tensorrt_llm/_torch/attention_backend/interface.py (3 hunks)
  • tensorrt_llm/_torch/model_config.py (1 hunks)
  • tensorrt_llm/_torch/models/__init__.py (2 hunks)
  • tensorrt_llm/_torch/models/modeling_hunyuan_moe.py (1 hunks)
  • tensorrt_llm/_torch/modules/attention.py (8 hunks)
  • tensorrt_llm/_torch/pyexecutor/config_utils.py (1 hunks)
  • tensorrt_llm/functional.py (1 hunks)
  • tensorrt_llm/llmapi/tokenizer.py (2 hunks)


@sorenwu sorenwu closed this Aug 12, 2025
