
[None][fix] fix runtime error that bf16 input is not quantized to nvfp4 when use bf16 dispatch#8507

Merged
yilin-void merged 1 commit into NVIDIA:main from yilin-void:fix/bf16_dispatch
Oct 30, 2025

Conversation

@yilin-void (Collaborator) commented Oct 20, 2025

Fix a runtime error where the bf16 tensor is not quantized to an nvfp4 tensor when using bf16 dispatch.

Summary by CodeRabbit

  • Refactor
    • Simplified and optimized the fused mixture of experts module for improved efficiency when processing FP4 quantized tensors. Consolidated handling of quantized and non-quantized inputs into a more streamlined code path.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
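For instance, combining the flags documented above, a PR comment that runs a single test stage with fail-fast disabled (the stage name here is the example one from the help text) would look like:

```
/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast
```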

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@yilin-void yilin-void self-assigned this Oct 20, 2025
@yilin-void yilin-void requested a review from a team as a code owner October 20, 2025 13:42
@yilin-void yilin-void requested a review from HuiGao-NV October 20, 2025 13:42
@yilin-void (Collaborator, Author)

/bot run

@coderabbitai (Contributor) commented Oct 20, 2025

📝 Walkthrough

The forward_chunk function in the nvfp4 branch of fused_moe_wide_ep.py has been refactored to consolidate logic by replacing previous conditional branches (based on use_allgather or use_postquant_alltoall) with a unified structure that handles Fp4QuantizedTensor and non-quantized inputs separately but converges on consistent subsequent processing, eliminating code duplication.

Changes

  • Streamlined nvfp4 forward chunk logic (tensorrt_llm/_torch/modules/fused_moe/fused_moe_wide_ep.py): Refactored forward_chunk's nvfp4 branch to replace the use_allgather/use_postquant_alltoall conditional paths with a unified if-else structure: Fp4QuantizedTensor inputs are unpacked directly, with a swizzle assertion and dimension computation; non-quantized inputs are quantized via torch.ops.trtllm.fp4_quantize. Both paths converge on x_sf reshaping and subsequent processing. Removes previous code duplication while maintaining functional equivalence.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, which is below the required 80.00% threshold. Run @coderabbitai generate docstrings to improve coverage.
  • Description Check ⚠️ Warning: The PR description is largely incomplete against the required template. A brief issue statement is provided, but the Description section does not explain the root cause or the solution, the Test Coverage section is empty, and the PR Checklist items are not individually addressed. The author should explain the root cause of the bf16 quantization issue and how the changes in fused_moe_wide_ep.py address it, list the specific tests that validate the fix, and mark each checklist item as completed or not applicable.
✅ Passed checks (1 passed)
  • Title Check ✅ Passed: The title accurately and concisely captures the main change: ensuring that non-quantized (bf16) inputs in the nvfp4 branch of forward_chunk are properly quantized via torch.ops.trtllm.fp4_quantize before dispatch.

@tensorrt-cicd (Collaborator)

PR_Github #21920 [ run ] triggered by Bot. Commit: 1021aa2

@tensorrt-cicd (Collaborator)

PR_Github #21920 [ run ] completed with state SUCCESS. Commit: 1021aa2
/LLM/main/L0_MergeRequest_PR pipeline #16524 completed with status: 'FAILURE'

@yilin-void (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #21992 [ run ] triggered by Bot. Commit: 2a7b711

@tensorrt-cicd (Collaborator)

PR_Github #21992 [ run ] completed with state SUCCESS. Commit: 2a7b711
/LLM/main/L0_MergeRequest_PR pipeline #16582 completed with status: 'FAILURE'

@yilin-void (Collaborator, Author)

/bot run

@yilin-void (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22278 [ run ] triggered by Bot. Commit: 24c13e0

@tensorrt-cicd (Collaborator)

PR_Github #22277 [ ] completed with state ABORTED. Commit: 24c13e0

@tensorrt-cicd (Collaborator)

PR_Github #22278 [ run ] completed with state FAILURE. Commit: 24c13e0
/LLM/main/L0_MergeRequest_PR pipeline #16797 completed with status: 'FAILURE'

@yilin-void (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22390 [ run ] triggered by Bot. Commit: 09c1e1b

@tensorrt-cicd (Collaborator)

PR_Github #22390 [ run ] completed with state SUCCESS. Commit: 09c1e1b
/LLM/main/L0_MergeRequest_PR pipeline #16874 completed with status: 'FAILURE'

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
@yilin-void (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22616 [ run ] triggered by Bot. Commit: 90158f9

@tensorrt-cicd (Collaborator)

PR_Github #22616 [ run ] completed with state SUCCESS. Commit: 90158f9
/LLM/main/L0_MergeRequest_PR pipeline #17049 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@yilin-void yilin-void changed the title [None][fix] fix runtime error when use bf16 dispatch [None][fix] fix runtime error that will not quantize the bf16 tensor to nvfp4 tensor when use bf16 dispatch Oct 30, 2025
@yilin-void yilin-void changed the title [None][fix] fix runtime error that will not quantize the bf16 tensor to nvfp4 tensor when use bf16 dispatch [None][fix] fix runtime error that bf16 input is not quantized to nvfp4 when use bf16 dispatch Oct 30, 2025
@HuiGao-NV (Collaborator) left a comment

LGTM

@yilin-void yilin-void merged commit 6b755fd into NVIDIA:main Oct 30, 2025
11 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
…p4 when use bf16 dispatch (NVIDIA#8507)

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
…p4 when use bf16 dispatch (NVIDIA#8507)

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
fredricz-20070104 pushed a commit to fredricz-20070104/TensorRT-LLM that referenced this pull request Nov 5, 2025
…p4 when use bf16 dispatch (NVIDIA#8507)

Signed-off-by: Yilin Zhang <18275976+yilin-void@users.noreply.github.com>
Signed-off-by: FredricZ-2007 <226039983+fredricz-20070104@users.noreply.github.com>
