
Conversation

@rosenrodt (Collaborator) commented Oct 6, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added SwiGLU-based post-processing support for GEMM operations with configurable parameters.
    • Implemented multi-tile configuration support for MoE runners with dynamic tile selection.
    • Enabled multiple MoE backend support (TRTLLM and CUTLASS) for improved compatibility.
  • Improvements

    • Enhanced kernel scheduling with removal of runtime restrictions and refined validation logic.
    • Optimized per-GEMM token allocation for more efficient memory utilization.
    • Improved autotuning with realistic tensor generation for more accurate performance measurements.
  • Tests

    • Expanded test coverage for multi-backend MoE operations and backend-specific configurations.

Description

  • More performant kernels. mxfp8 x mxfp4 sees the largest boost from the additional kernel configs; other precisions may see a slight perf increase from optimized TMA loads/stores.
  • The autotuner now chooses from multiple runner instances with varying tileN sizes, rather than from a single heuristically determined tileN. WARNING: this increases autotuning time.
  • The autotuner must tune with non-zero-valued tensors; randint(-5, 5) appears to report benchmark results more accurately than randn() (see the sketch below).
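
As a rough illustration of the last point, here is a minimal sketch of generating realistic tuning inputs; the helper name, shapes, and dtype are hypothetical and not the actual autotuner code:

```python
import torch

def make_tuning_input(shape, dtype=torch.bfloat16, device="cuda"):
    # Small non-zero integers give the kernels a realistic dynamic range;
    # per the observation above, randn() inputs skewed benchmark results
    # for these MoE GEMMs.
    return torch.randint(-5, 5, shape, device=device).to(dtype)

# Hypothetical shapes, for illustration only.
x = make_tuning_input((512, 2880))
```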

GPT-OSS-120b TP1

| Concurrency | Baseline TPS/user | Updated cubin TPS/user | Gain over baseline | Updated cubin + autotune tileN with randint() TPS/user | Gain over baseline |
|---|---|---|---|---|---|
| 1 | 404.158 | 406.8676 | 1.01 | 397.50 | 0.98 |
| 4 | 291.7377 | 283.3793 | 0.97 | 292.608 | 1.00 |
| 8 | 216.3125 | 217.2664 | 1.00 | 232.50 | 1.07 |
| 16 | 176.0417 | 177.4137 | 1.01 | 183.84 | 1.04 |
| 32 | 133.4404 | 134.2653 | 1.01 | 140.10 | 1.05 |
| 64 | 100.7758 | 101.5539 | 1.01 | 110.70 | 1.10 |
| 128 | 71.9943 | 72.9801 | 1.01 | 81.49 | 1.13 |
| 256 | 51.6263 | 56.8874 | 1.10 | 55.43 | 1.07 |
| 512 | 33.9069 | 36.7009 | 1.08 | 35.99 | 1.06 |
| 1024 | 18.7652 | 20.2948 | 1.08 | 19.80 | 1.06 |

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only [pytorch, cpp, tensorrt, triton] are supported. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to the ordinary L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
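
For example, a typical invocation that combines several of the options above (stage and GPU names as documented; whether all flags combine in one command is an assumption based on the usage line):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1" --gpu-type "A30, H100_PCIe"

This runs only the listed stages on the listed GPU types and disables fail-fast; as noted above, --stage-list and --gpu-type do not update the GitHub check status.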

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can break top of tree.

@rosenrodt rosenrodt changed the title [None][perf] Update TRTLLM MoE MxFP4 cubins [None][feat] Update TRTLLM MoE MxFP4 cubins Oct 6, 2025
@rosenrodt (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20675 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20675 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15619 completed with status: 'FAILURE'

@rosenrodt rosenrodt force-pushed the update-trtllm-moe-cubins branch from cfba343 to af27ef9 Compare October 6, 2025 16:52
@rosenrodt (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20681 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20681 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #15624 completed with status: 'FAILURE'

@rosenrodt rosenrodt force-pushed the update-trtllm-moe-cubins branch from af27ef9 to 6fd1909 Compare October 7, 2025 01:19
@rosenrodt (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20694 [ run ] triggered by Bot

@rosenrodt (Collaborator, Author)

/bot kill

@tensorrt-cicd (Collaborator)

PR_Github #20735 [ kill ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20694 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15632 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #20735 [ kill ] completed with state SUCCESS
Successfully killed previous jobs for commit 6fd1909

@rosenrodt rosenrodt force-pushed the update-trtllm-moe-cubins branch from 6fd1909 to e94659b Compare October 7, 2025 15:28
@rosenrodt rosenrodt requested review from a team as code owners October 7, 2025 15:28
@rosenrodt rosenrodt requested review from liji-nv and yuxianq October 7, 2025 15:28
@rosenrodt rosenrodt force-pushed the update-trtllm-moe-cubins branch 2 times, most recently from fa1783c to bd830ef Compare October 7, 2025 16:58
@rosenrodt (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20740 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20740 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15672 completed with status: 'FAILURE'

@rosenrodt rosenrodt force-pushed the update-trtllm-moe-cubins branch from bd830ef to 729d3b5 Compare October 8, 2025 03:47
@rosenrodt (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20762 [ run ] triggered by Bot

@rosenrodt (Collaborator, Author)

/bot kill

Commits:

…rtllm batchedGemm config.json
…input for better result
tune tileN for all trtllm moe ops; remove imbalance_factor from interface
reduce test size
disable tuning multiple runners in fp8 block scale moe for now

Signed-off-by: Anthony Chang <[email protected]>
@rosenrodt rosenrodt force-pushed the update-trtllm-moe-cubins branch from 7c6875a to 883887e Compare October 22, 2025 01:13
@longlee0622 (Collaborator)

/bot run --disable-fail-fast

@rosenrodt (Collaborator, Author)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22104 [ run ] triggered by Bot. Commit: 883887e

@tensorrt-cicd (Collaborator)

PR_Github #22105 [ run ] triggered by Bot. Commit: 883887e

@tensorrt-cicd (Collaborator)

PR_Github #22105 [ run ] completed with state ABORTED. Commit: 883887e

@tensorrt-cicd (Collaborator)

PR_Github #22104 [ run ] completed with state SUCCESS. Commit: 883887e
/LLM/main/L0_MergeRequest_PR pipeline #16667 completed with status: 'FAILURE'

@rosenrodt (Collaborator, Author) commented Oct 22, 2025

/bot run --disable-fail-fast

Re-run due to timeout in [A10-PyTorch-1] tests

[2025-10-22T03:49:47.452Z] FAILED A10-PyTorch-1/disaggregated/test_disaggregated.py::test_disaggregated_mixed[TinyLlama-1.1B-Chat-v1.0] - subprocess.CalledProcessError: Command '['python3', '/home/jenkins/agent/workspace/LLM/main/L0_Test-x86_64-Single-GPU/llmVanilla/TensorRT-LLM/src/examples/disaggregated/clients/disagg_client.py', '-c', '/home/jenkins/agent/workspace/LLM/main/L0_Test-x86_64-Single-GPU/llmVanilla/TensorRT-LLM/src/tests/integration/defs/disaggregated/test_configs/disagg_config_mixed.yaml', '-p', '/home/jenkins/agent/workspace/LLM/main/L0_Test-x86_64-Single-GPU/llmVanilla/TensorRT-LLM/src/examples/disaggregated/clients/prompts.json', '--ignore-eos', '--server-start-timeout', '1200']' returned non-zero exit status 1.

[2025-10-22T03:49:47.452Z] FAILED A10-PyTorch-1/test_unittests.py::test_unittests_v2[unittest/executor/test_rpc.py] - AssertionError: failure reported in unittests

@tensorrt-cicd (Collaborator)

PR_Github #22130 [ run ] triggered by Bot. Commit: 883887e

@tensorrt-cicd (Collaborator)

PR_Github #22130 [ run ] completed with state SUCCESS. Commit: 883887e
/LLM/main/L0_MergeRequest_PR pipeline #16688 completed with status: 'FAILURE'

@longlee0622 (Collaborator)

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #22157 [ run ] triggered by Bot. Commit: 883887e

@rosenrodt (Collaborator, Author)

2 failures in [A10-PyTorch-1] tests in the most recent pipeline, but both tests pass locally (at least on B200):

test_trtllm_serve_multimodal_example.py:

[2025-10-22T07:43:59.016Z] FAILED A10-PyTorch-1/test_e2e.py::test_trtllm_serve_multimodal_example - subprocess.CalledProcessError: Command '['python3', '-m', 'pytest', '/home/jenkins/agent/workspace/LLM/main/L0_Test-x86_64-Single-GPU/llmVanilla/TensorRT-LLM/src/tests/unittest/llmapi/apps/_test_trtllm_serve_multimodal_example.py']' returned non-zero exit status 1.
...
[2025-10-22T07:43:56.099Z] RuntimeError: [TensorRT-LLM][ERROR] Assertion failed: The number of context tokens (16399) exceeds the limit value (16384) (../tensorrt_llm/batch_manager/microBatchScheduler.cpp:225)
[2025-10-22T07:43:56.099Z] 1       0x7ff6add0f4ce tensorrt_llm::common::throwRuntimeError(char const*, int, char const*) + 97
[2025-10-22T07:43:56.099Z] 2       0x7ff673ae02ad /usr/local/lib/python3.12/dist-packages/tensorrt_llm/libs/libtensorrt_llm.so(+0x18c52ad) [0x7ff673ae02ad]
[2025-10-22T07:43:56.099Z] 3       0x7ff6bcbcc412 /usr/local/lib/python3.12/dist-packages/tensorrt_llm/bindings.cpython-312-x86_64-linux-gnu.so(+0x3ec412) [0x7ff6bcbcc412]
[2025-10-22T07:43:56.099Z] 4       0x7ff6bcc99341 /usr/local/lib/python3.12/dist-packages/tensorrt_llm/bindings.cpython-312-x86_64-linux-gnu.so(+0x4b9341) [0x7ff6bcc99341]
...
[2025-10-22T07:43:56.099Z] _ ERROR at setup of test_trtllm_serve_examples[bash-curl_chat_client_for_multimodal.sh] _

test_rpc.py

[2025-10-22T07:43:59.017Z] FAILED A10-PyTorch-1/test_unittests.py::test_unittests_v2[unittest/executor/test_rpc.py] - AssertionError: failure reported in unittests
...
[2025-10-22T07:43:56.111Z] unittest/A10-PyTorch-1/unittest/executor/test_rpc.py::TestRpcBasics::test_rpc_without_wait_response +++++++++++++++++++++++++++++++++++ Timeout ++++++++++++++++++++++++++++++++++++

My local testing on B200 looks fine:

$ pytest tests/integration/defs/test_e2e.py::test_trtllm_serve_multimodal_example -v -s
...
3 passed, 4 warnings in 89.77s (0:01:29)
PASSED

$ pytest tests/unittest/executor/test_rpc.py -v -s
...
tests/unittest/executor/test_rpc.py::TestRpcBasics::test_rpc_without_wait_response PASSED

@tensorrt-cicd (Collaborator)

PR_Github #22157 [ run ] completed with state SUCCESS. Commit: 883887e
/LLM/main/L0_MergeRequest_PR pipeline #16708 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@longlee0622 longlee0622 enabled auto-merge (squash) October 23, 2025 00:16
@longlee0622 longlee0622 merged commit 8a3b870 into NVIDIA:main Oct 23, 2025
5 checks passed
yufeiwu-nv pushed a commit to yufeiwu-nv/TensorRT-LLM that referenced this pull request Oct 24, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025