
Conversation

@Superjomn
Collaborator

@Superjomn Superjomn commented Dec 5, 2025

Summary by CodeRabbit

Release Notes

  • New Features

    • Added automatic HMAC key generation and injection into RPC worker initialization parameters for distributed communication.
  • Tests

    • Added test coverage for HMAC key generation and validation in RPC proxy initialization.


Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
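
For reference, some representative invocations built only from the flags documented above (the pipeline ID and stage names here are placeholders):

/bot run
/bot run --disable-fail-fast --skip-test
/bot run --reuse-test 27125 --stage-list "A10-PyTorch-1"
/bot run --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp"
/bot skip --comment "Docs-only change"
/bot reuse-pipeline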

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

Signed-off-by: Superjomn <[email protected]>
@Superjomn Superjomn requested a review from a team as a code owner December 5, 2025 13:09
@Superjomn Superjomn requested a review from hchings December 5, 2025 13:09
@Superjomn
Collaborator Author

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #27125 [ run ] triggered by Bot. Commit: ceebe9f

@coderabbitai
Contributor

coderabbitai bot commented Dec 5, 2025

📝 Walkthrough

Walkthrough

The changes introduce HMAC-based authentication to the RPC communication infrastructure by generating a 32-byte HMAC key during executor initialization and threading it through RPC client/server constructors and worker initialization payloads to enable encrypted RPC channels.
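
A minimal sketch of the executor-side pattern the walkthrough describes. Aside from os.urandom(32), worker_kwargs, and hmac_key, the class and attribute names below are simplified stand-ins, not the exact TensorRT-LLM code:

import os
from typing import Optional

class RPCClientStub:
    """Stand-in for the real RPCClient; records only what matters here."""
    def __init__(self, address: str, hmac_key: Optional[bytes] = None):
        self.address = address
        # HMAC protection is opt-in: enabled only when a key is present.
        self.use_hmac_encryption = hmac_key is not None
        self.hmac_key = hmac_key

class ExecutorSketch:
    def __init__(self, rpc_addr: str):
        # 32 bytes = 256 bits, drawn from the OS CSPRNG.
        self.hmac_key: bytes = os.urandom(32)
        self.rpc_client = RPCClientStub(rpc_addr, hmac_key=self.hmac_key)

    def build_worker_kwargs(self, **worker_kwargs) -> dict:
        # The same key object is threaded into every worker's init payload.
        worker_kwargs["hmac_key"] = self.hmac_key
        return worker_kwargs

executor = ExecutorSketch("ipc:///tmp/trtllm_rpc")
assert executor.build_worker_kwargs()["hmac_key"] is executor.hmac_key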

Changes

Cohort / File(s) Summary
HMAC Key Generation & Executor Initialization
tensorrt_llm/executor/rpc_proxy_mixin.py, tensorrt_llm/executor/ray_executor.py
Generates 32-byte HMAC key via os.urandom(32), passes to RPCClient constructor, and injects into worker_kwargs. Adds import os.
Worker Initialization with HMAC
tensorrt_llm/executor/ray_gpu_worker.py, tensorrt_llm/executor/rpc_proxy.py
Adds hmac_key: Optional[bytes] parameter to worker constructors; injects key into worker initialization payloads and forwards to init_rpc_worker.
RPC Client/Server Configuration
tensorrt_llm/executor/rpc_client.py, tensorrt_llm/executor/rpc_server.py, tensorrt_llm/executor/rpc_worker.py
Updates RPC client/server instantiation to conditionally enable HMAC encryption based on presence of hmac_key parameter.
RPC Worker Mixin
tensorrt_llm/executor/rpc_worker_mixin.py
Extends init_rpc_worker signature with hmac_key: Optional[bytes] parameter; stores key and passes to RPCServer during RPC server creation.
Test Coverage
tests/unittest/executor/test_rpc_proxy.py
Adds test_hmac_key_generation to verify HMAC key generation, presence in worker_kwargs, object identity, and successful RPC generation with auto-generated key.

Sequence Diagram(s)

sequenceDiagram
    participant Executor as Executor/Mixin
    participant Worker as RayGPUWorker
    participant RPC as RPC Layer
    participant Server as RPCServer

    Executor->>Executor: Generate hmac_key = os.urandom(32)
    Executor->>Executor: Store self.hmac_key
    Executor->>Executor: Inject hmac_key into worker_kwargs
    
    Executor->>Worker: Create worker with hmac_key in kwargs
    Worker->>Worker: Store hmac_key parameter
    Worker->>RPC: Call init_rpc_worker(rank, rpc_addr, hmac_key)
    
    RPC->>RPC: Store hmac_key
    RPC->>Server: Create RPCServer with hmac_key
    Server->>Server: use_hmac_encryption = (hmac_key is not None)
    Server->>Server: Setup encrypted channel if key present
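
Read bottom-up, the diagram reduces to a small amount of plumbing. A hedged sketch of the worker side, with signatures inferred from the summaries above (the real mixin stores additional state):

from typing import Optional

class RPCServerStub:
    """Stand-in for the real RPCServer."""
    def __init__(self, instance, hmac_key: Optional[bytes] = None):
        self.instance = instance
        # Mirrors the diagram: use_hmac_encryption = (hmac_key is not None).
        self.use_hmac_encryption = hmac_key is not None
        self.hmac_key = hmac_key

class RpcWorkerMixinSketch:
    def init_rpc_worker(self, rank: int, rpc_addr: str,
                        hmac_key: Optional[bytes] = None):
        # Defaulting to None keeps workers without a key on the old,
        # unauthenticated path (backward compatible).
        self.rank = rank
        self.rpc_addr = rpc_addr
        self.hmac_key = hmac_key

    def start_rpc_server(self):
        self.rpc_server = RPCServerStub(self, hmac_key=self.hmac_key)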

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Extra attention needed:
    • tensorrt_llm/executor/rpc_client.py and tensorrt_llm/executor/rpc_server.py: Verify the conditional HMAC encryption logic (use_hmac_encryption=hmac_key) is syntactically correct and handles None values appropriately (see the illustration after this list). Summary mentions potential syntax errors with stray tokens.
    • tensorrt_llm/executor/rpc_worker_mixin.py: Confirm the parameter threading from init_rpc_worker to RPCServer is complete and properly typed throughout the call stack.
    • tests/unittest/executor/test_rpc_proxy.py: Verify the new test adequately covers both key generation and RPC functionality with the injected key.
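
On the first bullet, the None handling reduces to a truthiness question: passing the raw key as use_hmac_encryption=hmac_key stores bytes-or-None where a bool is expected. A minimal illustration:

import os

hmac_key = None
use_hmac_encryption = bool(hmac_key)         # False, but obscures the type
use_hmac_encryption = hmac_key is not None   # False, and explicitly a bool

hmac_key = os.urandom(32)
assert (hmac_key is not None) is True        # truthy bytes -> enabled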

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
  • Description check (⚠️ Warning): The PR description is incomplete; it only contains the template structure without any actual implementation details, rationale, test coverage information, or checklist verification. Resolution: fill in the Description section explaining what HMAC changes are being made and why, document relevant tests in Test Coverage, and verify all PR checklist items.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 33.33%, below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title '[None][fix] enable hmac in RPC' clearly summarizes the main change: enabling HMAC in RPC communication across multiple executor files.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
tensorrt_llm/executor/rpc/rpc_client.py (1)

85-96: Add type annotation for hmac_key parameter.

The hmac_key parameter on line 87 lacks a type annotation, which is inconsistent with other parameters in the constructor. This aligns with the project's practice of using type hints for interface documentation.

     def __init__(self,
                  address: str,
-                 hmac_key=None,
+                 hmac_key: Optional[bytes] = None,
                  timeout: Optional[float] = None,
                  num_workers: int = 4):
tensorrt_llm/executor/rpc_worker_mixin.py (1)

1-10: Missing NVIDIA copyright header.

Per coding guidelines, all TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top.

Add the copyright header at the top of the file:

+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-License-Identifier: Apache-2.0
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 import asyncio
tests/unittest/executor/test_rpc_proxy.py (1)

131-133: Remove extraneous f-string prefix.

The f-string on line 132 has no placeholders, making the f prefix unnecessary.

             logger_debug(
-                f"[Test] HMAC key test passed: RPC communication successful",
+                "[Test] HMAC key test passed: RPC communication successful",
                 color="green")
tensorrt_llm/executor/rpc_proxy_mixin.py (1)

1-16: Missing NVIDIA copyright header.

Per coding guidelines, all TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 68253d9 and ceebe9f.

📒 Files selected for processing (9)
  • tensorrt_llm/executor/ray_executor.py (1 hunks)
  • tensorrt_llm/executor/ray_gpu_worker.py (2 hunks)
  • tensorrt_llm/executor/rpc/rpc_client.py (1 hunks)
  • tensorrt_llm/executor/rpc/rpc_server.py (1 hunks)
  • tensorrt_llm/executor/rpc_proxy.py (1 hunks)
  • tensorrt_llm/executor/rpc_proxy_mixin.py (2 hunks)
  • tensorrt_llm/executor/rpc_worker.py (1 hunks)
  • tensorrt_llm/executor/rpc_worker_mixin.py (2 hunks)
  • tests/unittest/executor/test_rpc_proxy.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: The code developed for TensorRT-LLM should conform to Python 3.8+
Indent Python code with 4 spaces; do not use tabs
Always maintain the namespace when importing in Python, even if only one class or function from a module is used (e.g., use from package.subpackage import foo and then foo.SomeClass() instead of from package.subpackage.foo import SomeClass)
Python filenames should use snake_case (e.g., some_file.py)
Python class names should use PascalCase (e.g., class SomeClass)
Python function and method names should use snake_case (e.g., def my_awesome_function():)
Python local variable names should use snake_case, with prefix k for variable names that start with a number (e.g., k_99th_percentile = ...)
Python global variables should use upper snake_case with prefix G (e.g., G_MY_GLOBAL = ...)
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...)
Avoid shadowing variables declared in an outer scope in Python
Initialize all externally visible members of a Python class in the constructor
For Python interfaces that may be used outside a file, prefer docstrings over comments
Python comments should be reserved for code within a function, or interfaces that are local to a file
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx
Python attributes and variables can be documented inline with type and description (e.g., self.x = 5 followed by """<type>: Description of 'x'""" )
Avoid using reflection in Python when functionality can be easily achieved without reflection
When using try-except blocks in Python, limit the except clause to the smallest set of specific errors possible instead of catching all exceptions
When using try-except blocks in Python to handle multiple possible variable types (duck-typing), keep the body of the try as small as possible and use the else block to implement the logic

Files:

  • tests/unittest/executor/test_rpc_proxy.py
  • tensorrt_llm/executor/rpc_proxy_mixin.py
  • tensorrt_llm/executor/rpc_worker.py
  • tensorrt_llm/executor/ray_executor.py
  • tensorrt_llm/executor/rpc_proxy.py
  • tensorrt_llm/executor/rpc_worker_mixin.py
  • tensorrt_llm/executor/rpc/rpc_client.py
  • tensorrt_llm/executor/rpc/rpc_server.py
  • tensorrt_llm/executor/ray_gpu_worker.py
**/*.{cpp,h,cu,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

All TensorRT-LLM Open Source Software code files should contain an NVIDIA copyright header that includes the current year at the top

Files:

  • tests/unittest/executor/test_rpc_proxy.py
  • tensorrt_llm/executor/rpc_proxy_mixin.py
  • tensorrt_llm/executor/rpc_worker.py
  • tensorrt_llm/executor/ray_executor.py
  • tensorrt_llm/executor/rpc_proxy.py
  • tensorrt_llm/executor/rpc_worker_mixin.py
  • tensorrt_llm/executor/rpc/rpc_client.py
  • tensorrt_llm/executor/rpc/rpc_server.py
  • tensorrt_llm/executor/ray_gpu_worker.py
🧬 Code graph analysis (2)
tests/unittest/executor/test_rpc_proxy.py (2)
tests/unittest/executor/test_ipc.py (1)
  • test_hmac_key_generation (262-278)
tensorrt_llm/llmapi/utils.py (1)
  • logger_debug (106-120)
tensorrt_llm/executor/ray_gpu_worker.py (1)
tensorrt_llm/executor/rpc_worker_mixin.py (1)
  • init_rpc_worker (28-39)
🪛 Ruff (0.14.7)
tests/unittest/executor/test_rpc_proxy.py

132-132: f-string without any placeholders

Remove extraneous f prefix

(F541)

tensorrt_llm/executor/rpc_worker_mixin.py

30-30: Avoid specifying long messages outside the exception class

(TRY003)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (13)
tensorrt_llm/executor/rpc/rpc_server.py (1)

108-114: LGTM!

The conditional enabling of HMAC encryption based on self._hmac_key presence is correct. The key is properly passed in the address tuple and the use_hmac_encryption flag appropriately reflects whether encryption should be enabled.

tensorrt_llm/executor/rpc_worker.py (1)

158-161: LGTM!

The HMAC key is correctly extracted from kwargs and passed to RPCServer. Using kwargs.get("hmac_key") provides a safe default of None when the key is not present, maintaining backward compatibility.

tensorrt_llm/executor/ray_executor.py (1)

84-88: LGTM!

The HMAC key injection follows the same pattern as rpc_proxy.py, ensuring consistency across executor implementations. The key is properly injected after init_rpc_executor() generates it and before workers are created with create_workers().

tensorrt_llm/executor/rpc/rpc_client.py (1)

108-114: LGTM!

The conditional enabling of HMAC encryption mirrors the server-side implementation in rpc_server.py. The key is correctly passed in the address tuple and use_hmac_encryption is set based on key presence.

tensorrt_llm/executor/rpc_proxy.py (1)

51-52: Verify HMAC key generation and injection placement in init_rpc_executor().

The pattern of generating an HMAC key and injecting it into worker_kwargs before launching workers is architecturally sound for authenticated RPC communication. However, this could not be fully verified from the provided context. Confirm that:

  1. init_rpc_executor() properly generates and stores self.hmac_key
  2. Lines 51-52 correctly assign self.hmac_key to worker_kwargs['hmac_key']
  3. The file includes the required NVIDIA copyright header with current year
  4. All naming conventions follow coding guidelines (snake_case for variables, PascalCase for classes)
tensorrt_llm/executor/rpc_worker_mixin.py (2)

28-39: LGTM! HMAC key parameter and storage look correct.

The hmac_key parameter is properly typed as Optional[bytes] with a default of None, maintaining backward compatibility. The key is correctly stored as an instance attribute for later use in start_rpc_server.


41-47: LGTM! HMAC key correctly passed to RPCServer.

The hmac_key is properly forwarded to the RPCServer constructor, which aligns with the RPCServer.__init__ signature shown in the relevant code snippets.

tests/unittest/executor/test_rpc_proxy.py (2)

98-109: Good test coverage for HMAC key generation and validation.

The test properly verifies that the HMAC key is automatically generated and has the expected 32-byte length, which matches the os.urandom(32) call in rpc_proxy_mixin.py.


111-118: Good identity verification for key consistency.

Using is to verify that both references point to the same object is the correct approach here, ensuring no accidental key duplication occurs during propagation.
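
Taken together, the two checks above condense to a few assertions, assuming a proxy object exposing hmac_key and worker_kwargs (attribute names follow the summaries here, not necessarily the real test):

def check_hmac_key(proxy):
    # 32 bytes matches the os.urandom(32) call in rpc_proxy_mixin.py.
    assert isinstance(proxy.hmac_key, bytes)
    assert len(proxy.hmac_key) == 32
    # Identity, not equality: proves the key was propagated rather than
    # regenerated on its way into the worker init payload.
    assert proxy.worker_kwargs["hmac_key"] is proxy.hmac_key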

tensorrt_llm/executor/rpc_proxy_mixin.py (1)

31-34: LGTM! Secure HMAC key generation and client initialization.

Using os.urandom(32) is the correct approach for generating cryptographically secure random bytes for HMAC keys. The 32-byte (256-bit) key size is appropriate for HMAC-SHA256. The key is properly passed to RPCClient for establishing authenticated RPC channels.
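
For context, 32 bytes matches the HMAC-SHA256 digest size, and the standard library provides constant-time verification. A generic sign/verify round trip (plain Python, not the TensorRT-LLM channel code):

import hashlib
import hmac
import os

key = os.urandom(32)                  # 256-bit shared secret
message = b"rpc-payload"
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # compare_digest resists timing attacks on the comparison.
    return hmac.compare_digest(expected, tag)

assert verify(key, message, tag)
assert not verify(os.urandom(32), message, tag)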

tensorrt_llm/executor/ray_gpu_worker.py (3)

159-172: LGTM! HMAC key parameter correctly added to RayGPUWorker.

The hmac_key parameter is properly typed as Optional[bytes] with a default of None, maintaining backward compatibility with existing code that doesn't use HMAC authentication.


192-196: LGTM! HMAC key correctly forwarded to init_rpc_worker.

The hmac_key is properly passed to init_rpc_worker, which aligns with the signature in rpc_worker_mixin.py.


40-74: Verify HMAC key propagation in RayWorkerWrapper.

The review comment references an AI summary indicating that RayWorkerWrapper.__init__ should accept and propagate hmac_key, but the provided code snippet at lines 40-74 shows no hmac_key parameter. If RayWorkerWrapper is used to instantiate workers that need HMAC-enabled RPC, this key should be passed through worker_kwargs to the underlying worker.
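
If the wrapper does need to forward the key, the least invasive shape (a hypothetical sketch; the reviewed snippet showed no such parameter) is to leave hmac_key inside worker_kwargs and let the wrapped worker consume it:

class RayWorkerWrapperSketch:
    def __init__(self, worker_cls, **worker_kwargs):
        # hmac_key rides along in worker_kwargs untouched; the wrapped
        # worker's __init__ (cf. RayGPUWorker above) accepts it directly.
        self.worker = worker_cls(**worker_kwargs)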

@Superjomn Superjomn enabled auto-merge (squash) December 5, 2025 13:48
@Superjomn
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #27181 [ run ] triggered by Bot. Commit: ceebe9f

@tensorrt-cicd
Collaborator

PR_Github #27125 [ run ] completed with state ABORTED. Commit: ceebe9f
LLM/main/L0_MergeRequest_PR #20695 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd
Collaborator

PR_Github #27181 [ run ] completed with state SUCCESS. Commit: ceebe9f
/LLM/main/L0_MergeRequest_PR pipeline #20743 completed with status: 'SUCCESS'

@Superjomn Superjomn merged commit e4c7078 into NVIDIA:main Dec 7, 2025
8 of 9 checks passed
MinaHuai pushed a commit to davidmlw/TensorRT-LLM that referenced this pull request Dec 10, 2025
usberkeley pushed a commit to usberkeley/TensorRT-LLM that referenced this pull request Dec 11, 2025
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 11, 2025
codego7250 pushed a commit to codego7250/TensorRT-LLM that referenced this pull request Dec 13, 2025
@Superjomn Superjomn deleted the enable-hmac-in-rpc branch December 13, 2025 13:58