
Conversation

@stnie (Collaborator) commented Dec 17, 2025

Summary by CodeRabbit

  • Documentation
    • Comprehensively updated sampling features documentation with clearer descriptions and reorganized content layout.
    • Documented new sampler_type parameter for explicit backend selection (TorchSampler or TRTLLMSampler).
    • Added detailed explanation of default backend auto-selection behavior for different use cases.
    • Updated all examples demonstrating per-prompt sampling configuration and multi-sampling usage.
    • Simplified beam search prerequisites and setup instructions.


Description

Update docs for sampling

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline, ensuring that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
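
A few illustrative invocations combining the flags above (the pipeline id `12345` is a placeholder, not a real pipeline):

```
/bot run
/bot run --disable-fail-fast --gpu-type "A30, H100_PCIe"
/bot run --reuse-test 12345 --stage-list "A10-PyTorch-1"
/bot run --post-merge --add-multi-gpu-test
```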

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@zhenhuaw-me (Member) left a comment:

Added a few comments as a beginner of sampling :)

@stnie force-pushed the docs/sampler/v1.1 branch from 6186cca to 8340ae6 on January 5, 2026 at 11:15
@stnie stnie marked this pull request as ready for review January 5, 2026 13:46
@stnie stnie requested a review from a team as a code owner January 5, 2026 13:46
@stnie stnie requested review from kaiyux and laikhtewari January 5, 2026 13:46
@coderabbitai bot (Contributor) commented Jan 5, 2026

📝 Walkthrough

Walkthrough

The documentation rewrite reorganizes the sampling features guide around a combined feature table, updates terminology (PyTorch to Torch), replaces the enable_trtllm_sampler guidance with explicit sampler_type parameter selection, explains the default auto-selection behavior, updates the examples for per-prompt sampling parameters, and removes the beam search prerequisites.

Changes

Cohort: Sampling Documentation Rewrite
File(s): docs/source/features/sampling.md
Summary:
Reorganized content into combined feature table (Forward Pass, Sampling Strategies, Sampling Features); updated PyTorch terminology to Torch; replaced enable_trtllm_sampler guidance with explicit sampler_type parameter ("TorchSampler"/"TRTLLMSampler"); added default auto-selection behavior (TRTLLM for Beam Search, Torch otherwise); updated examples with per-prompt SamplingParams; removed speculative decoder unsupported note; removed disable_overlap_scheduler and CUDA Graphs prerequisites from beam search; added per-prompt configuration code blocks.
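
The strategies the rewritten guide covers (temperature, top-k, top-p) can be illustrated with a small self-contained sketch. This is generic sampling logic, not TRT-LLM code — the real samplers (TorchSampler/TRTLLMSampler) operate on tensors, but the pipeline is the same:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=0, top_p=1.0, rng=None):
    """Draw one token index from raw logits via temperature -> top-k -> top-p."""
    rng = rng or random.Random(0)
    # Temperature scaling: values < 1 sharpen the distribution, > 1 flatten it.
    scaled = [x / max(temperature, 1e-6) for x in logits]
    # Numerically stable softmax.
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    ranked = sorted(((i, e / total) for i, e in enumerate(exps)),
                    key=lambda pair: pair[1], reverse=True)
    # Top-k: keep only the k most probable tokens (0 disables the filter).
    if top_k > 0:
        ranked = ranked[:top_k]
    # Top-p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    if top_p < 1.0:
        kept, mass = [], 0.0
        for idx, p in ranked:
            kept.append((idx, p))
            mass += p
            if mass >= top_p:
                break
        ranked = kept
    # Renormalize the surviving candidates and draw.
    norm = sum(p for _, p in ranked)
    r = rng.random() * norm
    for idx, p in ranked:
        r -= p
        if r <= 0.0:
            return idx
    return ranked[-1][0]
```

With `top_k=1` or a near-zero temperature this reduces to greedy decoding, which is why per-prompt `SamplingParams` (as in the updated examples) can make two identical prompts produce different outputs.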

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
  • Description check — ⚠️ Warning. The PR description is minimal and lacks required details: it only states "Update docs for sampling" without explaining the issue, motivation, or specific changes made. Resolution: expand the description to cover what was updated in the sampling documentation, why the changes were necessary, and details about the new sampler backends and API changes (sampler_type parameter, per-prompt configuration, etc.).

✅ Passed checks (2 passed)
  • Title check — ✅ Passed. The title clearly and specifically describes the main change: updating sampling documentation. It follows the required format with JIRA ticket [TRTLLM-8425], type [doc], and a concise summary.
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.


@coderabbitai bot (Contributor) left a comment:

Actionable comments posted: 2

Fix all issues with AI Agents 🤖
In @docs/source/features/sampling.md:
- Line 24: Replace the phrase "long term solution" with the hyphenated compound
adjective "long-term solution" in the sentence that reads "Torch Sampler
currently supports a superset of features of TRTLLM Sampler, and is intended as
the long term solution." Ensure the updated sentence uses "long-term solution"
so the compound adjective correctly modifies "solution."
- Line 3: Replace the incorrect capitalization "Pytorch" with the correct
product name "PyTorch" in the documentation; specifically update the string "The
Pytorch backend supports a wide variety of features, listed below:" to "The
PyTorch backend supports a wide variety of features, listed below:" and scan the
same file for any other occurrences of "Pytorch" to correct them for consistent
capitalization.
🧹 Nitpick comments (1)
docs/source/features/sampling.md (1)

1-41: Clarify the relationship between PyTorch backend and the two samplers.

The opening states the documentation covers the "Pytorch backend," but the "General usage" section introduces two sampling backends: Torch Sampler and TRTLLM Sampler. The distinction between the backend and the samplers could be clearer. Consider explicitly stating whether both samplers are part of the PyTorch backend or if they represent different architectural choices.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b1733d5 and 8340ae6.

📒 Files selected for processing (1)
  • docs/source/features/sampling.md
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: venkywonka
Repo: NVIDIA/TensorRT-LLM PR: 6029
File: .github/pull_request_template.md:45-53
Timestamp: 2025-08-27T17:50:13.264Z
Learning: For PR templates in TensorRT-LLM, avoid suggesting changes that would increase developer overhead, such as converting plain bullets to mandatory checkboxes. The team prefers guidance-style bullets that don't require explicit interaction to reduce friction in the PR creation process.
Learnt from: tongyuantongyu
Repo: NVIDIA/TensorRT-LLM PR: 9655
File: tensorrt_llm/_torch/pyexecutor/sampler.py:3031-3031
Timestamp: 2025-12-12T03:27:18.859Z
Learning: In tensorrt_llm/_torch/pyexecutor/sampler.py, when reviewing code that iterates through requests, ensure it does not convert excessive data into Python lists. Instead, the code should use torch.gather or indexing to gather only the data that will be used in the for loop before converting to Python lists. This minimizes data movement and improves performance.
Learnt from: dcampora
Repo: NVIDIA/TensorRT-LLM PR: 6867
File: tensorrt_llm/_torch/pyexecutor/sampler.py:67-72
Timestamp: 2025-08-13T16:20:37.987Z
Learning: In TensorRT-LLM sampler code, performance is prioritized over additional validation checks. The beam_width helper method intentionally returns the first request's beam_width without validating consistency across all requests to avoid performance overhead from iterating through the entire batch.
📚 Learning: 2025-08-27T15:03:57.149Z
Learnt from: ixlmar
Repo: NVIDIA/TensorRT-LLM PR: 7294
File: tensorrt_llm/_torch/pyexecutor/sampler.py:368-392
Timestamp: 2025-08-27T15:03:57.149Z
Learning: In TensorRT-LLM's sampler.py, int32 usage for softmax_indices and related tensor indexing is intentional and should not be changed to int64. The torch.IntTensor type hint is correct for the sample() function's softmax_indices parameter.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-13T16:20:37.987Z
Learnt from: dcampora
Repo: NVIDIA/TensorRT-LLM PR: 6867
File: tensorrt_llm/_torch/pyexecutor/sampler.py:67-72
Timestamp: 2025-08-13T16:20:37.987Z
Learning: In TensorRT-LLM sampler code, performance is prioritized over additional validation checks. The beam_width helper method intentionally returns the first request's beam_width without validating consistency across all requests to avoid performance overhead from iterating through the entire batch.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
Repo: NVIDIA/TensorRT-LLM PR: 6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-14T15:38:01.771Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: cpp/tensorrt_llm/pybind/thop/bindings.cpp:55-57
Timestamp: 2025-08-14T15:38:01.771Z
Learning: In TensorRT-LLM Python bindings, tensor parameter collections like mla_tensor_params and spec_decoding_tensor_params are kept as required parameters without defaults to maintain API consistency, even when it might affect backward compatibility.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-15T06:46:53.813Z
Learnt from: eopXD
Repo: NVIDIA/TensorRT-LLM PR: 6767
File: cpp/tensorrt_llm/batch_manager/kvCacheManager.cpp:0-0
Timestamp: 2025-08-15T06:46:53.813Z
Learning: In the TensorRT-LLM KV cache manager, SWA (Sliding Window Attention) combined with beam search is currently in a broken/non-functional state and is planned for future rework. During preparatory refactoring phases, code related to SWA+beam search may intentionally remain in a non-working state until the broader rework is completed.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which can contain default `cuda_graph_config` values, so `llm_args` may already have this config before the extra options processing.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-26T09:37:10.463Z
Learnt from: jiaganc
Repo: NVIDIA/TensorRT-LLM PR: 7031
File: tensorrt_llm/bench/dataclasses/configuration.py:90-104
Timestamp: 2025-08-26T09:37:10.463Z
Learning: In TensorRT-LLM's bench configuration, the `get_pytorch_perf_config()` method returns `self.pytorch_config` which is a Dict[str, Any] that can contain default values including `cuda_graph_config`, making the fallback `llm_args["cuda_graph_config"]` safe to use.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-14T15:43:23.107Z
Learnt from: MatthiasKohl
Repo: NVIDIA/TensorRT-LLM PR: 6904
File: tensorrt_llm/_torch/attention_backend/trtllm.py:259-262
Timestamp: 2025-08-14T15:43:23.107Z
Learning: In TensorRT-LLM's attention backend, tensor parameters in the plan() method are assigned directly without validation (dtype, device, contiguity checks). This maintains consistency across all tensor inputs and follows the pattern of trusting callers to provide correctly formatted tensors.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-09-09T09:40:45.658Z
Learnt from: fredricz-20070104
Repo: NVIDIA/TensorRT-LLM PR: 7645
File: tests/integration/test_lists/qa/llm_function_core.txt:648-648
Timestamp: 2025-09-09T09:40:45.658Z
Learning: In TensorRT-LLM test lists, it's common and intentional for the same test to appear in multiple test list files when they serve different purposes (e.g., llm_function_core.txt for comprehensive core functionality testing and llm_function_core_sanity.txt for quick sanity checks). This duplication allows tests to be run in different testing contexts.

Applied to files:

  • docs/source/features/sampling.md
📚 Learning: 2025-08-01T15:14:45.673Z
Learnt from: yibinl-nvidia
Repo: NVIDIA/TensorRT-LLM PR: 6506
File: examples/models/core/mixtral/requirements.txt:3-3
Timestamp: 2025-08-01T15:14:45.673Z
Learning: In TensorRT-LLM, examples directory can have different dependency versions than the root requirements.txt file. Version conflicts between root and examples dependencies are acceptable because examples are designed to be standalone and self-contained.

Applied to files:

  • docs/source/features/sampling.md
🪛 LanguageTool
docs/source/features/sampling.md

[grammar] ~24-~24: Use a hyphen to join words.
Context: ...LLM Sampler, and is intended as the long term solution. One can specify which sam...

(QB_NEW_EN_HYPHEN)

🔇 Additional comments (1)
docs/source/features/sampling.md (1)

45-74: Well-structured examples addressing previous review feedback.

The examples effectively demonstrate both single and per-prompt sampling parameter usage, directly addressing previous reviewer comments about showing per-prompt configuration. The progression from basic usage to advanced per-prompt specification is clear and helpful.

# Sampling
The PyTorch backend supports most of the sampling features that are supported on the C++ backend, such as temperature, top-k and top-p sampling, beam search, stop words, bad words, penalty, context and generation logits, log probability and logits processors

The Pytorch backend supports a wide variety of features, listed below:
coderabbitai bot (Contributor) commented:

⚠️ Potential issue | 🟡 Minor

Fix inconsistent capitalization of PyTorch.

Line 3 uses "Pytorch" but the correct product name is "PyTorch" (capitalized). This should be consistent throughout the documentation.

🔎 Proposed fix
-The Pytorch backend supports a wide variety of features, listed below:
+The PyTorch backend supports a wide variety of features, listed below:
🤖 Prompt for AI Agents
In @docs/source/features/sampling.md around line 3, Replace the incorrect
capitalization "Pytorch" with the correct product name "PyTorch" in the
documentation; specifically update the string "The Pytorch backend supports a
wide variety of features, listed below:" to "The PyTorch backend supports a wide
variety of features, listed below:" and scan the same file for any other
occurrences of "Pytorch" to correct them for consistent capitalization.

* TRTLLM Sampler

The following example prepares two identical prompts which will give different results due to the sampling parameters chosen:
Torch Sampler currently supports a superset of features of TRTLLM Sampler, and is intended as the long term solution. One can specify which sampler to use explicitly with:
coderabbitai bot (Contributor) commented:

⚠️ Potential issue | 🟡 Minor

Apply hyphenation to compound adjective.

The phrase "long term solution" should be hyphenated as "long-term solution" when used as a compound adjective modifying the noun.

🔎 Proposed fix
-Torch Sampler currently supports a superset of features of TRTLLM Sampler, and is intended as the long term solution. One can specify which sampler to use explicitly with:
+Torch Sampler currently supports a superset of features of TRTLLM Sampler, and is intended as the long-term solution. One can specify which sampler to use explicitly with:

🤖 Prompt for AI Agents
In @docs/source/features/sampling.md around line 24, Replace the phrase "long
term solution" with the hyphenated compound adjective "long-term solution" in
the sentence that reads "Torch Sampler currently supports a superset of features
of TRTLLM Sampler, and is intended as the long term solution." Ensure the
updated sentence uses "long-term solution" so the compound adjective correctly
modifies "solution."

@stnie (Collaborator, Author) commented Jan 12, 2026

/bot run --stage-list "A10-Build_Docs"

@tensorrt-cicd (Collaborator):

PR_Github #31593 [ run ] triggered by Bot. Commit: 8340ae6

@tensorrt-cicd (Collaborator):

PR_Github #31593 [ run ] completed with state SUCCESS. Commit: 8340ae6
/LLM/main/L0_MergeRequest_PR pipeline #24432 (Partly Tested) completed with status: 'SUCCESS'

@nv-guomingz (Collaborator) left a comment:

LGTM

@stnie (Collaborator, Author) commented Jan 13, 2026

/bot skip --comment "doc-only change"

@tensorrt-cicd (Collaborator):

PR_Github #31767 [ skip ] triggered by Bot. Commit: 8340ae6

@tensorrt-cicd (Collaborator):

PR_Github #31767 [ skip ] completed with state SUCCESS. Commit: 8340ae6
Skipping testing for commit 8340ae6

stnie added 3 commits January 14, 2026 18:23
Updated the sampling documentation to clearly outline the two available backends: Torch Sampler and TRTLLM Sampler. Added details on default behavior and usage examples for better clarity.

Signed-off-by: Stefan Niebler <[email protected]>
Signed-off-by: Stefan Niebler <[email protected]>
@stnie force-pushed the docs/sampler/v1.1 branch from cd5fbf3 to d8bd34f on January 14, 2026 at 17:23
@stnie (Collaborator, Author) commented Jan 14, 2026

/bot run --stage-list "A10-Build_Docs"

@tensorrt-cicd (Collaborator):

PR_Github #32004 [ run ] triggered by Bot. Commit: d8bd34f

@tensorrt-cicd (Collaborator):

PR_Github #32004 [ run ] completed with state SUCCESS. Commit: d8bd34f
/LLM/main/L0_MergeRequest_PR pipeline #24796 (Partly Tested) completed with status: 'SUCCESS'

