
Conversation


@MrGeva MrGeva commented Aug 18, 2025

  • Relaxed the relative performance threshold vs. the PyTorch backend to 30% (see the sketch below)
  • Relaxed the allowed extra memory consumption to 2700 MB (was 2500 MB)
  • Removed the redundant assert on the post-forward-pass memory size; any memory decrease > 0 after the forward pass is now accepted
  • Added documentation
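
A minimal sketch of the relaxed checks, with names taken from the test walkthrough below (values are in MB; the authoritative code lives in tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py):

# Relaxed constants; names assumed from the PR walkthrough, previous values noted.
EXTRA_CONSUMPTION_MB = 2700          # allowed unexplained extra memory (was 2500)
BACKEND_RELATIVE_TOLERANCE = 0.3     # perf tolerance vs. the PyTorch backend (was 0.2)

def validate_memory_reduction(free_mem_pre_mb: float, free_mem_post_mb: float) -> float:
    """Post-forward-pass check: any memory decrease > 0 is accepted."""
    memory_reduction = free_mem_pre_mb - free_mem_post_mb
    assert memory_reduction > 0, (
        f"Expected memory reduction during forward pass, got {memory_reduction}MB"
    )
    return memory_reduction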

Summary by CodeRabbit

  • Tests
    • Adjusted GPU memory validation thresholds in benchmarking tests to account for higher temporary usage (2.7 GB), refining acceptable free-memory ranges and pass/fail criteria.
  • Documentation
    • Expanded test descriptions to clarify evaluated memory metrics and validation checks.
  • Chores
    • No user-facing or API changes; updates limited to tests and their documentation.

Description

Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
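
For example, an illustrative invocation combining the documented options (not a prescribed one):

/bot run --disable-fail-fast --gpu-type "H100_PCIe"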

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.


coderabbitai bot commented Aug 18, 2025

📝 Walkthrough

Walkthrough

Update to a unit test: adjust the memory expectation constant (extra_consumption_mb 2500→2700), change the post-pass free-memory range computation to equal the pre-pass range, rename the validation variable to memory_reduction and update its message text, and change the default backend_relative_tolerance from 0.2 to 0.3. The docstring is expanded to document the parsed metrics and checks.

Changes

Cohort / File(s) Summary of Changes
Test file edits
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
- Increase extra_consumption_mb 2500 → 2700 in calculate_expected_kv_cache_metrics(free_mem_ratio).
- Simplify the post-pass free-memory range to equal the pre-pass range (remove offsets; see the sketch after this list).
- Rename memory_consumed → memory_reduction in validate_kv_cache_metrics_dynamic; update the assertion message and print.
- Change the default backend_relative_tolerance 0.2 → 0.3 in the trtllm_bench_unified_comparison signature.
- Expand the test_trtllm_bench_backend_comparison docstring to describe the parsed metrics (current_cache_size, free_mem_pre_mb, free_mem_post_mb, new_cache_size), extra_consumption_mb = 2700, and the validation checks.
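
A rough sketch of the adjusted expectation, assuming total_mem_mb and estimated_model_size_mb come from the test's GPU queries (the post-pass range now simply reuses the pre-pass range):

def calculate_expected_free_mem_ranges(total_mem_mb: float, estimated_model_size_mb: float,
                                       extra_consumption_mb: float = 2700):
    """Expected free-memory windows (MB) before and after the forward pass."""
    expected_free_mem_range = (
        total_mem_mb - estimated_model_size_mb - extra_consumption_mb,
        total_mem_mb - estimated_model_size_mb,
    )
    # Post-pass range equals the pre-pass range (offsets removed in this PR).
    return expected_free_mem_range, expected_free_mem_range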

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

Documentation

Suggested reviewers

  • chzblych
  • niukuo
  • yilin-void
  • pamelap-nvidia



MrGeva commented Aug 18, 2025

/bot run


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (3)

282-286: Promote extra_consumption_mb to a named constant for clarity and consistency

Relaxing the threshold to 2700 MB looks fine. To avoid drift between the code and its documentation and to make the intent explicit, consider using an UPPER_SNAKE_CASE constant and referencing it in the range calculation.

Apply this diff:

-            extra_consumption_mb = 2700
-            expected_free_mem_range = (
-                total_mem_mb - estimated_model_size_mb - extra_consumption_mb,
+            EXTRA_CONSUMPTION_MB = 2700
+            expected_free_mem_range = (
+                total_mem_mb - estimated_model_size_mb - EXTRA_CONSUMPTION_MB,
                 total_mem_mb - estimated_model_size_mb,
             )

609-628: Docstring formulas: fix units (MB→bytes) and align metric names/signs with code

Great added documentation. A few nits to prevent confusion:

  • Use the same metric names as the code (free_mem_pre_mb/free_mem_post_mb).
  • Step 3 should be pre - post, not post - pre.
  • Include MB→bytes conversion in the new_cache_size formula to match the parser/validator.

Apply this diff:

-    1. free_mem_pre_fw_pass is in:
-       [Total mem - expected_model_size - extra_consumption, Total mem - expected_model_size]
-    2. free_mem_post_fw_pass is in:
-       [Total mem - expected_model_size  - extra_consumption - 1000, Total mem - expected_model_size - 500]
-    3. free_mem_post_fw_pass -  free_mem_pre_fw_pass < 5000
-    4. expected_new_cache = free_mem_post * free_mem_ratio + current_cache_size
+    1. free_mem_pre_mb is in:
+       [Total mem - expected_model_size - extra_consumption_mb, Total mem - expected_model_size]
+    2. free_mem_post_mb is in:
+       [Total mem - expected_model_size - extra_consumption_mb - 1000, Total mem - expected_model_size - 500]
+    3. 0 < free_mem_pre_mb - free_mem_post_mb < 5000
+    4. expected_new_cache = free_mem_post_mb * 1024 * 1024 * free_mem_ratio + current_cache_size
        cache_size_diff = abs(new_cache_size - expected_new_cache) / expected_new_cache
        assert cache_size_diff <= 0.01

-    extra_consumption_mb = 2700 - this is unexplained memory consumption to be investigated.
+    extra_consumption_mb = 2700 — unexplained memory consumption (see https://github.com/NVIDIA/TensorRT-LLM/issues/6335).
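
A small worked example of the cache-size check with the MB→bytes conversion applied (numbers are illustrative only):

free_mem_ratio = 0.8
current_cache_size = 4 * 1024**3                 # bytes, cache size before resize
free_mem_post_mb = 60_000                        # MB, free memory after the forward pass
expected_new_cache = free_mem_post_mb * 1024 * 1024 * free_mem_ratio + current_cache_size
new_cache_size = expected_new_cache * 1.005      # pretend parsed value, 0.5% off
cache_size_diff = abs(new_cache_size - expected_new_cache) / expected_new_cache
assert cache_size_diff <= 0.01                   # within the 1% relative tolerance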

1-6: Missing NVIDIA copyright header

Per repo guidelines, Python sources should include the NVIDIA copyright header (current year). This file is missing it.

Do you want me to add the standard header used in this repo to the file prolog?

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 69ff32f and b864a0c.

📒 Files selected for processing (1)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+
Python indentation: 4 spaces, no tabs
Maintain module namespace in imports (from package.subpackage import foo; then use foo.SomeClass())
Python file names use snake_case
Python class names use PascalCase
Python functions/methods and local variables use snake_case; variables starting with a number get k_ prefix (e.g., k_99th_percentile)
Global variables use G_ prefixed UPPER_SNAKE_CASE (e.g., G_MY_GLOBAL)
Constants use UPPER_SNAKE_CASE in Python
Avoid shadowing variables from outer scopes in Python
Initialize all externally visible members of a Python class in init
Prefer docstrings for interfaces used outside a file; comments for local code
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes/variables inline with short docstrings
Avoid reflection when simple alternatives exist (e.g., prefer explicit parameters over dict(**locals()))
In try/except, catch the narrowest exceptions possible
For duck-typing with try/except, keep try body minimal and put logic in else

Files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py
🧠 Learnings (1)
📚 Learning: 2025-07-28T17:06:08.621Z
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

Applied to files:

  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py

@MrGeva MrGeva changed the title from "https://nvbugspro.nvidia.com/bug/5458798: Relaxed test threshold, added documentation" to "[https://nvbugs/5458798][fix]: Relaxed test threshold, added documentation" on Aug 18, 2025
Signed-off-by: Eran Geva <[email protected]>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (1)

1-6: Add the mandatory NVIDIA copyright header.

Per the coding guidelines, prepend the NVIDIA copyright header to all Python sources.

Apply at the very top of the file:

+# Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+
 import json
 import re
 import subprocess
 import tempfile
 from pathlib import Path
🧹 Nitpick comments (6)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (6)

281-286: Increasing extra_consumption_mb to 2700 MB: add a floor to avoid negative ranges.

The widened slack looks fine but can produce a negative lower bound on small GPUs. Clamp to 0 to avoid invalid ranges.

-            expected_free_mem_range = (
-                total_mem_mb - estimated_model_size_mb - extra_consumption_mb,
-                total_mem_mb - estimated_model_size_mb,
-            )
+            lower = max(total_mem_mb - estimated_model_size_mb - extra_consumption_mb, 0)
+            expected_free_mem_range = (
+                lower,
+                total_mem_mb - estimated_model_size_mb,
+            )

Optional verification: please run on the smallest CI GPU type we support to ensure the lower bound is non-negative in practice.


292-294: Post-forward free memory range should allow a small additional drop to reduce flakiness.

Making the post-pass range equal the pre-pass range tightens the check. Allowing a modest cushion (e.g., 0.5–1.0 GB) better reflects transient allocator behavior during forward passes.

-            expected_free_mem_post_range = expected_free_mem_range
+            # Allow a small additional drop during forward pass to reduce flakiness
+            expected_free_mem_post_range = (
+                max(expected_free_mem_range[0] - 1024, 0),  # -1 GB
+                max(expected_free_mem_range[1] - 512, 0),   # -0.5 GB
+            )

347-354: Permit small allocator jitter in “memory reduction” check.

A strict > 0 can be flaky due to allocator noise and async telemetry. Allow a small negative jitter (e.g., -50 MB) while still flagging true regressions.

-    if free_mem_pre and free_mem_post:
-        memory_reduction = free_mem_pre - free_mem_post
-        assert memory_reduction > 0, (
-            f"Expected memory reduction during forward pass, got {memory_reduction}MB"
-        )
-        print(f"  ✅ Memory reduction during forward pass: {memory_reduction}MB")
+    if free_mem_pre and free_mem_post:
+        memory_reduction = free_mem_pre - free_mem_post
+        min_reduction_mb = -50  # allow small allocator jitter
+        assert memory_reduction > min_reduction_mb, (
+            f"Expected memory reduction during forward pass (allowing jitter {min_reduction_mb}MB), "
+            f"got {memory_reduction}MB"
+        )
+        print(f"  ✅ Memory reduction during forward pass: {memory_reduction}MB")

605-624: Docstring: align terminology with code keys and clarify units.

Use the exact metric names and note the MB→bytes conversion in new_cache_size to avoid confusion.

-    """Test that compares autodeploy backend performance against pytorch backend
-    with given relative and absolute thresholds.
-
-    It also checks the memory footprint of the autodeploy backend by parsing the
-    log output from the resize_kv_cache function and extracting the following metrics:
-    current_cache_size - the cache size before resize
-    free_mem_pre_mb - the free memory before forward pass
-    free_mem_post_mb - the free memory after forward pass
-    new_cache_size - the cache size after resize
-
-    The following checks are performed:
-    1. free_mem_pre_fw_pass and free_mem_post_fw_pass are in:
-       [Total mem - expected_model_size - extra_consumption, Total mem - expected_model_size]
-    2. memory_reduction = free_mem_pre_fw_pass - free_mem_post_fw_pass > 0
-    3. expected_new_cache = free_mem_post * free_mem_ratio + current_cache_size
-       cache_size_diff = abs(new_cache_size - expected_new_cache) / expected_new_cache
-       assert cache_size_diff <= 0.01
-
-    extra_consumption_mb = 2700 - this is unexplained memory consumption to be investigated.
-    """
+    """Test autodeploy vs. PyTorch backend with performance and memory-footprint validation.
+
+    Parses resize_kv_cache logs and extracts:
+      - current_cache_size (bytes): cache size before resize
+      - free_mem_pre_mb (MB): free memory before forward pass
+      - free_mem_post_mb (MB): free memory after forward pass
+      - new_cache_size (bytes): cache size after resize
+
+    Checks performed:
+      1) free_mem_pre_mb and free_mem_post_mb are in:
+         [TotalMB - expected_model_size_mb - extra_consumption_mb, TotalMB - expected_model_size_mb]
+      2) memory_reduction = free_mem_pre_mb - free_mem_post_mb > 0 (allowing small allocator jitter)
+      3) new_cache_size matches:
+           expected_new_cache = free_mem_post_mb * 1024 * 1024 * free_mem_ratio + current_cache_size
+         with relative error <= 1%
+
+    Note: extra_consumption_mb = 2700 is a temporary allowance pending investigation.
+    """

409-414: require_metrics=False path still asserts — make it non-fatal as intended.

The docstring says “just warn,” but the else-branches assert False. Return a neutral value instead so callers can opt into non-fatal behavior.

-        else:
-            print(f"ℹ️ {message}")
-            assert False, "KV cache metrics are missing"
+        else:
+            print(f"ℹ️ {message}")
+            return None, None
@@
-        else:
-            print(f"ℹ️ KV cache validation skipped - {message}")
-            assert False, "KV cache metrics are missing"
+        else:
+            print(f"ℹ️ KV cache validation skipped - {message}")
+            return None, None

Also applies to: 427-429


67-71: Use sys.executable instead of hardcoding "python"/"python3".

This ensures the subprocess uses the same interpreter as pytest, avoids PATH issues, and keeps consistency.

Example change (outside selected ranges):

import sys

# In run_benchmark()
cmd = [
    sys.executable,
    "-m",
    "tensorrt_llm.commands.bench",
    # ...
]

# In prepare_dataset()
command = [
    sys.executable,
    str(dataset_tool),
    "--stdout",
    # ...
]

Also applies to: 239-241

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between b864a0c and 9b52c33.

📒 Files selected for processing (1)
  • tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (5 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (2)
tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py (2)

282-282: LGTM: Relaxing extra_consumption_mb to 2700 MB.

Given recent allocator behavior, this bump looks reasonable and matches the docstring note.


448-458: Confirm increase of backend_relative_tolerance to 30%

We raised the default from 20% → 30% in trtllm_bench_unified_comparison (tests/unittest/_torch/auto_deploy/unit/singlegpu/test_ad_trtllm_bench.py:447–454). A quick search in this file found:

  • Default signature now uses backend_relative_tolerance=0.3
  • One explicit call at line 625 relies on the new default
  • Docstring at lines 469–473 doesn’t mention the old value

Please verify that:

  • A 30% tolerance is acceptable given our hardware variability
  • Any documentation or tests outside this file that reference the previous 20% threshold are updated accordingly (e.g., global search for “0.2” or “20%” in your docs/tests)
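
For reference, the kind of relative check this tolerance would gate (a sketch only; the actual comparison in trtllm_bench_unified_comparison may differ in detail):

def check_backend_relative(autodeploy_value: float, pytorch_value: float,
                           backend_relative_tolerance: float = 0.3) -> None:
    """Fails if the autodeploy result deviates from the PyTorch baseline by more than the tolerance."""
    relative_diff = abs(autodeploy_value - pytorch_value) / pytorch_value
    assert relative_diff <= backend_relative_tolerance, (
        f"Relative difference {relative_diff:.2%} exceeds tolerance {backend_relative_tolerance:.0%}"
    )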


MrGeva commented Aug 18, 2025

/bot run

@tensorrt-cicd

PR_Github #15619 [ run ] triggered by Bot

@suyoggupta

/bot run

@tensorrt-cicd

PR_Github #15641 [ run ] triggered by Bot

@tensorrt-cicd

PR_Github #15619 [ run ] completed with state ABORTED

@tensorrt-cicd

PR_Github #15641 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11775 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

@MrGeva MrGeva changed the title from "[https://nvbugs/5458798][fix]: Relaxed test threshold, added documentation" to "[https://nvbugs/5458798][fix] Relaxed test threshold, added documentation" on Aug 19, 2025
@suyoggupta suyoggupta merged commit 636c622 into NVIDIA:main Aug 19, 2025
4 of 5 checks passed
