[None][chore] Mass integration of release/0.21 (part5) #6544
Conversation
📝 Walkthrough

This update introduces a local variable for chunked attention size in a CUDA kernel, modifies the calculation of maximum timesteps for shared memory allocation, and expands both documentation and testing for chunked attention and FP8/FP4 support in Llama4 Maverick and Scout models. New integration and end-to-end tests are added.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TestRunner
    participant LLM
    participant CUDA_Kernel
    TestRunner->>LLM: Load FP8/FP4 prequantized model (with/without chunked prefill)
    LLM->>CUDA_Kernel: Run masked_multihead_attention_kernel with params
    CUDA_Kernel->>CUDA_Kernel: Set chunked_attention_size from params
    CUDA_Kernel->>CUDA_Kernel: Calculate max_timesteps = min(timestep, cyclic_kv_cache_len, chunked_attention_size)
    CUDA_Kernel->>CUDA_Kernel: Allocate shared memory for logits buffer
    CUDA_Kernel-->>LLM: Return attention results
    LLM-->>TestRunner: Evaluate on MMLU and GSM8K
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~15–20 minutes

Possibly related PRs
/bot run --disable-fail-fast
PR_Github #13777 [ run ] triggered by Bot
Actionable comments posted: 1
🧹 Nitpick comments (2)
docs/source/release-notes.md (1)
73-77: Minor grammar / clarity fix in known-issue line

Missing blank before parenthesis and an awkward clause make the sentence harder to read.
Proposed tweak:

```diff
-In 0.21, full chunked attention support has been added to make sure LLaMA4 model can functionally run with > 8K seq length, while there is a known performance regression(only affect LLaMA4 model) on Hopper due to this functional enhancement. The root cause of the regression has been identified already and the fix will be part of the future release.
+In 0.21, full chunked-attention support lets the LLaMA-4 model run with sequence lengths > 8 K. However, this introduces a known performance regression (affects LLaMA-4 on Hopper only). The root cause has been identified and a fix is planned for a future release.
```

tests/integration/test_lists/qa/examples_test_list.txt (1)
539-542: 22 K-token chunked-prefill E2E tests should be moved behind a nightly/soak gate

The four new quick-start cases drive >22 K tokens across 8 GPUs and will dominate CI wall-time (expect >15 min per case on Hopper).
Recommend marking them with the existing `@pytest.mark.nightly` (or introducing one) and excluding them from the default `examples` stage. If the intention is perf-tracking rather than functional-regression, hook them into the perf pipeline instead. A sketch of the gating approach follows.
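A minimal sketch of the suggested gate, assuming a `nightly` marker registered in the project's pytest configuration (the marker name and test body here are illustrative, not the actual test):

```python
import pytest

# Hypothetical marker, registered in pytest.ini/conftest.py and excluded
# from the default `examples` CI stage by the test-list configuration.
nightly = pytest.mark.nightly


@nightly
def test_llama4_chunked_prefill_long_prompt_quickstart():
    """Placeholder for the >22K-token, 8-GPU quick-start case."""
    # The real test launches the quick-start example with a long prompt;
    # only the gating mechanism is shown here.
    assert True
```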
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)

- `cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h` (2 hunks)
- `docs/source/performance/perf-overview.md` (5 hunks)
- `docs/source/release-notes.md` (1 hunks)
- `tests/integration/defs/accuracy/references/gsm8k.yaml` (1 hunks)
- `tests/integration/defs/accuracy/references/mmlu.yaml` (1 hunks)
- `tests/integration/defs/accuracy/test_llm_api_pytorch.py` (2 hunks)
- `tests/integration/defs/test_e2e.py` (1 hunks)
- `tests/integration/test_lists/qa/examples_test_list.txt` (2 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,h,hpp,cc,cxx}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.{cpp,h,hpp,cc,cxx}: Closing braces of namespaces should have a comment saying the namespace it closes (e.g., } // namespace foo).
Prefer const or constexpr variables over #defines whenever possible.
A variable that is not modified after its initialization should be declared as const.
Except 0 (only used in comparison for checking signness/existence/emptiness) and nullptr, true, false, all other literals should only be used for variable initialization.
Use the Allman indentation style for braces.
Put the semicolon for an empty for or while loop in a new line.
The statement forming the body of a switch, while, do .. while or for statement shall be a compound statement (use brace-delimited statements).
If and else should always be followed by brace-delimited statements, even if empty or a single statement.
C++ filenames should use camel case with first letter lowercase (e.g., thisIsAFilename.cpp), and all files involved in the compilation of a target must have filenames that are case-insensitive unique.
All types (including class names) are camel case with uppercase first letter (e.g., FooBarClass).
Local variables, methods, and namespaces use camel case with first letter lowercase (e.g., localFooBar).
Non-magic-number global variables that are non-static and not defined in anonymous namespace use camel case prefixed by a lower case 'g' (e.g., gDontUseGlobalFoos).
Non-magic-number global variables that are static or defined in an anonymous namespace use camel case prefixed by a lower case 's' (e.g., sMutableStaticGlobal).
Locally visible static variable uses camel case with lowercase prefix 's' as the first letter of the name (e.g., static std::once_flag sFlag;).
Class member variables use camelcase prefixed with an 'm' (e.g., mNbFooValues). Public member variables do not require the 'm' prefix but it is encouraged for clarity.
Enumerations, global constants, static constants at class-scope and function-scope magic-number/literal constants are uppercase snake_case...
Files:
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h
**/*.{h,hpp}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
Use a preprocessor guard in header files. The guard name must have prefix TRTLLM_ followed by the filename, all in caps, and no trailing underscore.
Files:
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h
**/*.{cpp,h,hpp,cc,cxx,cu,py}
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
All TensorRT-LLM Open Source Software code should contain an NVIDIA copyright header that includes the current year. The block should be prepended to the top of all files, including .cpp, .h, .cu, .py, and any other source files which are compiled or interpreted.
Files:
- `cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h`
- `tests/integration/defs/test_e2e.py`
- `tests/integration/defs/accuracy/test_llm_api_pytorch.py`
**/*.py
📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)
**/*.py: Python code should conform to Python 3.8+.
Indent Python code with 4 spaces. Do not use tabs.
Always maintain the namespace when importing in Python, even if only one class or function from a module is used.
Python filenames should use snake_case (e.g., some_file.py).
Python classes should use PascalCase (e.g., class SomeClass).
Python functions and methods should use snake_case (e.g., def my_awesome_function():).
Python local variables should use snake_case. Prefix k for variable names that start with a number (e.g., k_99th_percentile = ...).
Python global variables should use upper snake_case and prefix G (e.g., G_MY_GLOBAL = ...).
Python constants should use upper snake_case (e.g., MY_CONSTANT = ...).
Avoid shadowing variables declared in an outer scope in Python.
Initialize all externally visible members of a Python class in the constructor.
For interfaces that may be used outside a Python file, prefer docstrings over comments.
Comments in Python should be reserved for code within a function, or interfaces that are local to a file.
Use Google style docstrings for Python classes and functions, which can be parsed by Sphinx.
Attributes and variables in Python can be documented inline; attribute docstrings will be rendered under the docstring for the class.
Avoid using reflection in Python when functionality can be easily achieved without it.
When using try-except blocks in Python, limit the except to the smallest set of errors possible.
When using try-except blocks to handle multiple possible variable types in Python, keep the body of the try as small as possible, using the else block to implement the logic.
Files:
- `tests/integration/defs/test_e2e.py`
- `tests/integration/defs/accuracy/test_llm_api_pytorch.py`
🧠 Learnings (3)
📚 Learning: in tensorrt-llm testing, it's common to have both cli flow tests (test_cli_flow.py) and pytorch api ...
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
- `tests/integration/test_lists/qa/examples_test_list.txt`
- `docs/source/performance/perf-overview.md`
- `docs/source/release-notes.md`
- `tests/integration/defs/test_e2e.py`
- `tests/integration/defs/accuracy/test_llm_api_pytorch.py`
📚 Learning: in tensorrt_llm/executor/worker.py, the lora adapter cache optimization logic that checks `is_adapte...
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#5616
File: tensorrt_llm/executor/worker.py:375-384
Timestamp: 2025-07-17T09:01:27.402Z
Learning: In tensorrt_llm/executor/worker.py, the LoRA adapter cache optimization logic that checks `is_adapter_in_cpu_cache()` and conditionally passes None for weights/config has a known race condition issue that cannot be solved with simple error handling or verification checks. This is a known limitation that requires a more comprehensive solution.
Applied to files:
docs/source/release-notes.md
📚 Learning: in tensorrt-llm's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()...
Learnt from: yechank-nvidia
PR: NVIDIA/TensorRT-LLM#6254
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:1201-1204
Timestamp: 2025-07-22T09:22:14.726Z
Learning: In TensorRT-LLM's multimodal processing pipeline, shared tensor recovery using `from_shared_tensor()` is only needed during the context phase. Generation requests reuse the already-recovered tensor data and only need to call `strip_for_generation()` to remove unnecessary multimodal data while preserving the recovered tensors. This avoids redundant tensor recovery operations during generation.
Applied to files:
docs/source/release-notes.md
🪛 markdownlint-cli2 (0.17.2)
docs/source/performance/perf-overview.md
129-129: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
🔇 Additional comments (16)
tests/integration/defs/accuracy/references/mmlu.yaml (1)
69-71: LGTM! Good accuracy preservation with FP8 quantization.

The addition of FP8 quantization accuracy reference (86.45) for the Llama-4-Maverick model is consistent with the broader FP8 support integration. The difference of only 0.05 points relative to the baseline (86.40) demonstrates effective quantization implementation.
tests/integration/defs/accuracy/references/gsm8k.yaml (2)
19-21: New FP8 quantization reference added.

Added FP8 quantization accuracy reference (83.30) for Llama-4-Maverick model, supporting the expanded quantization algorithm testing in this release.
26-29: Incorrect FP8 accuracy improvement claim for GSM8k
- The FP8 quantization entry for Llama-4-Scout-17B-16E-Instruct records an accuracy of 89.61, a slight 0.09 drop from the unquantized 89.70, not a +9-point jump from 80.37.
- The new NVFP4 entry with 89.00 accuracy is correctly added.
Please update or remove the assertion about a “substantial 9+ point improvement.”
Likely an incorrect or invalid review comment.
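As a quick sanity check on the numbers above (values taken directly from the review text):

```python
baseline, fp8, claimed_old = 89.70, 89.61, 80.37

print(f"FP8 vs. unquantized baseline: {fp8 - baseline:+.2f}")   # -0.09, a slight drop
print(f"FP8 vs. the disputed 80.37:   {fp8 - claimed_old:+.2f}")  # +9.24, the claimed jump
```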
cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h (2)
1339-1340: Good practice: Extract parameter to local variable.

Extracting `params.chunked_attention_size` to a local variable improves code readability and potentially reduces repeated memory accesses.
1366-1366: Correct chunked attention memory optimization.

The updated `max_timesteps` calculation now properly constrains shared memory allocation by considering the chunked attention size. Taking the minimum of `timestep`, `cyclic_kv_cache_len`, and `chunked_attention_size` ensures optimal memory usage for chunked attention patterns while maintaining correctness.
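A minimal Python sketch of the sizing rule described above (names mirror the kernel parameters; `bytes_per_logit` is an illustrative per-element size, not a value taken from the kernel):

```python
def logits_smem_bytes(timestep: int,
                      cyclic_kv_cache_len: int,
                      chunked_attention_size: int,
                      bytes_per_logit: int = 4) -> int:
    """Upper-bound the logits shared-memory buffer for one decode step.

    Chunked attention never attends beyond `chunked_attention_size`
    tokens, so the buffer only needs to cover the smallest of the
    three limits.
    """
    max_timesteps = min(timestep, cyclic_kv_cache_len, chunked_attention_size)
    return max_timesteps * bytes_per_logit


# With an 8K attention chunk, a 22K-token context no longer inflates the buffer:
assert logits_smem_bytes(22000, 32768, 8192) == 8192 * 4
```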
docs/source/performance/perf-overview.md (4)

15-16: LGTM! Helpful addition of benchmarking blog reference.

This addition provides users with valuable additional resources for understanding benchmarking with `trtllm-bench`.
26-35: Excellent addition of hardware context.

This new hardware section provides essential information about the testing environment and properly sets expectations about performance variations on different hardware configurations.
148-149: Important performance guidance for Llama 4.

This note provides valuable transparency about version-specific performance issues and gives users clear guidance on reproducing benchmark results.
238-238: LGTM! Version update maintains documentation currency.

The update from v0.20 to v0.21 keeps the documentation aligned with the current release.
tests/integration/defs/test_e2e.py (1)
1926-1958: Fix example path and consider adding output validation.
Path Issue: Line 1941 uses `"examples/pytorch"` but should likely be `"examples/llm-api"` based on similar tests in this file (see lines 1602, 1644, etc.).

Missing Validation: Unlike other similar tests, this test doesn't include output validation or memory usage checks. Consider adding assertions to verify the test actually validates the chunked prefill functionality.
```diff
- example_root = Path(os.path.join(llm_root, "examples", "pytorch"))
+ example_root = Path(os.path.join(llm_root, "examples", "llm-api"))
```

Consider adding memory usage validation similar to other tests:
```python
with tempfile.NamedTemporaryFile(mode='w+t',
                                 suffix=f".{model_name}.log",
                                 dir="./",
                                 delete=True,
                                 delete_on_close=True) as running_log:
    llm_venv.run_cmd(cmd, stdout=running_log)
    # Add appropriate memory usage check based on model
```

⛔ Skipped due to learnings
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.

tests/integration/defs/accuracy/test_llm_api_pytorch.py (6)
464-485: LGTM: Well-structured chunked prefill test.

The test correctly combines FP8 quantization with chunked prefill functionality. The limited parallelism configuration (tp8ep8 only) appears intentional for focused testing of this specific combination.
558-578: LGTM: Appropriate FP8 test for Scout model.

The test is well-adapted for the Scout model with appropriate GPU requirements (4 GPUs vs 8 for Maverick) and suitable parallelism configurations.
580-600: LGTM: Appropriate long sequence chunked prefill test.

The test correctly uses a higher `max_seq_len=22000` for testing chunked prefill with long sequences, which is a key use case for this functionality.
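For context, a minimal sketch of a long-sequence chunked-prefill setup with the LLM API (the checkpoint path and token-budget values are illustrative assumptions, not the test's exact configuration):

```python
from tensorrt_llm import LLM

# Sketch only: argument names follow the LLM API as assumed here.
llm = LLM(
    model="nvidia/Llama-4-Scout-17B-16E-Instruct-FP8",  # illustrative checkpoint
    max_seq_len=22000,            # long-sequence case exercised by the test
    enable_chunked_prefill=True,  # split prefill into chunks...
    max_num_tokens=8192,          # ...of at most this many tokens each
)
```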
602-622: LGTM: Correct FP4 quantization test implementation.

The test properly handles FP4 quantization with appropriate hardware requirements (`skip_pre_blackwell`) and correct mixed quantization assertions (NVFP4 for weights, FP8 for KV cache).
442-462: LLM API: Confirm CUDA Graph Parameter Usage

It’s correct that the new `test_fp8_prequantized` uses the shorthand `use_cuda_graph=cuda_graph` instead of `cuda_graph_config=CudaGraphConfig() if cuda_graph else None`. Both forms are supported by the LLM constructor:
- Passing `use_cuda_graph=True/False` will auto-generate a default `CudaGraphConfig` under the hood.
- Explicitly passing `cuda_graph_config=…` gives fine-grained control over batch sizes and padding.

No change is required here; this is an intentional convenience overload, consistent with other integration tests (e.g. earlier FP4 tests). The two spellings are sketched below.
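A sketch of the two equivalent forms, assuming the `CudaGraphConfig` import path and model name shown here (both are illustrative):

```python
from tensorrt_llm import LLM
from tensorrt_llm.llmapi import CudaGraphConfig  # import path assumed

MODEL = "nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8"  # illustrative checkpoint

# Shorthand: a default CudaGraphConfig is generated under the hood.
llm_short = LLM(model=MODEL, use_cuda_graph=True)

# Explicit: equivalent today, but leaves room for fine-grained control.
llm_explicit = LLM(model=MODEL, cuda_graph_config=CudaGraphConfig())
```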
624-644: No GPU requirement discrepancy for FP4 chunked prefill test.

The `@pytest.mark.skip_less_mpi_world_size(8)` guard in `test_fp4_chunked_prefill` matches the other Blackwell-only tests (all use 8 as the minimum MPI world size). Keeping it at 8 ensures that these heavyweight configurations only run when the test environment provides at least eight GPUs, consistent with the rest of the Blackwell suite.
7569eb0 to cc256b4 (force-pushed)
/bot run --disable-fail-fast
Actionable comments posted: 1
🧹 Nitpick comments (2)
docs/source/performance/perf-overview.md (2)
11-11: Fix typo in “situation”.

Line 11 reads “situaiton”.

```diff
-…performance depending on your situaiton.
+…performance depending on your situation.
```
26-35: Consider converting the GPU bullet list into a table for easier scanning.

Readers typically compare specs side-by-side; a table format would make capacity/TDP differences clearer and match the style used elsewhere in the document.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h` (2 hunks)
- `docs/source/performance/perf-overview.md` (5 hunks)
- `docs/source/release-notes.md` (1 hunks)
- `tests/integration/defs/accuracy/test_llm_api_pytorch.py` (2 hunks)
- `tests/integration/defs/test_e2e.py` (1 hunks)
- `tests/integration/test_lists/qa/examples_test_list.txt` (2 hunks)
✅ Files skipped from review due to trivial changes (2)
- docs/source/release-notes.md
- tests/integration/test_lists/qa/examples_test_list.txt
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/integration/defs/test_e2e.py
- cpp/tensorrt_llm/kernels/decoderMaskedMultiheadAttention/decoderMaskedMultiheadAttentionTemplate.h
- tests/integration/defs/accuracy/test_llm_api_pytorch.py
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: in tensorrt-llm testing, it's common to have both cli flow tests (test_cli_flow.py) and pytorch api ...
Learnt from: moraxu
PR: NVIDIA/TensorRT-LLM#6303
File: tests/integration/test_lists/qa/examples_test_list.txt:494-494
Timestamp: 2025-07-28T17:06:08.621Z
Learning: In TensorRT-LLM testing, it's common to have both CLI flow tests (test_cli_flow.py) and PyTorch API tests (test_llm_api_pytorch.py) for the same model. These serve different purposes: CLI flow tests validate the traditional command-line workflow, while PyTorch API tests validate the newer LLM API backend. Both are legitimate and should coexist.
Applied to files:
docs/source/performance/perf-overview.md
🪛 markdownlint-cli2 (0.17.2)
docs/source/performance/perf-overview.md
129-129: Tables should be surrounded by blank lines
(MD058, blanks-around-tables)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Pre-commit Check
PR_Github #13781 [ run ] triggered by Bot

PR_Github #13777 [ run ] completed with state

PR_Github #13781 [ run ] completed with state
Signed-off-by: junq <[email protected]>
Signed-off-by: junq <[email protected]>
Signed-off-by: Ivy Zhang <[email protected]>
Signed-off-by: junq <[email protected]>
…th chunked attention (NVIDIA#6401)
Signed-off-by: Perkz Zheng <[email protected]>
Co-authored-by: Sharan Chetlur <[email protected]>
Signed-off-by: zpatel <[email protected]>
cc256b4 to 3b63938 (force-pushed)
/bot skip --comment "already succeeded in previous pipeline"
crazydemo left a comment:
LGTM
PR_Github #13900 [ skip ] triggered by Bot

PR_Github #13900 [ skip ] completed with state
Summary by CodeRabbit
Documentation
Tests
Bug Fixes
Description
Test Coverage
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message.

See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.
- `--reuse-test (optional)pipeline-id` (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- `--disable-reuse-test` (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- `--disable-fail-fast` (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-PyTorch-1, xxx"` (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--test-backend "pytorch, cpp"` (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- `--only-multi-gpu-test` (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx"` (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- `--detailed-log` (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- `--debug` (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the `stage-list` parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.
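For example, combining the documented flags above, a pre-merge run restricted to one GPU type with fail-fast disabled would be posted as a PR comment: `/bot run --disable-fail-fast --gpu-type "H100_PCIe"`.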
For guidance on mapping tests to stage names, see `docs/source/reference/ci-overview.md` and the `scripts/test_to_stage_mapping.py` helper.

kill

`kill`

Kill all running builds associated with pull request.
skip
`skip --comment COMMENT`

Skip testing for latest commit on pull request. `--comment "Reason for skipping build/test"` is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

`reuse-pipeline`

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.