Merged
Changes from 1 commit
113 commits
ac7e888
docs: fix helm chart urls (#2033)
nealvaidya Jul 21, 2025
76fd471
refactor: support for turning prefix cache off (#2034)
alec-flowers Jul 22, 2025
4449f3d
fix: never sleep on the eos (#2039)
alec-flowers Jul 22, 2025
20c5daf
fix: install torch distribution matching container cuda version (#2027)
ptarasiewiczNV Jul 22, 2025
e5a8628
feat: add a hierarchical Prometheus MetricsRegistry trait for Distrib…
keivenchang Jul 22, 2025
7882693
feat: use atomic transactions when creating etcd kv (#2044)
PeaBrane Jul 22, 2025
d65ce1b
chore(sglang): Move examples/sglang to components/backends/sglang (#2…
grahamking Jul 22, 2025
73505c7
fix: correct Nixl plugin paths in Dockerfile. (#2048)
karya0 Jul 22, 2025
c49a13e
docs: Cleanup index.rst (#2007)
atchernych Jul 22, 2025
9f2356c
chore: Remove unused portion of kv bindings test (#2052)
rmccorm4 Jul 22, 2025
f3e3d94
refactor: vLLM to new Python UX (#1983)
alec-flowers Jul 22, 2025
9cfaa7b
chore: Bump genai-perf to v0.0.15 (#2051)
ptarasiewiczNV Jul 22, 2025
22e6c96
chore: Change vllm K8s from dynamo-run to python -m dynamo.frontend (…
grahamking Jul 22, 2025
b127d95
feat: health check changes based on endpoint served (#1996)
nnshah1 Jul 23, 2025
1958b3a
build: Fixes for vLLM Blackwell Builds (#2020)
zaristei Jul 23, 2025
2c642fd
fix: vllm deployment examples (#2062)
biswapanda Jul 23, 2025
6a69ef4
fix: cryptic error message for empty messages list in /chat/completio…
heisenberglit Jul 23, 2025
c6f12f6
ci: Add RUN_SGLANG to CI variables (#1928)
pvijayakrish Jul 23, 2025
e0a5194
feat: Connect Library (#1478)
whoisj Jul 23, 2025
ffb5409
fix: endpoint changes should be prioritized over new requests in kv s…
PeaBrane Jul 23, 2025
eebc741
docs: Adjust the path to examples (#2056)
atchernych Jul 23, 2025
f9b1757
fix: Bring back ignore_eos/min_tokens support in trtllm component (#2…
rmccorm4 Jul 23, 2025
66b7d2c
fix: updates versions and adds ahashmap to BPE (#2072)
paulhendricks Jul 23, 2025
9bdceac
fix: github ci triggers (#2075)
biswapanda Jul 23, 2025
7a0013b
chore: update attributions for 0.3.2 release (#1837) (#2032)
nv-anants Jul 23, 2025
13560ab
feat: sglang examples launch and deploy (#2068)
biswapanda Jul 23, 2025
f3d784f
feat: query instance_id based on routing strategy (#1787)
biswapanda Jul 23, 2025
3c500ae
docs: Update docs for new UX (#2070)
grahamking Jul 23, 2025
19a77ae
chore(dynamo-run): Remove out=sglang|vllm|trtllm (#1920)
grahamking Jul 24, 2025
ee3a8e4
feat: add initial Grove support (#2012)
julienmancuso Jul 24, 2025
cde8db3
docs: Replace a sym link with and actual markdown link (#2074)
atchernych Jul 24, 2025
13d3cc1
feat: add nixl benchmark deployment instructions (#2060)
biswapanda Jul 24, 2025
2fc65ad
feat: dump radix tree as router events (#2057)
PeaBrane Jul 24, 2025
ba3ac23
test: add router e2e test with mockers to per-merge ci (#2073)
PeaBrane Jul 24, 2025
fe718fd
feat: deploy SLA profiler to k8s (#2030)
hhzhang16 Jul 24, 2025
a2874fd
feat: add possibility to use grove in dynamo graph helm chart (#1954)
julienmancuso Jul 24, 2025
f03f8be
docs: hello_world python binding example (#2083)
nealvaidya Jul 24, 2025
2bbbd44
chore: Remove unused trtllm requirements.txt (#2098)
rmccorm4 Jul 24, 2025
f0e382a
fix: Merge env vars correctly (#2096)
julienmancuso Jul 24, 2025
3094278
docs: Create a guide for writing dynamo deployments CR (#1999)
atchernych Jul 24, 2025
ff92053
docs: add NAMESPACE (#2105)
atchernych Jul 25, 2025
a2cb1c3
feat: update python packaging for new dynamo UX (#2054)
grahamking Jul 25, 2025
24cb926
docs: Clean index.rst (#2104)
atchernych Jul 25, 2025
412a12a
fix: rm enforce eager from vllm deploy - prefer perf over pod launch …
biswapanda Jul 25, 2025
2cd96ec
build: Add TensorRT-LLM to optional dependency and corresponding inst…
tanmayv25 Jul 25, 2025
384e449
fix: agg router test (#2123)
alec-flowers Jul 25, 2025
4dc529a
chore: remove vLLM v0 multimodal example (#2099)
GuanLuo Jul 25, 2025
4498a77
fix: move docker-compose.yml to deploy/, and update frontend port (#2…
keivenchang Jul 25, 2025
222245e
refactor: Move engine and publisher from dynamo.llm.tensorrt_llm to d…
tanmayv25 Jul 26, 2025
b8461b6
chore: updated health checks to use new probes (#2124)
nnshah1 Jul 27, 2025
e2a514b
fix: remove prints (#2142)
alec-flowers Jul 28, 2025
615580d
feat: Base metrics: add generic ingress handler metrics (#2090)
keivenchang Jul 28, 2025
e82bc4e
chore: update vLLM to 0.10.0 (#2114)
ptarasiewiczNV Jul 28, 2025
803bfa8
feat: proper local hashes for mockers + router watches endpoints (#2132)
PeaBrane Jul 28, 2025
0cb01b3
feat: updates to structured logging (#2061)
nnshah1 Jul 28, 2025
ca0035f
fix: copy whole workspace for pre-merge vllm tests (#2146)
nv-anants Jul 28, 2025
d23d48b
feat: Deploy SLA planner to Kubernetes (#2135)
hhzhang16 Jul 28, 2025
708d7c3
docs: add Llama4 eagle3 one model example and configs (#2087)
jhaotingc Jul 28, 2025
096d117
docs: update router docs (#2148)
PeaBrane Jul 28, 2025
1e6709d
feat: allow to override any podSpec property (#2116)
julienmancuso Jul 28, 2025
f809659
docs: hello world deploy example (#2102)
atchernych Jul 28, 2025
cfc6178
feat: add sglang disagg deployment examples (#2137)
biswapanda Jul 28, 2025
bbe8dbb
fix: remove containers from required property of extraPodSpec (#2153)
julienmancuso Jul 28, 2025
fdcf611
chore: Add Request Migration docs and minor enhancements (#2038)
kthui Jul 28, 2025
095ea3e
chore: updating and removing tests (#2130)
nnshah1 Jul 29, 2025
4747790
feat: deprecate sdk as dependency (#2149)
biswapanda Jul 29, 2025
3175b10
docs: Update to README.md (#2141)
athreesh Jul 29, 2025
7fbd43a
docs: Update dynamo_glossary.md (#2082)
athreesh Jul 29, 2025
358e908
docs: Adding document for running Dynamo on Azure Kubernetes Services…
saurabh-nvidia Jul 29, 2025
195c4c4
docs: Quickstart with new UX (#2005)
nealvaidya Jul 29, 2025
291df28
docs: add disagg example + explanation (#2086)
nealvaidya Jul 29, 2025
ca5b681
docs: add multinode example (#2155)
nealvaidya Jul 29, 2025
a8cb655
docs: update readme install instructions (#2170)
nv-anants Jul 29, 2025
5be23eb
Readmes + eks additions (#2157)
athreesh Jul 29, 2025
2befa38
feat: claim support for AL2023 x86_64 (#2150)
saturley-hall Jul 29, 2025
e542f00
chore: cleanup examples codeowners (#2171)
nealvaidya Jul 29, 2025
12a7b83
docs: Examples README/restructuring, framework READMEs, EKS examples …
athreesh Jul 29, 2025
8b0a035
docs: Update the operator docs (#2172)
atchernych Jul 29, 2025
8248a11
feat: gaie helm chart based example (#2168)
biswapanda Jul 29, 2025
157714a
chore: add instructions to modify SLA to profile_sla doc; update comp…
tedzhouhk Jul 29, 2025
30d4612
fix: install rdma libs in runtime image. (#2163)
karya0 Jul 29, 2025
da0c572
chore: update sgl version and fix h100 wideep example (#2169)
ishandhanani Jul 30, 2025
4c90b1b
chore: Version bump to 0.4.0 (#2179)
dmitry-tokarev-nv Jul 30, 2025
ee09de0
fix: link to point to bindings/python/README.md (#2186)
keivenchang Jul 30, 2025
dabfea3
chore: address QA broken links comments (#2184)
athreesh Jul 30, 2025
b69c507
fix: add better port logic (#2175)
alec-flowers Jul 30, 2025
7fc94da
fix(container): update sgl dockerfile install commands (#2194)
ishandhanani Jul 30, 2025
57482dc
docs: Bug 5424387 (#2196)
atchernych Jul 30, 2025
f3868b1
fix: support config without resource limit for profile sla script (#2…
tedzhouhk Jul 31, 2025
f8b0a5a
feat: Add trtllm deploy examples for k8s (#2133)
tanmayv25 Jul 31, 2025
62c7898
fix: add curl and jq for health checks (#2203)
biswapanda Jul 31, 2025
c546b63
fix: update SGLang version in instructions and Dockerfile to revert t…
ishandhanani Jul 31, 2025
97390ac
fix(k8s): sglang disagg now uses decode worker (#2206)
ishandhanani Jul 31, 2025
f10aab3
fix: Migrating trtllm examples from `1.0.0rc0` to `1.0.4rc4` (#2217)
KrishnanPrash Jul 31, 2025
3bf22bb
feat: reorganize sglang and add expert distribution endpoints (#2181)
ishandhanani Jul 31, 2025
bae25dc
feat: skip downloading model weights if using mocker (only tokenizer)…
PeaBrane Jul 31, 2025
cbc0e20
fix: fix endpoint run to return error DIS-325 (#2156)
keivenchang Jul 31, 2025
625578c
chore: update nixl version to 0.4.1 (#2221)
nv-anants Jul 31, 2025
7e3b3fa
fix: Add default configs in LLMAPI. Fixes OOM issues (#2198)
tanmayv25 Jul 31, 2025
f10e44c
fix: Integration tests fixes (#2161)
keivenchang Jul 31, 2025
f14f59c
chore: Remove multimodal readme. (#2212)
krishung5 Jul 31, 2025
dbd33df
fix: handle groveTerminationDelay and auto-detect grove installation …
julienmancuso Aug 1, 2025
66231cf
feat: reduce / revert routing overheads, do not consider output token…
PeaBrane Aug 1, 2025
8c75ed7
fix: frontend metrics to be renamed from nv_llm_http_service_* => dyn…
keivenchang Aug 1, 2025
1ad6abe
feat: add sgl deploy readme (#2238)
ishandhanani Aug 1, 2025
efd863d
fix: dynamo_component to be added in metric names (#2180)
keivenchang Aug 1, 2025
faafa5f
docs: add a docs/guides/metrics.md (#2160)
keivenchang Aug 1, 2025
cb1492a
rebase main
ziqifan617 Aug 1, 2025
ae51b3f
test: Request Migration Docs and E2E vLLM Tests (#2177)
kthui Aug 1, 2025
959f810
feat: sglang + gb200 (#2223)
ishandhanani Aug 1, 2025
fa492bb
docs: Dyn 591 (#2247)
atchernych Aug 2, 2025
357f34b
cleanup (#2250)
ziqifan617 Aug 2, 2025
2954005
Merge branch 'main' into ziqi/connector-250801
ziqifan617 Aug 2, 2025
test: add router e2e test with mockers to per-merge ci (#2073)
Signed-off-by: Yan Ru Pei <[email protected]>
PeaBrane authored Jul 24, 2025
commit ba3ac23560cb4a986b0e26c87162b68a778da286
2 changes: 1 addition & 1 deletion lib/llm/src/kv_router.rs
@@ -191,7 +191,7 @@ impl KvRouter {
}
};
if let Err(e) = kv_events_tx.send(event).await {
tracing::debug!(
tracing::warn!(
"failed to send kv event to indexer; shutting down: {:?}",
e
);
7 changes: 7 additions & 0 deletions lib/llm/src/kv_router/scheduler.rs
@@ -177,6 +177,13 @@ impl KvScheduler {
request.respond(response);
continue 'outer;
}
Err(KvSchedulerError::NoEndpoints) => {
tracing::trace!("no endpoints available; waiting for endpoints update");
endpoints_rx.changed().await.ok();
endpoints = endpoints_rx.borrow_and_update().clone();
pending_endpoint_update = Some(endpoints.worker_ids());
continue;
}
// TODO: this is not actually hooked up
Err(KvSchedulerError::AllWorkersBusy) => {
tracing::trace!("all workers busy; waiting for more capacity");
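
The new NoEndpoints arm parks the scheduler on the endpoints watch channel instead of failing: it waits for the endpoint set to change, reloads the snapshot with borrow_and_update, and retries scheduling. A minimal standalone sketch of that wait pattern, using a hypothetical EndpointSet type rather than the actual dynamo structs:

    use tokio::sync::watch;

    // Hypothetical stand-in for the scheduler's endpoint snapshot.
    #[derive(Clone, Debug, Default)]
    struct EndpointSet {
        worker_ids: Vec<u64>,
    }

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = watch::channel(EndpointSet::default());

        // Simulated discovery: a worker set shows up after a short delay.
        tokio::spawn(async move {
            tokio::time::sleep(std::time::Duration::from_millis(50)).await;
            let _ = tx.send(EndpointSet { worker_ids: vec![1, 2] });
        });

        // Scheduler side: with nothing to route to, wait for the watch channel
        // to change instead of spinning, then read and mark the latest value
        // as seen with borrow_and_update().
        let mut endpoints = rx.borrow_and_update().clone();
        while endpoints.worker_ids.is_empty() {
            rx.changed().await.ok();
            endpoints = rx.borrow_and_update().clone();
        }
        println!("endpoints available: {:?}", endpoints.worker_ids);
    }
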
206 changes: 121 additions & 85 deletions lib/llm/src/mocker/scheduler.rs
@@ -51,7 +51,7 @@ use std::collections::HashMap;
use std::collections::VecDeque;
use std::sync::Arc;
use tokio::sync::{mpsc, Mutex};
use tokio::time::{interval, Duration};
use tokio::time::Duration;
use tokio_util::sync::CancellationToken;
use uuid::Uuid;

@@ -81,6 +81,10 @@ impl SchedulerState {
}
}

fn is_empty(&self) -> bool {
self.requests.is_empty()
}

/// Create a new UUID for a DirectRequest, add it to requests, and push the UUID to waiting.
fn receive(&mut self, request: DirectRequest) -> Uuid {
// Use the provided UUID if available, otherwise generate a new one
@@ -295,11 +299,25 @@ impl Scheduler {

// Spawn main background task with cancellation token
tokio::spawn(async move {
let mut schedule_interval = interval(Duration::from_secs_f64(1e-3));
let mut simulate_interval = interval(Duration::from_secs_f64(1e-4));
let mut should_schedule = true;

loop {
{
let state_guard = state_clone.lock().await;

// Enqueue new request, blocks until at least one is received, so no redundant work is done
// TODO: clean this up? double lock acquisition is ugly, but needed to not hold the lock forever
if state_guard.is_empty() {
drop(state_guard);
let Some(request) = request_rx.recv().await else {
tracing::warn!("request sender is dropped");
break;
};
let mut state_guard = state_clone.lock().await;
state_guard.receive(request);
}
}

tokio::select! {
biased;

@@ -310,7 +328,7 @@
}

// Try Scheduling Requests - runs on normal interval or after simulation
_ = schedule_interval.tick() => {
_ = tokio::task::yield_now() => {
// Skip if we just ran scheduling after simulation to prevent consecutive runs
if !should_schedule {
continue;
@@ -371,100 +389,117 @@ impl Scheduler {
_ = cancel_token_clone.cancelled() => {
break;
}
}

// Simulate running requests (prefill + decode)
_ = simulate_interval.tick() => {
let mut state_guard = state_clone.lock().await;
let mut kv_manager_guard = kv_manager_clone.lock().await;

// Base time needed for decoding using active percentage and quadratic formula
let active_perc = kv_manager_guard.get_active_perc();
let decoding_time = -5.47 * active_perc.powi(2) + 43.88 * active_perc + 19.44;
let mut total_time = Duration::from_secs_f64(decoding_time / 1000.0);

// Process prefilling
while let Some((prefill_compute, maybe_creation_signal, is_full_prefill)) = state_guard.try_prefill() {
// NOTE: Prefill cost/time is always incremented for new blocks, even if they
// could be cached by other requests in the same batch. This matches vLLM behavior.
total_time += Duration::from_secs_f64(prefill_compute / 1000.0);

if let Some(creation_signal) = maybe_creation_signal {
if !process_signals(&mut kv_manager_guard, std::slice::from_ref(&creation_signal)) {
panic!("Block allocation for prefilling cannot fail.");
}

// Drain KV events and forward to relay after prefill signal processing
if let (Some(ref relay_tx), Some(ref mut rx)) = (&kv_events_tx, &mut block_resp_rx) {
while let Ok(event) = rx.try_recv() {
let _ = relay_tx.send(block_response_to_kv_event(event));
}
}
};

// Impossible to schedule more prefills if we encounter one incomplete (chunked) prefill
if !is_full_prefill { break; }
// Simulates prefill + decode
let mut state_guard = state_clone.lock().await;
let mut kv_manager_guard = kv_manager_clone.lock().await;

// Base time needed for decoding using active percentage and quadratic formula
let active_perc = kv_manager_guard.get_active_perc();
let decoding_time = -5.47 * active_perc.powi(2) + 43.88 * active_perc + 19.44;
let mut total_time = Duration::from_secs_f64(decoding_time / 1000.0);

// Process prefilling
while let Some((prefill_compute, maybe_creation_signal, is_full_prefill)) =
state_guard.try_prefill()
{
// NOTE: Prefill cost/time is always incremented for new blocks, even if they
// could be cached by other requests in the same batch. This matches vLLM behavior.
total_time += Duration::from_secs_f64(prefill_compute / 1000.0);

if let Some(creation_signal) = maybe_creation_signal {
if !process_signals(
&mut kv_manager_guard,
std::slice::from_ref(&creation_signal),
) {
panic!("Block allocation for prefilling cannot fail.");
}

state_guard.reset_active_tokens();

// Process decoding
let uuids: Vec<Uuid> = state_guard.decode.keys().cloned().collect();
if !uuids.is_empty() {should_schedule = true};
for uuid in uuids {
let Some(sequence) = state_guard.run(uuid) else {
continue;
};
let signals = sequence.generate();

// Process all signals with the KvManager
// Handling of preemption on failure
if !process_signals(&mut kv_manager_guard, &signals) {
sequence.pop(); // revert the failed generation op
for signal in state_guard.preempt() {
kv_manager_guard.process(&signal);
}
continue;
// Drain KV events and forward to relay after prefill signal processing
if let (Some(ref relay_tx), Some(ref mut rx)) =
(&kv_events_tx, &mut block_resp_rx)
{
while let Ok(event) = rx.try_recv() {
let _ = relay_tx.send(block_response_to_kv_event(event));
}
}
};

// Drain KV events and forward to relay after decode signal processing
if let (Some(ref relay_tx), Some(ref mut rx)) = (&kv_events_tx, &mut block_resp_rx) {
while let Ok(event) = rx.try_recv() {
let _ = relay_tx.send(block_response_to_kv_event(event));
}
}
// Impossible to schedule more prefills if we encounter one incomplete (chunked) prefill
if !is_full_prefill {
break;
}
}

// Check completion and send notification
let is_complete = sequence.generated_tokens() >= sequence.max_output_tokens();
let should_output = sequence.generated_tokens() > sequence.already_generated_tokens();
state_guard.reset_active_tokens();

// Process decoding
let uuids: Vec<Uuid> = state_guard.decode.keys().cloned().collect();
if !uuids.is_empty() {
should_schedule = true
};
for uuid in uuids {
let Some(sequence) = state_guard.run(uuid) else {
continue;
};
let signals = sequence.generate();

// Process all signals with the KvManager
// Handling of preemption on failure
if !process_signals(&mut kv_manager_guard, &signals) {
sequence.pop(); // revert the failed generation op
for signal in state_guard.preempt() {
kv_manager_guard.process(&signal);
}
continue;
}

let mut send_failed = false;
if should_output {
send_failed = output_tx_clone.as_ref().is_some_and(|tx| {
tx.send(OutputSignal { uuid, completed: is_complete }).is_err()
});
}
// Drain KV events and forward to relay after decode signal processing
if let (Some(ref relay_tx), Some(ref mut rx)) =
(&kv_events_tx, &mut block_resp_rx)
{
while let Ok(event) = rx.try_recv() {
let _ = relay_tx.send(block_response_to_kv_event(event));
}
}

if send_failed {
for signal in &sequence.free_signal() {
kv_manager_guard.process(signal);
}
}
// Check completion and send notification
let is_complete = sequence.generated_tokens() >= sequence.max_output_tokens();
let should_output =
sequence.generated_tokens() > sequence.already_generated_tokens();

let mut send_failed = false;
if should_output {
send_failed = output_tx_clone.as_ref().is_some_and(|tx| {
tx.send(OutputSignal {
uuid,
completed: is_complete,
})
.is_err()
});
}

if send_failed || is_complete {
state_guard.complete(&uuid);
continue;
}
if send_failed {
for signal in &sequence.free_signal() {
kv_manager_guard.process(signal);
}
}

// Sleep once for the adjusted duration
drop(kv_manager_guard);
drop(state_guard);
let adjusted_time = Duration::from_secs_f64(total_time.as_secs_f64() / args.speedup_ratio);
if adjusted_time.as_millis() > 0 {
tokio::time::sleep(adjusted_time).await;
}
if send_failed || is_complete {
state_guard.complete(&uuid);
continue;
}
}

// Sleep once for the adjusted duration
drop(kv_manager_guard);
drop(state_guard);
let adjusted_time =
Duration::from_secs_f64(total_time.as_secs_f64() / args.speedup_ratio);
if adjusted_time.as_millis() > 0 {
tokio::time::sleep(adjusted_time).await;
}
}
});

@@ -632,6 +667,7 @@ mod tests {
use super::*;
use rstest::rstest;
use std::time::Duration;
use tokio::time::interval;

#[rstest]
#[case::case_1(false, false, false)]
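
The rewritten mocker loop drops the fixed 1 ms scheduling and 0.1 ms simulation tickers in favor of an event-driven shape: block on the request channel while the state is empty, otherwise yield to the runtime, simulate the batch, and sleep once for the simulated time divided by the speedup ratio. A stripped-down sketch of that idle-block / busy-yield structure, with hypothetical request and cost types standing in for the mocker's own:

    use std::time::Duration;
    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::unbounded_channel::<u32>();

        // Hypothetical request source: each value is a simulated batch cost in ms.
        tokio::spawn(async move {
            for cost_ms in [30u32, 10, 20] {
                let _ = tx.send(cost_ms);
            }
            // Sender drops here; the loop below exits when recv() returns None.
        });

        let speedup_ratio = 10.0_f64; // mirrors args.speedup_ratio
        let mut queue: Vec<u32> = Vec::new();

        loop {
            // Idle: block until at least one request arrives (no busy polling).
            if queue.is_empty() {
                let Some(req) = rx.recv().await else {
                    break;
                };
                queue.push(req);
            }

            // Busy: yield to other tasks instead of waiting on a fixed interval tick.
            tokio::task::yield_now().await;

            // Pick up anything else already queued.
            while let Ok(req) = rx.try_recv() {
                queue.push(req);
            }

            // "Simulate" the batch, then sleep once for the scaled batch time.
            let total_ms: u32 = queue.drain(..).sum();
            let adjusted = Duration::from_secs_f64(total_ms as f64 / 1000.0 / speedup_ratio);
            if adjusted.as_millis() > 0 {
                tokio::time::sleep(adjusted).await;
            }
        }
    }
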
72 changes: 72 additions & 0 deletions tests/conftest.py
@@ -33,6 +33,66 @@
datefmt=DATE_FORMAT, # ISO 8601 UTC format
)

# List of models used in tests
TEST_MODELS = [
"Qwen/Qwen3-0.6B",
"deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"llava-hf/llava-1.5-7b-hf",
]


def download_models(model_list=None):
"""Download models - can be called directly or via fixture

Args:
model_list: List of model IDs to download. If None, downloads TEST_MODELS.
"""
if model_list is None:
model_list = TEST_MODELS

# Check for HF_TOKEN in environment
hf_token = os.environ.get("HF_TOKEN")
if hf_token:
logging.info("HF_TOKEN found in environment")
else:
logging.warning(
"HF_TOKEN not found in environment. "
"Some models may fail to download or you may encounter rate limits. "
"Get a token from https://huggingface.co/settings/tokens"
)

try:
from huggingface_hub import snapshot_download

for model_id in model_list:
logging.info(f"Pre-downloading model: {model_id}")

try:
# Download the full model snapshot (includes all files)
# HuggingFace will handle caching automatically
snapshot_download(
repo_id=model_id,
token=hf_token,
)
logging.info(f"Successfully pre-downloaded: {model_id}")

except Exception as e:
logging.error(f"Failed to pre-download {model_id}: {e}")
# Don't fail the fixture - let individual tests handle missing models

except ImportError:
logging.warning(
"huggingface_hub not installed. "
"Models will be downloaded during test execution."
)


@pytest.fixture(scope="session")
def predownload_models():
"""Fixture wrapper around download_models for all TEST_MODELS"""
download_models()
yield


@pytest.fixture(autouse=True)
def logger(request):
@@ -64,6 +124,18 @@ def pytest_collection_modifyitems(config, items):
if "tensorrtllm" in item.keywords:
item.add_marker(skip_tensorrtllm)

# Auto-inject predownload_models fixture for serve tests only (not router tests)
# Skip items that don't have fixturenames (like MypyFileItem)
if hasattr(item, "fixturenames"):
# Only apply to tests in the serve directory
if (
("serve" in str(item.path))
and ("predownload_models" not in item.fixturenames)
and (not item.get_closest_marker("skip_model_download"))
):
item.fixturenames = list(item.fixturenames)
item.fixturenames.append("predownload_models")


class EtcdServer(ManagedProcess):
def __init__(self, request, port=2379, timeout=300):
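
The collection hook above injects predownload_models into every test under a serve path unless the test already requests the fixture or carries a skip_model_download marker. A hypothetical test module (not part of this diff) showing both sides of that contract:

    # tests/serve/test_example.py -- illustrative only; everything except the
    # predownload_models fixture and skip_model_download marker is invented here.
    import pytest


    @pytest.mark.skip_model_download
    def test_frontend_health():
        """Opted out: the hook leaves this test without the predownload_models fixture."""
        assert True


    def test_generate_with_cached_model(predownload_models):
        """Requesting the fixture explicitly also works; the hook then skips injection."""
        # Model weights for TEST_MODELS are already cached by the session fixture here.
        assert True
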
2 changes: 2 additions & 0 deletions tests/router/__init__.py
@@ -0,0 +1,2 @@
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0