[TRTLLM-6452][feat]: Two-model engine KV cache reuse support #6133
Conversation
Walkthrough

The changes update the logic for context chunk detection in the LLM request class, remove restrictions and workarounds for KV cache block reuse with speculative decoding in the PyExecutor and its config mangling, and add new and expanded tests to validate KV cache reuse with speculative decoding, including a new test file and additional test cases.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Tester
    participant LLM
    participant KVCacheManager
    participant DraftModel
    participant TargetModel
    Tester->>LLM: Initialize with KV cache block reuse enabled
    LLM->>KVCacheManager: Configure block reuse
    Tester->>LLM: Generate text with speculative decoding (first run, no KV cache)
    LLM->>DraftModel: Generate draft tokens
    LLM->>TargetModel: Generate target tokens
    LLM->>KVCacheManager: Store results in KV cache
    Tester->>LLM: Generate text with speculative decoding (second run, with KV cache)
    LLM->>KVCacheManager: Reuse cached blocks
    LLM->>DraftModel: Generate draft tokens
    LLM->>TargetModel: Generate target tokens
    Tester->>Tester: Compare outputs for consistency
```
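The new test essentially follows this sequence. Below is a minimal sketch of such a check, assuming the public `tensorrt_llm` LLM API names (`LLM`, `SamplingParams`, `KvCacheConfig`, `EagleDecodingConfig`) and placeholder model paths; the actual test lives in `tests/unittest/_torch/speculative/test_kv_cache_reuse.py` and may differ in its details.

```python
# Minimal sketch of the reuse check described above -- NOT the actual test.
# Model paths are placeholders and the config/field names are assumed to match
# the public tensorrt_llm LLM API; the real test is
# tests/unittest/_torch/speculative/test_kv_cache_reuse.py::test_kv_cache_reuse.
from tensorrt_llm import LLM, SamplingParams
from tensorrt_llm.llmapi import EagleDecodingConfig, KvCacheConfig


def test_kv_cache_reuse_sketch():
    kv_cache_config = KvCacheConfig(enable_block_reuse=True)  # reuse enabled
    spec_config = EagleDecodingConfig(
        max_draft_len=3,
        speculative_model_dir="/path/to/eagle3-draft-model",  # placeholder
        eagle3_one_model=False,  # exercise the two-model engine path
    )
    llm = LLM(
        model="/path/to/target-model",  # placeholder
        kv_cache_config=kv_cache_config,
        speculative_config=spec_config,
    )
    prompts = ["The capital of France is"]
    sampling = SamplingParams(max_tokens=32, temperature=0)

    # First run populates the KV cache; the second run should reuse the cached
    # blocks for the shared prefix and still produce the same greedy output.
    first = llm.generate(prompts, sampling)[0].outputs[0].text
    second = llm.generate(prompts, sampling)[0].outputs[0].text
    assert first == second
```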
/bot run

PR_Github #12174 [ run ] triggered by Bot

Force-pushed fa72ae2 to 5e17c7d

/bot run

PR_Github #12179 [ run ] triggered by Bot

PR_Github #12174 [ run ] completed with state

PR_Github #12179 [ run ] completed with state

Force-pushed 5e17c7d to 034bebe

Force-pushed 034bebe to 135f7b2

/bot run

PR_Github #12298 [ run ] triggered by Bot

PR_Github #12298 [ run ] completed with state
Ok, I think I've convinced myself that this is safe. The scenario I proposed in the previous comment is not possible because … The change in behavior is like this: …

The amount of mutation we have is still very confusing, though. Can you add some comments to …?
Force-pushed 135f7b2 to 8dfc142

/bot run

PR_Github #12357 [ run ] triggered by Bot

/bot run

PR_Github #12359 [ run ] triggered by Bot

PR_Github #12357 [ run ] completed with state

PR_Github #12359 [ run ] completed with state
Signed-off-by: ziyixiong-nv <[email protected]>

Force-pushed 8dfc142 to faed2b1
/bot reuse-pipeline

PR_Github #12361 [ reuse-pipeline ] triggered by Bot

PR_Github #12361 [ reuse-pipeline ] completed with state
Description
As mentioned in #5448, there is an issue when enabling KV cache reuse for the two-model engine.
If there are cached blocks, the check at https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/_torch/pyexecutor/resource_manager.py#L349 returns false because `context_current_position > 0`. As a result, when we try to allocate KV cache pages for the draft model, no pages are allocated.
To fix this issue, the check in `isFirstContextChunk()` can be replaced with `mContextCurrentPosition == mPrepopulatedPromptLen`, where `mPrepopulatedPromptLen` is the number of tokens already in the KV cache.
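For illustration, here is a minimal, self-contained sketch of the changed condition. `_RequestSketch` and its snake_case attribute names are stand-ins for exposition only, not the actual request class in `tensorrt_llm/_torch/pyexecutor/llm_request.py`.

```python
class _RequestSketch:
    """Illustrative stand-in for the LLM request class (not the real one)."""

    def __init__(self, context_current_position: int, prepopulated_prompt_len: int):
        self.context_current_position = context_current_position
        self.prepopulated_prompt_len = prepopulated_prompt_len

    @property
    def is_first_context_chunk(self) -> bool:
        # Old check: `context_current_position == 0`. With KV cache reuse the
        # position starts at the number of prepopulated tokens, so the old
        # check returned False and no KV cache pages were allocated for the
        # draft model.
        # New check: the request is still on its first context chunk as long
        # as the position has not advanced past the tokens recovered from the
        # cache.
        return self.context_current_position == self.prepopulated_prompt_len


# 128 tokens reused from the cache: still counts as the first context chunk.
assert _RequestSketch(128, 128).is_first_context_chunk
# No reuse: degenerates to the old `position == 0` behavior.
assert _RequestSketch(0, 0).is_first_context_chunk
# Later chunks are not "first" under either check.
assert not _RequestSketch(160, 128).is_first_context_chunk
```

When block reuse is disabled, the prepopulated prompt length is 0, so the new condition reduces to the old `== 0` check and non-reuse paths behave as before.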
Test Coverage

tests/unittest/_torch/speculative/test_eagle3.py::test_llama_eagle3
tests/unittest/_torch/speculative/test_kv_cache_reuse.py::test_kv_cache_reuse
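Both tests can be run locally with pytest, e.g. `pytest tests/unittest/_torch/speculative/test_kv_cache_reuse.py::test_kv_cache_reuse` (assuming a checkout with the test prerequisites installed).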
GitHub Bot Help
`/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...`

Provide a user-friendly way for developers to interact with a Jenkins server.
Run `/bot [-h|--help]` to print this help message. See details below for each supported subcommand.
Details
`run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]`

Launch build/test pipelines. All previously running jobs will be killed.
- `--disable-fail-fast` (OPTIONAL): Disable fail fast on build/tests/infra failures.
- `--skip-test` (OPTIONAL): Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- `--stage-list "A10-1, xxx"` (OPTIONAL): Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.
- `--gpu-type "A30, H100_PCIe"` (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- `--only-multi-gpu-test` (OPTIONAL): Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--disable-multi-gpu-test` (OPTIONAL): Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- `--add-multi-gpu-test` (OPTIONAL): Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.
- `--post-merge` (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- `--extra-stage "H100_PCIe-[Post-Merge]-1, xxx"` (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md. Example invocations combining these flags are sketched after the reuse-pipeline section below.

kill
`kill`

Kill all running builds associated with the pull request.
skip
`skip --comment COMMENT`

Skip testing for the latest commit on the pull request.
--comment "Reason for skipping build/test"is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.reuse-pipeline
`reuse-pipeline`

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.
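For reference, a few example comments composed from the flags documented above (the stage and GPU names are the placeholder values used in the help text, and the skip reason is illustrative):

```
/bot run
/bot run --disable-fail-fast --skip-test
/bot run --stage-list "A10-1" --gpu-type "H100_PCIe"
/bot skip --comment "docs-only change, no test impact"
/bot reuse-pipeline
```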
Summary by CodeRabbit

Bug Fixes

- KV cache block reuse now works with speculative decoding in the two-model engine.

Tests

- Added a new test file and additional test cases validating KV cache reuse with speculative decoding.