
Conversation

GuanLuo (Contributor) commented Sep 5, 2025

Overview:

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • New Features
    • Updated multimodal prompt template to include a system instruction and explicit vision tokens, improving consistency and reliability when handling image inputs.
    • Enhanced compatibility with multimodal models by structuring user/assistant turns and image placeholders more clearly, leading to more predictable responses.

coderabbitai bot (Contributor) commented Sep 5, 2025

Walkthrough

Updates the Processor's --prompt-template in examples/multimodal/deploy/agg_qwen.yaml from a simple USER/ASSISTANT format to a structured multimodal template including a system instruction and vision tokens (<|vision_start|>, image pad, <|vision_end|>, assistant tag). No other lines in the file are modified.

Changes

Cohort / File(s): Multimodal deploy config (examples/multimodal/deploy/agg_qwen.yaml)
Summary of Changes: Replaced the Processor --prompt-template string with a system-prefixed multimodal template using explicit vision tokens and image placeholders; no other lines changed.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant U as User
  participant P as Processor (agg_qwen)
  participant M as Qwen Model

  U->>P: Submit prompt + image
  Note over P: Build multimodal prompt<br/>- System instruction<br/>- User text<br/>- <|vision_start|> image pad <|vision_end|><br/>- Assistant tag
  P->>M: Send formatted prompt
  M-->>P: Generate assistant response
  P-->>U: Return response
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Poem

I nibble words, a prompt so keen,
From USER lines to vision seen—
With system whispers, tags that gleam,
I hop through images in the stream.
Qwen replies, concise and bright—
A carrot-coded, multimodal bite! 🥕✨



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
examples/multimodal/deploy/agg_qwen.yaml (2)

68-68: Verify newline handling: '\n' may remain literal with current quoting.

A YAML single-quoted scalar does not process backslash escapes, and the shell's double quotes leave them alone as well, so each \n reaches processor.py as two literal characters rather than a real newline. If processor.py expects actual line breaks, switch to a YAML literal block scalar (|) so the template contains real newlines.

Apply:

-            - 'python3 components/processor.py --model Qwen/Qwen2.5-VL-7B-Instruct --prompt-template "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|><prompt><|im_end|>\n<|im_start|>assistant\n"'
+            - |-
+              python3 components/processor.py --model Qwen/Qwen2.5-VL-7B-Instruct --prompt-template "<|im_start|>system
+              You are a helpful assistant.<|im_end|>
+              <|im_start|>user
+              <|vision_start|><|image_pad|><|vision_end|><prompt><|im_end|>
+              <|im_start|>assistant
+              "

Note: a literal block scalar (|-) is used rather than a folded one (>-), because folding would turn the single line breaks back into spaces. The shell tolerates the multi-line double-quoted argument and passes the newlines through verbatim.

Run-time check suggestion:

  • Log the received template in processor.py (repr-style) to confirm embedded line breaks; see the sketch below.
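
For instance, a minimal sketch of such a check, assuming processor.py parses its flags with argparse (the parser setup here is illustrative, not the actual processor.py code):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--prompt-template", type=str, required=True)
args, _ = parser.parse_known_args()

# repr() makes the difference visible: a real line break shows up as \n
# inside the quotes, while a literal backslash-n shows up as \\n.
print(f"prompt template received: {args.prompt_template!r}")
```

Run the deployed command once with this in place to see which variant the YAML quoting actually produced.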

68-68: Consider externalizing the prompt template.

For readability and easier edits, mount a ConfigMap file and use a flag like --prompt-template-file instead of inlining the entire template.

Example change to args (volume/ConfigMap omitted here for brevity):

-            - |-
-              python3 components/processor.py --model Qwen/Qwen2.5-VL-7B-Instruct --prompt-template "<|im_start|>system
-              You are a helpful assistant.<|im_end|>
-              <|im_start|>user
-              <|vision_start|><|image_pad|><|vision_end|><prompt><|im_end|>
-              <|im_start|>assistant
-              "
+            - python3 components/processor.py --model Qwen/Qwen2.5-VL-7B-Instruct --prompt-template-file /etc/prompts/qwen_vl_chat.tmpl
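
On the processor side, the suggested flag could be wired up in a few lines. A sketch only: --prompt-template-file does not exist in processor.py today, and the mount path above is an assumption:

```python
import argparse
import pathlib

parser = argparse.ArgumentParser()
# Hypothetical flag: load the template from a file mounted from a ConfigMap
# instead of passing the whole string inline on the command line.
parser.add_argument("--prompt-template-file", type=pathlib.Path, required=True)
args, _ = parser.parse_known_args()

# Real newlines survive the file round-trip, so the YAML/shell quoting
# pitfalls from the previous comment disappear entirely.
prompt_template = args.prompt_template_file.read_text()
print(f"loaded template: {prompt_template!r}")
```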
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 0d9c899 and a72636d.

📒 Files selected for processing (1)
  • examples/multimodal/deploy/agg_qwen.yaml (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Build and Test - vllm
  • GitHub Check: Build and Test - dynamo
🔇 Additional comments (2)
examples/multimodal/deploy/agg_qwen.yaml (2)

68-68: Template looks correct for Qwen2.5-VL chat format.

Roles/tokens and trailing assistant tag are consistent with Qwen’s chat template. Nice catch.
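
One way to cross-check the string, assuming the chat template bundled with the model on Hugging Face handles image content entries (recent Qwen-VL releases do):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe this image."},
    ]},
]
# Render the reference prompt and diff it against the YAML template
# (with <prompt> standing in for the user text).
print(tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```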


68-68: Confirm image injection path matches tokenized prompt.

Using <|vision_start|><|image_pad|><|vision_end|> assumes the runtime consumes text tokens for images; some vLLM paths expect images via a separate multimodal payload. Verify your Processor actually inserts/attaches the image(s) accordingly.

If multiple images are possible, ensure you repeat the full <|vision_start|><|image_pad|><|vision_end|> block per image, or programmatically expand the template; a sketch follows.
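
A sketch of that expansion, assuming the processor knows the image count per request (the function name and plumbing are illustrative):

```python
VISION_BLOCK = "<|vision_start|><|image_pad|><|vision_end|>"

def expand_vision_blocks(template: str, num_images: int) -> str:
    """Repeat the vision block once per attached image.

    The single-image template contains exactly one vision block; replace
    that one occurrence with num_images consecutive blocks.
    """
    return template.replace(VISION_BLOCK, VISION_BLOCK * num_images, 1)
```

For two images this turns the single block into two back-to-back blocks, matching one placeholder per attached image.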

indrajit96 (Contributor) left a comment

LGTM!
Side question:
Is there a way we can verify such deployments locally using minikube or something similar?
And if so, do we have documented steps for that?

GuanLuo (Contributor, Author) commented Sep 5, 2025

LGTM! Side question: Is there a way we can verify such deployments locally using minikube or something similar? And if so, do we have documented steps for that?

Probably question for @biswapanda or @atchernych

@GuanLuo GuanLuo merged commit 9ef1328 into main Sep 5, 2025
11 checks passed
@GuanLuo GuanLuo deleted the GuanLuo-patch-2 branch September 5, 2025 20:05
GuanLuo added a commit that referenced this pull request Sep 5, 2025
saturley-hall pushed a commit that referenced this pull request Sep 5, 2025
GavinZhu-GMI pushed a commit to GavinZhu-GMI/dynamo that referenced this pull request Sep 8, 2025
nnshah1 pushed a commit that referenced this pull request Sep 8, 2025
nnshah1 pushed a commit that referenced this pull request Sep 8, 2025