
Conversation

@zaristei
Contributor

@zaristei zaristei commented Aug 7, 2025

Overview:

Fixes the unpinned ARM version for FlashInfer in the vLLM container.

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • Chores
    • Updated the version format for FlashInfer in the build configuration.
    • Unified the FlashInfer installation process across all architectures for consistency.

@copy-pr-bot

copy-pr-bot bot commented Aug 7, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@github-actions

github-actions bot commented Aug 7, 2025

👋 Hi zaristei! Thank you for contributing to ai-dynamo/dynamo.

Just a reminder: the NVIDIA Test Github Validation CI runs an essential subset of the testing framework to quickly catch errors. Your PR reviewers may elect to test the changes comprehensively before approving them.

🚀

@github-actions github-actions bot added the external-contribution Pull request is from an external contributor label Aug 7, 2025
@zaristei
Contributor Author

zaristei commented Aug 7, 2025

Will try building and verifying that the FlashInfer version is correct on both x86 and ARM.

@zaristei zaristei changed the title Hotfix: Pin ARM version for FlashInfer Hotfix: Pin ARM version for VLLM FlashInfer Aug 7, 2025
@zaristei zaristei marked this pull request as ready for review August 8, 2025 00:35
@dmitry-tokarev-nv dmitry-tokarev-nv changed the title Hotfix: Pin ARM version for VLLM FlashInfer fix: Pin ARM version for VLLM FlashInfer Aug 8, 2025
@github-actions github-actions bot added the fix label Aug 8, 2025
@dmitry-tokarev-nv
Contributor

/ok to test 023bdae

@coderabbitai
Contributor

coderabbitai bot commented Aug 8, 2025

Walkthrough

The changes update the FlashInfer version reference in the Dockerfile and modify the installation script to always install FlashInfer from the source repository, removing the previous conditional logic for ARM64 architecture. The installation process is now consistent across all architectures.
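For orientation, here is a minimal sketch of the shape of change the walkthrough describes. The removed wheel-install branch is an assumption (the pre-change script lines are not quoted in this thread); only the unified clone-and-checkout steps match the review diff below.

```bash
# Sketch only: the real install_vllm.sh lines are not quoted in this thread.

# Before (assumed): source build on ARM64, prebuilt wheel elsewhere.
if [ "$(uname -m)" = "aarch64" ]; then
    git clone https://github.com/flashinfer-ai/flashinfer.git --recursive
    cd flashinfer && git checkout "$FLASHINF_REF"
else
    uv pip install "flashinfer-python"   # hypothetical wheel branch
fi

# After: one unconditional source install for every architecture.
git clone https://github.com/flashinfer-ai/flashinfer.git --recursive
cd flashinfer && git checkout "$FLASHINF_REF"
```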

Changes

Cohort / File(s) | Change Summary
container/Dockerfile.vllm (Dockerfile FlashInfer version update) | Changed the FLASHINF_REF argument value from v0.2.8rc1 to v0.2.8.rc1.
container/deps/vllm/install_vllm.sh (FlashInfer installation logic) | Removed the architecture-based conditional; FlashInfer is now always installed from the repository at the specified git reference on all architectures.
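At build time, the corrected ref can be passed or overridden via the build arg. An illustrative invocation follows; the image tag and build context are assumptions, only FLASHINF_REF and the Dockerfile path come from this PR:

```bash
# Build the vLLM container image with the fixed FlashInfer ref.
docker build \
  --build-arg FLASHINF_REF=v0.2.8.rc1 \
  -f container/Dockerfile.vllm \
  -t dynamo-vllm:local \
  .
```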

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

In Docker’s warren, a version hops anew,
From rc1 to dot-rc1, the number grew.
The script now builds from source each time,
No more forks for ARM—just one install line!
With every hop, our builds align,
🐇✨ Consistency shines, and all is fine!

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
container/deps/vllm/install_vllm.sh (1)

28-29: Synchronise default FlashInfer ref with Dockerfile

The default still points at v0.2.8rc1, while the Dockerfile now passes v0.2.8.rc1. Keeping these values aligned prevents accidental installs of two different revisions when the script is invoked outside the Docker build.

-FLASHINF_REF="v0.2.8rc1"
+FLASHINF_REF="v0.2.8.rc1"
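If the two values need to stay aligned going forward, a small guard could catch drift in CI. This is a hypothetical helper, not part of this PR; it assumes GNU grep and that the Dockerfile's ARG line is unquoted:

```bash
#!/usr/bin/env bash
# Hypothetical drift check between the Dockerfile build arg and the
# script's default. Both file paths are from this PR; the check is not.
set -euo pipefail
dockerfile_ref=$(grep -oP '^ARG FLASHINF_REF=\K\S+' container/Dockerfile.vllm)
script_ref=$(grep -oP '^FLASHINF_REF="\K[^"]+' container/deps/vllm/install_vllm.sh)
if [ "$dockerfile_ref" != "$script_ref" ]; then
    echo "FlashInfer refs diverged: Dockerfile=$dockerfile_ref script=$script_ref" >&2
    exit 1
fi
```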
♻️ Duplicate comments (1)
container/deps/vllm/install_vllm.sh (1)

170-174: Use shallow clone to speed up FlashInfer source install

Cloning the full repository (~250 MB) noticeably increases build times. A shallow fetch of the required ref is sufficient and aligns with prior feedback in PR #2020.

-cd $INSTALLATION_DIR
-git clone https://github.com/flashinfer-ai/flashinfer.git --recursive
-cd flashinfer
-git checkout $FLASHINF_REF
+cd $INSTALLATION_DIR
+# keep --recursive: FlashInfer vendors dependencies as git submodules
+git clone --depth 1 --recursive --branch "$FLASHINF_REF" \
+          https://github.com/flashinfer-ai/flashinfer.git
+cd flashinfer
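A quick sanity check after the shallow clone (illustrative, not part of the suggestion) confirms the expected ref was fetched before the build proceeds:

```bash
# Verify the clone landed on the intended ref/tag.
git -C flashinfer log -1 --oneline
git -C flashinfer describe --tags --always
```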
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a3f7a39 and 023bdae.

📒 Files selected for processing (2)
  • container/Dockerfile.vllm (1 hunks)
  • container/deps/vllm/install_vllm.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: zaristei
PR: ai-dynamo/dynamo#2020
File: container/deps/vllm/install_vllm.sh:115-118
Timestamp: 2025-07-21T00:10:56.947Z
Learning: Graceful fallback for PyTorch wheel installation is broken on ARM architecture, so immediate exit on pinned version failure is preferred over fallback mechanisms in container/deps/vllm/install_vllm.sh for ARM64.
Learnt from: dmitry-tokarev-nv
PR: ai-dynamo/dynamo#2300
File: pyproject.toml:64-66
Timestamp: 2025-08-05T22:51:59.230Z
Learning: The ai-dynamo/dynamo project does not ship ARM64 wheels, so platform markers to restrict dependencies to x86_64 are not needed in pyproject.toml dependencies.
📚 Learning: 2025-07-21T00:10:56.947Z
Learnt from: zaristei
PR: ai-dynamo/dynamo#2020
File: container/deps/vllm/install_vllm.sh:115-118
Timestamp: 2025-07-21T00:10:56.947Z
Learning: Graceful fallback for PyTorch wheel installation is broken on ARM architecture, so immediate exit on pinned version failure is preferred over fallback mechanisms in container/deps/vllm/install_vllm.sh for ARM64.

Applied to files:

  • container/Dockerfile.vllm
  • container/deps/vllm/install_vllm.sh
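A minimal sketch of the fail-fast pattern this learning describes; the version pin is a placeholder and the messages are illustrative, not the script's actual code:

```bash
# Fail fast on ARM64: exit immediately if the pinned wheel install
# fails, with no fallback, per the learning above.
TORCH_VERSION="${TORCH_VERSION:?set the pinned torch version}"
if [ "$(uname -m)" = "aarch64" ]; then
    uv pip install "torch==${TORCH_VERSION}" || {
        echo "Pinned PyTorch ${TORCH_VERSION} failed to install on ARM64" >&2
        exit 1
    }
fi
```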
📚 Learning: 2025-07-22T10:22:28.972Z
Learnt from: ptarasiewiczNV
PR: ai-dynamo/dynamo#2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.972Z
Learning: The `--torch-backend=auto` flag works with vLLM installations via uv pip install, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match PyTorch distribution with container CUDA versions.

Applied to files:

  • container/deps/vllm/install_vllm.sh
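Based on this learning, a vLLM install that lets the build pick a matching PyTorch distribution might look like the line below; the bare package spec is illustrative, while the flag itself is what the learning documents:

```bash
# Per the learning: --torch-backend=auto is handled during the vLLM
# install and matches the PyTorch distribution to the container's CUDA.
uv pip install vllm --torch-backend=auto
```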

@github-actions

github-actions bot commented Sep 7, 2025

This PR is stale because it has been open for 30 days with no activity. Remove the Stale label or comment, or it will be closed in 5 days.

@github-actions github-actions bot added the Stale label Sep 7, 2025
@github-actions

This PR has been closed due to inactivity. If you believe this PR is still relevant, please feel free to reopen it with additional context or information.

@github-actions github-actions bot closed this Sep 12, 2025

Labels

  • external-contribution — Pull request is from an external contributor
  • fix
  • size/S
  • Stale

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants