
Conversation

@telemaco telemaco (Collaborator) commented Dec 9, 2025

  • Migrate the test/system/050-pull.bats bats test to e2e pytest.
  • An Ollama fixture has been added to test against an isolated Ollama instance.
  • The tests cover a wide range of functionalities, including:
    • Pulling models from different transports like Ollama, Hugging Face, OCI, and local files.
    • Verifying the use of local caches from Ollama and Hugging Face to speed up subsequent pulls.
    • Testing error handling for non-existent models and endianness mismatches.
    • Ensuring proper interaction with authenticated OCI registries for pulling private models.
    • Validating the handling of models with multiple layers and references.

Summary by Sourcery

Add end-to-end pytest coverage for the ramalama pull command across multiple model transports and error cases.

Enhancements:

  • Introduce an isolated Ollama server fixture for e2e tests to exercise Ollama-based pulls and caching behavior.
  • Add generic pytest skip markers for Ollama presence and host endianness-dependent scenarios.

Build:

  • Add requests as a development dependency required by new e2e tests.

Tests:

  • Introduce comprehensive e2e pytest suite for ramalama pull, covering Ollama, Hugging Face, OCI, file transports, and model listing.
  • Add tests for error handling scenarios including missing MODEL argument, non-existent models, and endian mismatch verification failures.
  • Add tests validating multi-layer model downloads, multiple tag references, authenticated pulls from a test container registry, and reuse of Ollama and Hugging Face caches to speed up pulls.

@gemini-code-assist gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @telemaco, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the testing suite by migrating the pull command's system tests from bats to a new pytest-based end-to-end framework. It introduces a dedicated fixture for isolated Ollama server testing and expands test coverage to include diverse model sources, caching behaviors, error conditions, and OCI registry interactions, ensuring the pull command's robustness and reliability.

Highlights

  • Test Migration: Migrated the pull command's system test from bats to a new pytest-based end-to-end framework, enhancing test maintainability and capabilities.
  • Ollama Test Fixture: Introduced an isolated Ollama server fixture, allowing for reliable and independent testing of Ollama-related pull functionalities.
  • Comprehensive Pull Tests: Added extensive end-to-end tests for the pull command, covering various model transports including Ollama, Hugging Face, OCI registries, and local files.
  • Caching Verification: Implemented tests to verify the effective use of local caches for both Ollama and Hugging Face models, ensuring faster subsequent pulls.
  • Robust Error Handling: Included tests for error conditions such as pulling non-existent models and handling endianness mismatches, improving the command's resilience.
  • OCI Registry and Multi-Layer Models: Validated proper interaction with authenticated OCI registries for private models and ensured correct handling of models with multiple layers and references.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@sourcery-ai sourcery-ai bot (Contributor) commented Dec 9, 2025

Reviewer's Guide

Adds a new end-to-end pytest suite for the ramalama pull command, introducing an isolated Ollama server fixture, new skip markers, a requests dependency, and comprehensive tests that cover transports, caching, error handling, and authenticated registry interactions.

File-Level Changes

Introduce an OllamaServer helper and pytest fixture to run tests against an isolated Ollama instance.
  • Add OllamaServer dataclass that manages lifecycle of a local ollama serve process with configurable host, models directory, and startup timeout
  • Implement health-check logic in OllamaServer.wait() that polls the Ollama HTTP endpoint and asserts the server is running
  • Provide OllamaServer.pull_model() helper to pre-populate the Ollama cache via ollama pull
  • Expose an ollama_server function-scoped pytest fixture that starts Ollama on a random high port with a temporary models directory and yields the server to tests
Files: test/e2e/conftest.py
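
A rough sketch of how such a helper and fixture could look, inferred only from the behavior described above; the names, defaults, and environment handling in the actual test/e2e/conftest.py may differ.

```python
import os
import socket
import subprocess
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Optional

import pytest
import requests


@dataclass
class OllamaServer:
    """Manages the lifecycle of a local `ollama serve` process."""

    host: str
    models_dir: Path
    startup_timeout: float = 30.0
    _proc: Optional[subprocess.Popen] = field(default=None, init=False)

    @property
    def url(self) -> str:
        return f"http://{self.host}"

    def _env(self) -> dict:
        return {**os.environ, "OLLAMA_HOST": self.host, "OLLAMA_MODELS": str(self.models_dir)}

    def start(self) -> None:
        self._proc = subprocess.Popen(["ollama", "serve"], env=self._env())
        self.wait()

    def wait(self) -> None:
        # Poll the HTTP endpoint until the server answers or the timeout expires.
        deadline = time.monotonic() + self.startup_timeout
        while time.monotonic() < deadline:
            try:
                requests.get(self.url, timeout=0.5)
                return
            except requests.ConnectionError:
                time.sleep(0.5)
        raise AssertionError("ollama server did not become ready in time")

    def pull_model(self, model: str) -> None:
        # Pre-populate the Ollama cache so tests can exercise cached pulls.
        subprocess.run(["ollama", "pull", model], check=True, env=self._env())

    def stop(self) -> None:
        if self._proc is not None:
            self._proc.terminate()
            self._proc.wait(timeout=10)


@pytest.fixture
def ollama_server(tmp_path):
    # Bind to port 0 so the kernel picks a free high port, then hand it to ollama.
    with socket.socket() as s:
        s.bind(("127.0.0.1", 0))
        port = s.getsockname()[1]
    server = OllamaServer(host=f"127.0.0.1:{port}", models_dir=tmp_path / "ollama-models")
    server.models_dir.mkdir(parents=True, exist_ok=True)
    server.start()
    yield server
    server.stop()
```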
Add shared pytest skip markers for Ollama presence and machine endianness.
  • Define skip_if_no_ollama to conditionally skip tests when the ollama binary is not installed
  • Define skip_if_big_endian_machine and skip_if_little_endian_machine markers based on sys.byteorder
Files: test/conftest.py
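
These markers are straightforward to express with pytest.mark.skipif; a minimal sketch, assuming the binary check is done with shutil.which (the real test/conftest.py may phrase it differently):

```python
import shutil
import sys

import pytest

# Skip tests that need a locally installed ollama binary.
skip_if_no_ollama = pytest.mark.skipif(
    shutil.which("ollama") is None,
    reason="ollama is not installed",
)

# Skip tests that only make sense for a particular host byte order.
skip_if_big_endian_machine = pytest.mark.skipif(
    sys.byteorder == "big",
    reason="test requires a little-endian host",
)
skip_if_little_endian_machine = pytest.mark.skipif(
    sys.byteorder == "little",
    reason="test requires a big-endian host",
)
```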
Declare requests as a development dependency for e2e tests that call HTTP endpoints.
  • Extend the dev extra in pyproject.toml to include the requests package
Files: pyproject.toml
Create a comprehensive e2e pytest suite for ramalama pull covering transports, caching, multi-layer models, error paths, and registry integration.
  • Add tests for missing MODEL argument and non-existent models to validate CLI error codes and messages
  • Parametrize tests for pulling models via Ollama, Hugging Face, OCI, and file transports, including RAMALAMA_TRANSPORT-based resolution and list output validation
  • Add coverage for multi-layer Hugging Face models and multiple tag references, including behavior of ramalama rm with shared snapshots
  • Add endianness validation tests that assert correct error messages when host and model endianness mismatch, guarded by endian-specific skip markers
  • Implement caching tests that compare initial pull times vs. cached pulls using Ollama and Hugging Face caches
  • Add authenticated container registry test that logs in, pushes a fake model, pulls it back via OCI, verifies listing, and cleans up the remote image
Files: test/e2e/test_pull.py
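
As an illustration of the shape these tests take, here is a simplified, hypothetical parametrized pull test. The model references and the use of plain subprocess calls are illustrative only (the real suite uses its own workspace helper, RamalamaExecWorkspace, as seen in the review comments below), while the --store flag and the "name" field in the model listing match the snippets quoted there; the --json flag on ramalama list is assumed here.

```python
import json
import subprocess

import pytest

# Hypothetical model references for illustration; the real suite defines its own set.
TRANSPORT_CASES = [
    pytest.param("ollama://smollm:135m", "smollm", id="ollama"),
    pytest.param("hf://ggml-org/tiny-llamas/stories15M-q4_0.gguf", "stories15M", id="huggingface"),
]


@pytest.mark.parametrize("model, expected", TRANSPORT_CASES)
def test_pull_and_list(tmp_path, model, expected):
    store = tmp_path / "store"
    ramalama = ["ramalama", "--store", str(store)]

    # Pull the model into an isolated store so runs do not interfere with each other.
    subprocess.run([*ramalama, "pull", model], check=True)

    # The pulled model should show up in the listing under its resolved name.
    out = subprocess.run([*ramalama, "list", "--json"], check=True,
                         capture_output=True, text=True)
    names = [entry["name"] for entry in json.loads(out.stdout)]
    assert any(expected in name for name in names)
```

The caching tests described above follow the same pattern, comparing the wall-clock time of an initial pull against a pull that can reuse a pre-populated Ollama or Hugging Face cache.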

Possibly linked issues

  • #unknown: The PR implements the pytest migration for test/system/050-pull.bats, directly contributing to the e2e migration issue.

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request is a great addition, migrating the pull command tests to a comprehensive e2e pytest suite. The new OllamaServer fixture for isolated testing is a solid improvement. The tests cover a wide range of functionalities, from different transports to cache utilization and error handling.

My review includes suggestions to enhance the test code's quality and robustness. I've pointed out minor issues like unnecessary print statements and comment typos. More significantly, I've recommended changes to make a test container-engine-agnostic by removing a hardcoded command, which will improve test coverage. I've also highlighted a FIXME comment that needs attention.

assert fake_model_registry_url in [model["name"] for model in model_list]

# Clean fake image
ctx.check_call(["podman", "rmi", fake_model_registry_url.replace("oci://", "")])

Severity: high

The podman command is hardcoded here, which is why the test is decorated with @skip_if_docker. To make this test work with both podman and docker, you should:

  1. Remove @skip_if_docker from line 312.
  2. Add the container_engine fixture to the test signature on line 313: def test_pull_with_registry(container_registry, container_engine):
  3. Use the container_engine variable here.
Suggested change
ctx.check_call(["podman", "rmi", fake_model_registry_url.replace("oci://", "")])
ctx.check_call([container_engine, "rmi", fake_model_registry_url.replace("oci://", "")])
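
If a container_engine fixture is not already available in the e2e conftest, a minimal version could look like the following; the fixture name and selection order here are assumptions, not the project's confirmed API.

```python
import shutil

import pytest


@pytest.fixture(scope="session")
def container_engine():
    # Prefer podman and fall back to docker; skip when neither binary is present.
    for engine in ("podman", "docker"):
        if shutil.which(engine):
            return engine
    pytest.skip("no container engine available")
```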

Comment on lines 123 to 124
print(self.url)
print(requests.get(self.url, timeout=0.5))

Severity: medium

These print statements should be removed from the test helper as they add noise to test execution logs, making debugging more difficult. The requests.get call is sufficient on its own to check for server availability.

Suggested change
print(self.url)
print(requests.get(self.url, timeout=0.5))
requests.get(self.url, timeout=0.5)

with RamalamaExecWorkspace(env_vars=env_vars) as ctx:
ramalama_cli = ["ramalama", "--store", ctx.storage_dir]

# Ensure huggingface cache exists and is set as environment variable

Severity: medium

This comment appears to have a typo. It should likely refer to ollama cache instead of huggingface cache.

Suggested change
# Ensure huggingface cache exists and is set as environment variable
# Ensure ollama cache exists and is set as environment variable

ctx.environ["OLLAMA_HOST"] = ollama_server.url
ctx.environ["OLLAMA_MODELS"] = ollama_server.models_dir.as_posix()

# Pull image using huggingface cli

Severity: medium

This comment seems to have a typo. It should refer to the ollama CLI, not the huggingface CLI.

Suggested change
# Pull image using huggingface cli
# Pull image using ollama cli

@skip_if_no_container
@skip_if_docker
def test_pull_with_registry(container_registry):
# FIXME: Check with the maintainers if some of the original tests with registry are necessary

Severity: medium

This FIXME comment should be addressed. Please either remove it if it's no longer relevant, or create an issue to track this work and reference the issue number in the comment.

@telemaco telemaco force-pushed the e2e-pytest-pull-cmd branch from 04966f3 to 9dbc13a on December 9, 2025 at 13:53
- Migrate the `test/system/050-pull.bats` bats test to e2e pytest.
- An Ollama fixture has been added to test against an isolated Ollama instance.
- The tests cover a wide range of functionalities, including:
  - Pulling models from different transports like Ollama, Hugging Face, OCI, and local files.
  - Verifying the use of local caches from Ollama and Hugging Face to speed up subsequent pulls.
  - Testing error handling for non-existent models and endianness mismatches.
  - Ensuring proper interaction with authenticated OCI registries for pulling private models.
  - Validating the handling of models with multiple layers and references.

Signed-off-by: Roberto Majadas <[email protected]>
@telemaco telemaco force-pushed the e2e-pytest-pull-cmd branch from 9dbc13a to 96395fa on December 12, 2025 at 10:30