feat: Draft ollama test #566
Walkthrough: A new GitHub Actions workflow file (`.github/workflows/test_ollama.yml`) is introduced.
Actionable comments posted: 1
🧹 Nitpick comments (4)
.github/workflows/test_ollama.yml (4)
23-26: Upgrade Setup-Python Action Version
The workflow uses `actions/setup-python@v4`, but static analysis suggests that this runner version may be outdated. Consider upgrading to a newer version (for example, `actions/setup-python@v5`) to benefit from improved performance and bug fixes.
🧰 Tools
🪛 actionlint (1.7.4)
24-24: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
52-52: Remove Trailing Spaces
There are trailing spaces present on this line. Cleaning up the whitespace helps maintain consistency and prevents potential YAML parsing issues.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 52-52: trailing spaces
(trailing-spaces)
59-70: Align Reusable Workflow Secrets
The reusable workflow (./.github/workflows/reusable_python_example.yml) does not define the following secrets:
- `LLM_PROVIDER` (line 63)
- `LLM_ENDPOINT` (line 64)
- `LLM_MODEL` (line 65)
- `EMBEDDING_PROVIDER` (line 66)
- `EMBEDDING_MODEL` (line 67)
- `EMBEDDING_ENDPOINT` (line 68)
- `EMBEDDING_DIMENSIONS` (line 69)
- `HUGGINGFACE_TOKENIZER` (line 70)

Please verify if these secrets are necessary. If they are, update the reusable workflow to accept these inputs; otherwise, consider removing them from the calling workflow.
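If the secrets are kept, a minimal sketch of how the reusable workflow's `workflow_call` trigger could declare them (names mirror the calling workflow; the inputs and secrets the reusable workflow already defines, per actionlint, would remain alongside):

```yaml
# .github/workflows/reusable_python_example.yml (sketch, not the actual file)
on:
  workflow_call:
    secrets:
      LLM_PROVIDER:
        required: false
      LLM_ENDPOINT:
        required: false
      LLM_MODEL:
        required: false
      EMBEDDING_PROVIDER:
        required: false
      EMBEDDING_MODEL:
        required: false
      EMBEDDING_ENDPOINT:
        required: false
      EMBEDDING_DIMENSIONS:
        required: false
      HUGGINGFACE_TOKENIZER:
        required: false
```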
🧰 Tools
🪛 actionlint (1.7.4)
63-63: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
64-64: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
65-65: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
66-66: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
67-67: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
68-68: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
69-69: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
70-70: Enforce Newline at End of File
The file is missing a newline at the end. Adding a newline improves compatibility with various tools and conforms to best practices.
🧰 Tools
🪛 actionlint (1.7.4)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
24-24: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
63-63: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
64-64: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
65-65: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
66-66: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
67-67: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
68-68: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
69-69: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 52-52: trailing spaces
(trailing-spaces)
[warning] 55-55: wrong indentation: expected 4 but found 6
(indentation)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (15)
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: docker-compose-test
.github/workflows/test_ollama.yml (outdated)
```yaml
  run_simple_example_test:
      uses: ./.github/workflows/reusable_python_example.yml
```
Fix YAML Indentation
The key at line 55 is indented with 6 spaces while the expected indentation is 4 spaces under the job definition. Adjust the indentation according to YAML standards to avoid parsing errors.
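For reference, a sketch of the expected shape, with keys nested four spaces under the job name (not the full file):

```yaml
jobs:
  run_simple_example_test:
    uses: ./.github/workflows/reusable_python_example.yml
    with:
      example-location: ./examples/python/simple_example.py
```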
🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 55-55: wrong indentation: expected 4 but found 6
(indentation)
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/test_ollama.yml (1)
55-70: 🛠️ Refactor suggestion ⚠️ Potential issue
Fix YAML Indentation in run_simple_example_test Job
The reusable workflow job (run_simple_example_test) is mis-indented. The keys under this job (such as `uses:`, `with:`, and `secrets:`) are indented with 6 spaces instead of the expected 4 spaces relative to the job key. This can cause YAML parsing issues.

```diff
-      uses: ./.github/workflows/reusable_python_example.yml
-      with:
-        example-location: ./examples/python/simple_example.py
-      secrets:
-        OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
-        GRAPHISTRY_USERNAME: ${{ secrets.GRAPHISTRY_USERNAME }}
-        GRAPHISTRY_PASSWORD: ${{ secrets.GRAPHISTRY_PASSWORD }}
-        LLM_API_KEY: "ollama"
-        LLM_PROVIDER: "ollama"
-        LLM_ENDPOINT: "http://localhost:11434"
-        LLM_MODEL: "ollama/llama3.2"
-        EMBEDDING_PROVIDER: "ollama"
-        EMBEDDING_MODEL: "avr/sfr-embedding-mistral:latest"
-        EMBEDDING_ENDPOINT: "http://localhost:11434/api/embeddings"
-        EMBEDDING_DIMENSIONS: 4096
-        HUGGINGFACE_TOKENIZER: "Salesforce/SFR-Embedding-Mistral"
+    uses: ./.github/workflows/reusable_python_example.yml
+    with:
+      example-location: ./examples/python/simple_example.py
+    secrets:
+      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+      GRAPHISTRY_USERNAME: ${{ secrets.GRAPHISTRY_USERNAME }}
+      GRAPHISTRY_PASSWORD: ${{ secrets.GRAPHISTRY_PASSWORD }}
+      LLM_API_KEY: "ollama"
+      LLM_PROVIDER: "ollama"
+      LLM_ENDPOINT: "http://localhost:11434"
+      LLM_MODEL: "ollama/llama3.2"
+      EMBEDDING_PROVIDER: "ollama"
+      EMBEDDING_MODEL: "avr/sfr-embedding-mistral:latest"
+      EMBEDDING_ENDPOINT: "http://localhost:11434/api/embeddings"
+      EMBEDDING_DIMENSIONS: 4096
+      HUGGINGFACE_TOKENIZER: "Salesforce/SFR-Embedding-Mistral"
```
🧰 Tools
🪛 actionlint (1.7.4)
63-63: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
64-64: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
65-65: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
66-66: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
67-67: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
68-68: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
69-69: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
[warning] 55-55: wrong indentation: expected 4 but found 6
(indentation)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
🧹 Nitpick comments (3)
.github/workflows/test_ollama.yml (3)
23-27: Update Setup-Python Action Version
The workflow currently uses `actions/setup-python@v4`. Static analysis indicates that this version is outdated. It is recommended to update to a newer version (for example, `actions/setup-python@v5`) for improved stability and compatibility.

```diff
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
```
🧰 Tools
🪛 actionlint (1.7.4)
24-24: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
50-52: Remove Trailing Spaces in the Embedding Model Step
Line 52 contains trailing spaces. Removing these extraneous spaces will improve YAML formatting and avoid linting errors.

```diff
-          curl -X POST http://localhost:11434/api/pull -d '{"name": "avr/sfr-embedding-mistral:latest"}' 
+          curl -X POST http://localhost:11434/api/pull -d '{"name": "avr/sfr-embedding-mistral:latest"}'
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 52-52: trailing spaces
(trailing-spaces)
70-70: Ensure Newline at End of File
There is no newline at the end of the file, which triggers YAML lint errors. Adding a newline at the end of the file will resolve this issue.
🧰 Tools
🪛 actionlint (1.7.4)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
24-24: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
63-63: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
64-64: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
65-65: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
66-66: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
67-67: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
68-68: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
69-69: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 52-52: trailing spaces
(trailing-spaces)
[warning] 55-55: wrong indentation: expected 4 but found 6
(indentation)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (21)
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: docker-compose-test
🔇 Additional comments (4)
.github/workflows/test_ollama.yml (4)
1-7: Workflow Metadata and Trigger Configuration Approved
The workflow name and event triggers are clearly defined. The manual dispatch and pull request triggers are appropriate for the intended testing.

28-35: Cache Pip Packages Step Looks Good
The configuration for caching pip packages is set up correctly to speed up subsequent runs.

36-45: Dependency Installation and Ollama Readiness Check are Correct
Both the dependency installation step and the loop waiting for the Ollama service to be ready are implemented properly.

46-49: Pull Model Step Verified
The step to pull the model via the API is clear and syntactically correct.
.github/workflows/test_ollama.yml (outdated)
```yaml
        OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        GRAPHISTRY_USERNAME: ${{ secrets.GRAPHISTRY_USERNAME }}
        GRAPHISTRY_PASSWORD: ${{ secrets.GRAPHISTRY_PASSWORD }}
        LLM_API_KEY: "ollama"
        LLM_PROVIDER: "ollama"
        LLM_ENDPOINT: "http://localhost:11434"
        LLM_MODEL: "ollama/llama3.2"
        EMBEDDING_PROVIDER: "ollama"
        EMBEDDING_MODEL: "avr/sfr-embedding-mistral:latest"
        EMBEDDING_ENDPOINT: "http://localhost:11434/api/embeddings"
        EMBEDDING_DIMENSIONS: 4096
        HUGGINGFACE_TOKENIZER: "Salesforce/SFR-Embedding-Mistral"
```
(No newline at end of file.)
Verify Consistency of Provided Secrets with Reusable Workflow
The secrets specified in the run_simple_example_test job (such as LLM_PROVIDER, LLM_ENDPOINT, LLM_MODEL, EMBEDDING_PROVIDER, EMBEDDING_MODEL, EMBEDDING_ENDPOINT, EMBEDDING_DIMENSIONS, and HUGGINGFACE_TOKENIZER) are not defined in the referenced reusable workflow (./.github/workflows/reusable_python_example.yml). Please confirm whether these secrets are required. If they are intended for use in the reusable workflow, update its definition accordingly; otherwise, consider removing them.
🧰 Tools
🪛 actionlint (1.7.4)
63-63: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
64-64: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
65-65: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
66-66: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
67-67: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
68-68: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
69-69: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
70-70: secret "HUGGINGFACE_TOKENIZER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 YAMLlint (1.35.1)
[error] 70-70: no new line character at the end of file
(new-line-at-end-of-file)
Actionable comments posted: 0
🧹 Nitpick comments (6)
.github/workflows/test_ollama.yml (6)
21-24: Upgrade the Python setup action version.
The workflow currently uses `actions/setup-python@v4`, which is flagged as outdated. Upgrading to a newer version (e.g., `@v5`) can provide improved features, bug fixes, and security improvements.

```diff
-      - name: Set up Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: "3.x"
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
```
🧰 Tools
🪛 actionlint (1.7.4)
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
37-48: Review the Ollama readiness loop.
The loop that checks whether the Ollama service is ready works as expected, but consider adding more specific logging or error details on failure. In the long term, you might also evaluate using a dedicated "wait-for" action to reduce maintenance effort.
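One possible shape for a loop with clearer failure reporting (a sketch, assuming the `/api/tags` endpoint serves as the liveness probe; adjust to whatever endpoint the existing loop polls):

```yaml
      - name: Wait for Ollama to be ready
        run: |
          for i in $(seq 1 30); do
            if curl -sf http://localhost:11434/api/tags > /dev/null; then
              echo "Ollama became ready after $i attempt(s)."
              exit 0
            fi
            echo "Attempt $i/30: Ollama not ready yet, retrying in 2s..."
            sleep 2
          done
          echo "::error::Ollama did not become ready within 60 seconds."
          exit 1
```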
50-54: Enhance error handling for model pulling.
The two `curl` commands used to pull models assume successful execution. It would be beneficial to check the HTTP response status for each command to handle potential failures, ensuring that an error in pulling one model does not go unnoticed.
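A minimal sketch of a stricter pull step, using curl's `--fail` flag so a non-2xx response aborts the step (model names taken from the workflow under review):

```yaml
      - name: Pull models
        run: |
          set -euo pipefail
          for model in "ollama/llama3.2" "avr/sfr-embedding-mistral:latest"; do
            echo "Pulling $model..."
            curl --fail -X POST http://localhost:11434/api/pull \
              -d "{\"name\": \"$model\"}"
          done
```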
68-72: Upgrade the Python setup action in the test job.
Similar to the setup-ollama job, the run_simple_example_test job uses `actions/setup-python@v4`. Upgrading to `@v5` is recommended for consistency and to take advantage of the latest fixes and features.

```diff
-      - name: Set up Python
-        uses: actions/setup-python@v4
-        with:
-          python-version: "3.x"
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.x"
```
🧰 Tools
🪛 actionlint (1.7.4)
69-69: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
78-90: Consider reducing duplication in the readiness check.
Both jobs contain an almost identical loop to wait for the Ollama service. Extracting this logic into a reusable script or action might improve maintainability and reduce code duplication over time.
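If the duplication becomes a burden, the polling loop could live in a local composite action, roughly like this (a hypothetical .github/actions/wait-for-ollama/action.yml; the path and input names are illustrative):

```yaml
name: Wait for Ollama
description: Poll the Ollama API until it responds or a timeout is reached
inputs:
  endpoint:
    description: Base URL of the Ollama service
    default: "http://localhost:11434"
  max-attempts:
    description: Number of polling attempts, spaced 2 seconds apart
    default: "30"
runs:
  using: composite
  steps:
    - shell: bash
      run: |
        for i in $(seq 1 ${{ inputs.max-attempts }}); do
          curl -sf "${{ inputs.endpoint }}/api/tags" > /dev/null && exit 0
          sleep 2
        done
        echo "::error::Ollama at ${{ inputs.endpoint }} never became ready."
        exit 1
```

Each job could then replace its inline loop with a single `- uses: ./.github/actions/wait-for-ollama` step.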
92-105: Validate environment variables and file formatting.
- The environment variable block specifies several keys (e.g., `LLM_PROVIDER`, `EMBEDDING_ENDPOINT`, etc.). Please verify these variables are correctly defined and needed for your downstream process or reusable workflows.
- YAMLlint reports that there is no newline at the end of the file (line 105). Adding a newline at the end will improve POSIX compliance and prevent potential parsing issues.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 105-105: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
69-69: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 105-105: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (16)
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-13
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-15
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: docker-compose-test
Actionable comments posted: 1
🧹 Nitpick comments (3)
.github/workflows/test_ollama.yml (3)
82-90: Review duplicate dependency installation steps.
In the run_simple_example_test job, there are two separate steps for installing dependencies: one using Poetry (lines 75-81) and another using pip (lines 86-90). Please verify if both are required. If not, consider consolidating them (see the sketch below) to reduce redundancy and potential conflicts.
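If the pip step exists only to add a few extra packages, one way to collapse the two steps into one (a sketch, assuming ollama-python, requests, and transformers are the extras installed by the pip step):

```yaml
      - name: Install dependencies
        run: |
          poetry install --no-interaction --all-extras
          # Test-only extras; consider declaring these in pyproject.toml instead
          poetry run pip install ollama-python requests transformers
```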
39-50: Consolidate repetitive Ollama readiness checks.
Both the setup-ollama job (lines 39-50) and the run_simple_example_test job (lines 91-102) use an identical loop to wait for the Ollama service to be ready. It would improve maintainability to abstract this waiting logic into a reusable script or action, thereby reducing code duplication.
Also applies to: 91-102
118-118: Add a newline at the end of the file.
To comply with YAML linting standards, please ensure that the file ends with a newline character.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 118-118: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 118-118: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: run_simple_example_test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: test
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: docker-compose-test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (1)
.github/workflows/test_ollama.yml (1)
106-117: Verify secret definitions for environment variables.
The Run example test step sets several environment variables (such as `LLM_PROVIDER`, `LLM_ENDPOINT`, `LLM_MODEL`, `EMBEDDING_PROVIDER`, and others). Please ensure that these values are either defined as repository secrets or are consistent with any referenced reusable workflows. Consider revisiting their necessity if they conflict with secret definitions in other workflow files.
.github/workflows/test_ollama.yml (outdated)
```yaml
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.x"
```
🛠️ Refactor suggestion
Update the Python setup action version.
The actions/setup-python@v4 action used in the setup-ollama job is outdated. It is advisable to update this to a newer version (for example, actions/setup-python@v5) to ensure better compatibility and take advantage of recent improvements.
```diff
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
```
📝 Committable suggestion

```yaml
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.x"
```
🧰 Tools
🪛 actionlint (1.7.4)
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
Actionable comments posted: 0
♻️ Duplicate comments (1)
.github/workflows/test_ollama.yml (1)
21-24: 🛠️ Refactor suggestion
Update Setup Python Version in setup-ollama Job.
Please update `actions/setup-python@v4` to `actions/setup-python@v5` for consistency with the later job and to benefit from the latest improvements.

```diff
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
```
🧰 Tools
🪛 actionlint (1.7.4)
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🧹 Nitpick comments (2)
.github/workflows/test_ollama.yml (2)
82-90: Consolidate Dependency Installation Steps in run_simple_example_test.
The job currently installs dependencies in two separate steps—first via Poetry (lines 82-85) and then with pip (lines 86-90). Please verify if both are required. If not, consider consolidating or removing the redundant step(s) to improve clarity and efficiency.
124-124: Ensure a Newline at End of File.
The file appears to be missing a newline at the end. Adding a newline is a standard best practice to avoid potential issues with some tools.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 124-124: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 124-124: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: run_simple_example_test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: docker-compose-test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (1)
.github/workflows/test_ollama.yml (1)
111-123: Verify Environment Variable Consistency for the Example Test.
TheRun example teststep sets several environment variables (e.g.,LLM_PROVIDER,LLM_ENDPOINT,EMBEDDING_MODEL, etc.). Please ensure these values align with any expectations in the reusable workflow definitions or downstream scripts. Consistency here is crucial for a smooth integration and correct behavior of the test.
Actionable comments posted: 1
♻️ Duplicate comments (1)
.github/workflows/test_ollama.yml (1)
21-24: 🛠️ Refactor suggestion
Consider Updating the Python Setup Action Version
The setup-ollama job is still using `actions/setup-python@v4`, whereas the run_simple_example_test job uses `actions/setup-python@v5`. For consistency and improved compatibility, update this step to use `actions/setup-python@v5`.
🧰 Tools
🪛 actionlint (1.7.4)
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🧹 Nitpick comments (3)
.github/workflows/test_ollama.yml (3)
52-56: Consider Adding Error Handling in the Pull Models Step
While the model pull commands are succinct, it might be beneficial to verify the response status of the `curl` commands to log or handle failures if the pull requests are unsuccessful.

86-91: Duplicate Dependency Installation Steps Detected
There are two "Install dependencies" steps: one using `poetry install --no-interaction --all-extras` and another that upgrades pip and installs additional packages with Poetry. Consider consolidating these to streamline the setup process. If these extra dependencies (i.e. `ollama-python`, `requests`, and `transformers`) are required only for this job, ensure they are declared in the project's dependency files or use `poetry add` for clarity.

105-109: Docker Logs Dump Step is Useful for Diagnostics
Dumping Docker logs helps with troubleshooting if the service fails. Consider adding error handling if no container ID is returned to prevent the command from failing unexpectedly.
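A guarded version of the log dump might look like this (a sketch; the ancestor filter assumes the container was started from the ollama/ollama image):

```yaml
      - name: Dump Ollama container logs
        if: always()
        run: |
          container_id=$(docker ps -aq --filter "ancestor=ollama/ollama" | head -n 1)
          if [ -n "$container_id" ]; then
            docker logs "$container_id"
          else
            echo "No Ollama container found; skipping log dump."
          fi
```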
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 125-125: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: run_simple_example_test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_eval_framework_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: docker-compose-test
🔇 Additional comments (13)
.github/workflows/test_ollama.yml (13)
1-7: Workflow Trigger Configuration Looks Good
The workflow trigger block correctly enables manual dispatch (via `workflow_dispatch`) and pull request triggers for the specified event types.

8-16: Service Container Setup is Correct
The setup-ollama job efficiently uses a service container with the `ollama/ollama` image and correctly maps port 11434.

17-20: Repository Checkout Step is Standard
The checkout step uses the recommended `actions/checkout@v4` action, ensuring the repository is properly checked out for the job's subsequent steps.

25-33: Cache Pip Packages Configuration is Solid
The caching step appropriately uses the hash of the requirements file to generate a cache key, which should positively impact subsequent runs.

34-38: Install Dependencies Step is Clear
The commands upgrade pip and install `ollama-python` and `requests` as needed. Ensure these are the only dependencies required for the initial environment setup.

39-51: Ollama Readiness Check is Well-Implemented
The retry loop for checking Ollama's readiness (up to 30 attempts with a 2-second delay) is robust and ensures the service is up before proceeding.

57-65: Job run_simple_example_test Configuration is Well-Defined
The job's dependency on setup-ollama and its service container configuration are clearly specified, ensuring a consistent testing environment.

66-69: Repository Checkout Step is Consistent
The checkout step in this job follows best practices and ensures the codebase is available for subsequent steps.

70-74: Updated Python Setup Step is in Use
Using `actions/setup-python@v5` with Python version `3.12.x` aligns with the latest improvements and ensures compatibility.

75-81: Poetry Installation Step is Appropriately Configured
The action snok/[email protected] with virtualenv settings is a good choice to manage the Python environment consistently.

82-85: Poetry Dependency Installation is Proper
Executing `poetry install --no-interaction --all-extras` helps create a reproducible environment with all required extras.

92-104: Ollama Readiness Check in Test Job is Robust
The retry mechanism in this job mirrors the earlier implementation and reliably confirms that the Ollama service is up before executing tests.

111-125: Verify Consistency of Environment Variables and Secrets
The environment variables set for the test (mixing hardcoded values and secrets like `OPENAI_API_KEY` and `GRAPHISTRY_USERNAME`) should be verified against the expectations of the reusable workflow (./.github/workflows/reusable_python_example.yml). Confirm whether these variables are required or if adjustments/removals are needed for consistency.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 125-125: no new line character at the end of file
(new-line-at-end-of-file)
Actionable comments posted: 0
🧹 Nitpick comments (4)
.github/workflows/test_ollama.yml (4)
52-56: Enhance Error Handling in the Pull Models Step.
The curl commands pull the required models but do not explicitly enforce failure on error. Consider adding a `set -e` at the beginning of the run block so the step fails immediately if one of the curl commands fails.

```diff
-        run: |
-          curl -X POST http://localhost:11434/api/pull -d '{"name": "ollama/llama3.2"}'
-          curl -X POST http://localhost:11434/api/pull -d '{"name": "avr/sfr-embedding-mistral:latest"}'
+        run: |
+          set -e
+          curl -X POST http://localhost:11434/api/pull -d '{"name": "ollama/llama3.2"}'
+          curl -X POST http://localhost:11434/api/pull -d '{"name": "avr/sfr-embedding-mistral:latest"}'
```
86-90: Consider Consolidating Dependency Installation Steps.
There are two consecutive dependency installation steps (one using a general install and another installing specific packages). Consolidating these steps could simplify the workflow and reduce potential redundancy.
91-103: Repeated Polling Logic Could Be DRYed Up.
The "Wait for Ollama to be ready" step repeats the same polling logic as in the setup-ollama job. If this logic is used elsewhere, consider abstracting it into a reusable action or script to adhere to DRY principles.
125-125: Add a Newline at End of File.
YAML best practices require a trailing newline at the end of the file. Please add a newline after line 125.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 125-125: no new line character at the end of file
(new-line-at-end-of-file)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 125-125: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (28)
- GitHub Check: run_simple_example_test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: test
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (14)
.github/workflows/test_ollama.yml (14)
1-7: Workflow Metadata and Trigger Configuration are Clear.
The workflow name and the trigger events (manual dispatch and pull_request with specified types) are well defined and clear.
9-16: Ollama Service Container Configuration Looks Good.
The service container definition for Ollama (with the specified image and port mapping) is correctly configured.
17-20: Repository Checkout Step is Correct.
The checkout step uses a stable version of the checkout action and is appropriately placed.
21-24: Upgrade Python Setup Action for Consistency.
In this step the workflow uses `actions/setup-python@v4`, while the subsequent job uses version v5. For consistency and improved compatibility, consider updating this step to use `actions/setup-python@v5`.

```diff
-        uses: actions/setup-python@v4
+        uses: actions/setup-python@v5
```
🧰 Tools
🪛 actionlint (1.7.4)
22-22: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
26-32: Cache Pip Packages Step is Well Configured.
The caching configuration uses a hash of the requirements file to create a robust cache key.
34-38: Dependency Installation in setup-ollama is Straightforward.
The step correctly upgrades pip and installs the required packages (`ollama-python` and `requests`).
39-50: Polling Loop for Ollama Readiness Works as Intended.
The loop correctly retries up to 30 times with a 2-second interval and exits appropriately when Ollama is ready or fails to start.
57-65: run_simple_example_test Job Definition is Consistent.
The job dependencies, service configuration, and runner settings are properly defined and consistent with the previous job.
67-69: Repository Checkout in run_simple_example_test is Correct.
The checkout step is correctly reused in this job.
70-74: Python Setup in run_simple_example_test is Updated.
This step correctly uses `actions/setup-python@v5` and specifies a recent Python version (3.12.x).
75-81: Poetry Installation is Properly Configured.
The step installing Poetry with the specified options is adequate and enables in-project virtual environments with parallel installers.
82-85: Initial Poetry Dependency Installation is Correct.
Running `poetry install --no-interaction --all-extras` efficiently installs all dependencies defined in the project.
104-108: Docker Logs Dumping Step is Useful for Debugging.
This step runs commands to list running containers and dump the logs for the Ollama container, which is helpful for troubleshooting.
110-124: Run Example Test Step - Verify Environment Variable Consistency.
The environment variables provided for the test (such as `LLM_API_KEY`, `LLM_PROVIDER`, etc.) appear to be correctly set. Please ensure these values (and the hardcoded ones, like "ollama") match your repository's secret definitions and expected test configurations.
Actionable comments posted: 2
🧹 Nitpick comments (2)
.github/workflows/test_ollama.yml (2)
55-96: Reduce duplication by using a reusable workflow.
The setup steps (Python, Poetry, Ollama readiness check) are duplicated from the setup-ollama job. Consider extracting these common steps into a reusable workflow.

109-117: Make environment variables configurable.
The environment variables for LLM and embedding configurations are hardcoded. Consider making them configurable through workflow inputs or repository secrets for better flexibility.
```diff
 env:
   LLM_API_KEY: "ollama"
-  LLM_PROVIDER: "ollama"
-  LLM_ENDPOINT: "http://127.0.0.1:11434"
-  LLM_MODEL: "ollama/llama3.2"
-  EMBEDDING_PROVIDER: "ollama"
-  EMBEDDING_MODEL: "avr/sfr-embedding-mistral:latest"
-  EMBEDDING_ENDPOINT: "http://localhost:11434/api/embeddings"
-  EMBEDDING_DIMENSIONS: "4096"
-  HUGGINGFACE_TOKENIZER: "Salesforce/SFR-Embedding-Mistral"
+  LLM_PROVIDER: ${{ inputs.llm_provider || 'ollama' }}
+  LLM_ENDPOINT: ${{ inputs.llm_endpoint || 'http://127.0.0.1:11434' }}
+  LLM_MODEL: ${{ inputs.llm_model || 'ollama/llama3.2' }}
+  EMBEDDING_PROVIDER: ${{ inputs.embedding_provider || 'ollama' }}
+  EMBEDDING_MODEL: ${{ inputs.embedding_model || 'avr/sfr-embedding-mistral:latest' }}
+  EMBEDDING_ENDPOINT: ${{ inputs.embedding_endpoint || 'http://localhost:11434/api/embeddings' }}
+  EMBEDDING_DIMENSIONS: ${{ inputs.embedding_dimensions || '4096' }}
+  HUGGINGFACE_TOKENIZER: ${{ inputs.huggingface_tokenizer || 'Salesforce/SFR-Embedding-Mistral' }}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
.github/workflows/test_ollama.yml (1 hunks)
cognee/infrastructure/llm/ollama/adapter.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
cognee/infrastructure/llm/ollama/adapter.py
19-19: Redefinition of unused api_version from line 1
(F811)
🪛 GitHub Actions: lint | ruff lint
cognee/infrastructure/llm/ollama/adapter.py
[error] 19-19: Ruff: Redefinition of unused api_version from line 1. Remove definition: api_version.
🪛 GitHub Actions: lint | ruff format
cognee/infrastructure/llm/ollama/adapter.py
[error] 1-1: Ruff formatting check failed. The file would be reformatted.
🪛 YAMLlint (1.35.1)
.github/workflows/test_ollama.yml
[error] 118-118: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: windows-latest
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: docker-compose-test
- GitHub Check: setup-ollama
🔇 Additional comments (3)
cognee/infrastructure/llm/ollama/adapter.py (1)
75-103: LGTM! The implementation correctly handles image encoding and follows the OpenAI API format for image analysis.
.github/workflows/test_ollama.yml (2)
118-118: Add newline at end of file.
Add a newline character at the end of the file to comply with YAML formatting standards.
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 118-118: no new line character at the end of file
(new-line-at-end-of-file)
52-53: ❓ Verification inconclusive
Verify Ollama model names.
The model names used in the Pull Models step might be incorrect:
- `llama3.2` doesn't seem to be a standard Ollama model name
- The embedding model uses a non-standard prefix `avr/`
🏁 Script executed:

```bash
#!/bin/bash
# Description: Check if the models exist in Ollama's model library
curl -s https://ollama.ai/library | grep -E "llama3.2|avr/sfr-embedding-mistral"
```

Length of output: 82
Manual Verification Required: Confirm Ollama Model Names
The current Pull Models step uses the following model names:
- "llama3.2" may not be a standard Ollama model name.
- The embedding model "avr/sfr-embedding-mistral:latest" uses an unusual prefix.

The verification script produced no output, so it's inconclusive whether these names exist in Ollama's model library. Please manually verify these model names against Ollama's official library or documentation and update the workflow if necessary.
```diff
@@ -1,20 +1,28 @@
 from sys import api_version
```
Remove unused import.
The api_version import from sys is not used and conflicts with the class attribute. Remove this import to fix the redefinition error.
```diff
-from sys import api_version
```
🧰 Tools
🪛 GitHub Actions: lint | ruff format
[error] 1-1: Ruff formatting check failed. The file would be reformatted.
```python
def create_transcript(self, input):
    """Generate a audio transcript from a user query."""

    if not os.path.isfile(input):
        raise FileNotFoundError(f"The file {input} does not exist.")

    # with open(input, 'rb') as audio_file:
    #     audio_data = audio_file.read()

    transcription = self.aclient.transcription(
        model=self.transcription_model,
        file=Path(input),
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_retries=self.MAX_RETRIES,
    )

    return transcription
```
Fix implementation issues in create_transcript.
- The `transcription_model` attribute is not defined in the class.
- There is commented-out code that should be removed or implemented.

Apply this diff to fix the issues:
```diff
 def create_transcript(self, input):
     """Generate a audio transcript from a user query."""
     if not os.path.isfile(input):
         raise FileNotFoundError(f"The file {input} does not exist.")
-    # with open(input, 'rb') as audio_file:
-    #     audio_data = audio_file.read()
-
     transcription = self.aclient.transcription(
-        model=self.transcription_model,
+        model=self.model,  # Use the model defined in __init__
         file=Path(input),
         api_key=self.api_key,
         api_base=self.endpoint,
         api_version=self.api_version,
         max_retries=self.MAX_RETRIES,
     )
     return transcription
```
📝 Committable suggestion
```python
def create_transcript(self, input):
    """Generate a audio transcript from a user query."""
    if not os.path.isfile(input):
        raise FileNotFoundError(f"The file {input} does not exist.")

    transcription = self.aclient.transcription(
        model=self.model,  # Use the model defined in __init__
        file=Path(input),
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_retries=self.MAX_RETRIES,
    )
    return transcription
```
Actionable comments posted: 2
♻️ Duplicate comments (1)
cognee/infrastructure/llm/ollama/adapter.py (1)
79-109: ⚠️ Potential issue
Update Image Handling to Meet Ollama API Requirements
The current implementation uses OpenAI's multimodal message format, but Ollama requires a different format for image processing.
According to verification, Ollama expects:
- A raw base64 string without the `data:image/jpeg;base64,` prefix
- Images to be passed via an `images` parameter in the chat message

Apply this diff to fix the implementation:

```diff
 with open(input_file, "rb") as image_file:
     encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

 response = self.aclient.chat.completions.create(
     model=self.model,
     messages=[
         {
             "role": "user",
-            "content": [
-                {"type": "text", "text": "What's in this image?"},
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
-                },
-            ],
+            "content": "What's in this image?",
+            "images": [encoded_image],
         }
     ],
-    max_tokens=300,
+    max_tokens=self.max_tokens,
+    max_retries=self.MAX_RETRIES,
 )
```
🧹 Nitpick comments (1)
cognee/infrastructure/llm/ollama/adapter.py (1)
67-71: Make transcription model configurable
Instead of hardcoding the "whisper-1" model, make it a configurable parameter either in the constructor or as a method parameter.

```diff
-transcription = self.aclient.audio.transcriptions.create(
-    model="whisper-1",  # Ensure the correct model for transcription
-    file=audio_file,
-    language="en",
-)
+transcription = self.aclient.audio.transcriptions.create(
+    model=self.transcription_model if hasattr(self, "transcription_model") else "whisper-1",
+    file=audio_file,
+    language="en",
+    max_retries=self.MAX_RETRIES,
+)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/workflows/test_ollama.yml (1 hunks)
- cognee/infrastructure/llm/ollama/adapter.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/test_ollama.yml
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_notebook_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: run_simple_example_test
- GitHub Check: docker-compose-test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (6)
cognee/infrastructure/llm/ollama/adapter.py (6)
5-7: Good addition of required imports. The imports for AsyncOpenAI, base64, and os are appropriate for the new functionality being added.

15-29: Added API version parameter correctly. Good addition of the optional api_version parameter. This improves the flexibility of the adapter.

31-34: Properly updated to AsyncOpenAI. The switch to AsyncOpenAI is correctly implemented, which is essential for async compatibility.

41-58: Improved error handling for structured output. The added error handling for the acreate_structured_output method is a significant improvement, ensuring callable methods and validating response formats.

60-77: Verify Ollama compatibility with OpenAI's transcription API. The implementation assumes Ollama supports OpenAI's audio transcription API and the "whisper-1" model, which might not be true.

```bash
#!/bin/bash
# Check if Ollama supports audio transcription API
curl -s "https://raw.githubusercontent.com/ollama/ollama/main/docs/api.md" | grep -i "audio" || echo "No audio transcription API found in Ollama documentation"

# Check if Ollama supports the whisper-1 model
curl -s "https://raw.githubusercontent.com/ollama/ollama/main/docs/modelfile.md" | grep -i "whisper" || echo "No whisper model found in Ollama documentation"
```

45-52: Use api_version parameter in API calls. The api_version parameter is stored in the instance but not utilized in API calls.

```diff
 response = await self.aclient.chat.completions.create(
     model=self.model,
     messages=[
         {"role": "system", "content": system_prompt},
         {"role": "user", "content": text_input},
     ],
     max_tokens=self.max_tokens,
+    api_version=self.api_version,
 )
```
```python
    max_tokens=300,
)
```
🛠️ Refactor suggestion
Use instance variable for max_tokens
Replace the hardcoded max_tokens value with the instance variable for consistency.
```diff
-    max_tokens=300,
+    max_tokens=self.max_tokens,
```

📝 Committable suggestion
```python
    max_tokens=self.max_tokens,
)
```
```python
def __init__(
```
🛠️ Refactor suggestion
Utilize MAX_RETRIES constant in API calls
You've defined a MAX_RETRIES constant, but it's not being used in any of the API calls. To make your code more resilient, add this parameter to all API calls.
```diff
 transcription = self.aclient.audio.transcriptions.create(
     model="whisper-1",  # Ensure the correct model for transcription
     file=audio_file,
     language="en",
+    max_retries=self.MAX_RETRIES,
 )
```

📝 Committable suggestion
```python
MAX_RETRIES = 5

transcription = self.aclient.audio.transcriptions.create(
    model="whisper-1",  # Ensure the correct model for transcription
    file=audio_file,
    language="en",
    max_retries=self.MAX_RETRIES,
)
```
Actionable comments posted: 3
♻️ Duplicate comments (1)
cognee/infrastructure/llm/ollama/adapter.py (1)
78-78: 🛠️ Refactor suggestion: Use instance variable for max_tokens
Replace the hardcoded max_tokens value with the instance variable for consistency.

```diff
-    max_tokens=300,
+    max_tokens=self.max_tokens,
```
🧹 Nitpick comments (2)
cognee/infrastructure/llm/ollama/adapter.py (2)
43-51: Add max_retries parameter to API call. The MAX_RETRIES constant is defined but not used in the API calls. Add it to make the code more resilient.

```diff
 response = await self.client.chat.completions.create(
     model=self.model,
     messages=[
         {"role": "system", "content": system_prompt},
         {"role": "user", "content": text_input},
     ],
     max_tokens=self.max_tokens,
     response_model=response_model,  # This works after `instructor.patch(self.client)`
+    max_retries=self.MAX_RETRIES,
 )
```
55-86: Implement async version of transcribe_image method. The class is now using the AsyncOpenAI client, but the new transcribe_image method is synchronous. Consider adding an async version for consistency.

```python
async def atranscribe_image(self, input_file: str) -> str:
    """Transcribe content from an image using base64 encoding (async version)."""
    if not os.path.isfile(input_file):
        raise FileNotFoundError(f"The file {input_file} does not exist.")

    with open(input_file, "rb") as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

    try:
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "What's in this image?",
                    "images": [encoded_image],
                }
            ],
            max_tokens=self.max_tokens,
            max_retries=self.MAX_RETRIES,
        )

        # Ensure response is valid before accessing .choices[0].message.content
        if not hasattr(response, "choices") or not response.choices:
            raise ValueError("Image transcription failed. No response received.")

        return response.choices[0].message.content
    except Exception as e:
        raise Exception(f"Error transcribing image: {str(e)}")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cognee/infrastructure/llm/ollama/adapter.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: test
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: windows-latest
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: run_simple_example_test
- GitHub Check: docker-compose-test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (1)
cognee/infrastructure/llm/ollama/adapter.py (1)
64-79: Details
❌ Incorrect review comment
Include api_version in API calls if needed.
The api_version parameter is added to `__init__` and stored in the instance but not used in the API calls where it might be relevant.
If the API supports it, consider adding:
```diff
 response = self.client.chat.completions.create(
     model=self.model,
     messages=[...],
     max_tokens=self.max_tokens,
+    api_version=self.api_version,
 )
```
🏁 Script executed:
```bash
#!/bin/bash
# Check if OpenAI client API supports passing api_version to create method
python3 -c "from openai import AsyncOpenAI; help(AsyncOpenAI.chat.completions.create)" 2>&1 | grep -i "api_version" || echo "api_version parameter not found in method signature"
```

Length of output: 223

The API method does not support an `api_version` parameter, so no changes are needed. Based on the output from the verification script, the OpenAI client's `AsyncOpenAI.chat.completions.create` method does not include an `api_version` parameter, which indicates that our current implementation is correct. The suggestion to add `api_version` is therefore not applicable.

Likely an incorrect or invalid review comment.
```python
with open(input_file, "rb") as image_file:
    encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

response = self.aclient.chat.completions.create(
```
Fix undefined client reference.
The code uses self.aclient, but only self.client is defined in the constructor. This will cause a runtime error.
```diff
-response = self.aclient.chat.completions.create(
+response = self.client.chat.completions.create(
```

📝 Committable suggestion
```python
response = self.client.chat.completions.create(
```
```diff
 model=self.model,
 messages=[
     {
         "role": "user",
-        "content": f"Use the given format to extract information from the following input: {text_input}",
-    },
-    {
-        "role": "system",
-        "content": system_prompt,
-    },
+        "content": [
+            {"type": "text", "text": "What's in this image?"},
+            {
+                "type": "image_url",
+                "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
+            },
+        ],
+    }
 ],
```
💡 Verification agent
🧩 Analysis chain
Update image handling to meet Ollama API requirements.
The current implementation assumes Ollama supports OpenAI's multimodal message format, but according to previous verification, Ollama requires different formatting for image processing.
```diff
-response = self.client.chat.completions.create(
+response = self.client.chat.completions.create(
     model=self.model,
     messages=[
         {
             "role": "user",
-            "content": [
-                {"type": "text", "text": "What's in this image?"},
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
-                },
-            ],
+            "content": "What's in this image?",
+            "images": [encoded_image],  # Ollama expects raw base64 strings without prefix
         }
     ],
```

🌐 Web query:
Does Ollama API support image analysis with content structure like OpenAI or does it require different format?
💡 Result:
Ollama's API supports image analysis with structured content output capabilities similar to OpenAI, though implementation details differ. Here's a technical breakdown:
Image Analysis Capabilities
Ollama supports vision models like LLaVA (v1.6) that enable:
- Object detection
- Text recognition in images
- Detailed scene description
- Chart/diagram interpretation [9]
The API accepts images through:
- File paths in CLI (`./image.jpg`)
- Base64-encoded data in programmatic calls
- Direct image URLs [9]

```python
# Python example using Ollama's vision API
import ollama

response = ollama.chat(
    model='llava:13b',
    messages=[{
        'role': 'user',
        'content': 'Analyze this security footage:',
        'images': ['/path/to/image.jpg']
    }]
)
print(response['message']['content'])
```

Structured Output Formatting
Ollama introduced JSON schema constraints in December 2024:
```python
# Structured output example with image analysis
response = ollama.generate(
    model='llava:34b',
    prompt="Extract vehicle details from traffic cam image",
    format={
        "type": "object",
        "properties": {
            "make": {"type": "string"},
            "color": {"type": "string"},
            "license_plate": {"type": "string"}
        }
    }
)
```

Key differences from OpenAI:
- Requires explicit JSON schema definition
- Uses native `format` parameter instead of separate JSON mode flag
- Supports vision models through multimodal LLMs [3][7]
Implementation Considerations
1. Model Requirements: Must use vision-capable models like `llava:7b`, `llava:13b`, `llava:34b`, `bakllava`, or `moondream` [9]
2. Resolution Handling:
   - Supports up to 4MP images (4x previous versions)
   - Automatic downsampling for large images [9]
3. API Compatibility:

```python
# OpenAI-style implementation with Ollama
from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1',
    api_key='ollama'
)
response = client.chat.completions.create(
    model="llava:13b",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this medical scan"},
            {"type": "image_url", "image_url": {"url": "base64_encoded_image"}}
        ]
    }]
)
```
This compatibility layer allows direct porting of OpenAI-based implementations [2][8].
For production systems, Ollama recommends combining vision models with structured output constraints to ensure reliable parsing of image analysis results [4][7].
Citations:
- 1: https://www.restack.io/p/ollama-answer-image-recognition-cat-ai
- 2: https://ollama.com/blog/openai-compatibility
- 3: https://ollama.com/blog/structured-outputs
- 4: https://python.useinstructor.com/examples/ollama/
- 5: https://github.com/the-smart-home-maker/hass_ollama_image_analysis
- 6: https://community.home-assistant.io/t/llm-vision-let-home-assistant-see/729241
- 7: https://github.com/ollama/ollama/blob/main/docs/api.md?plain=1
- 8: https://www.cohorte.co/blog/a-comprehensive-guide-to-ollama
- 9: https://ollama.com/blog/vision-models
- 10: https://www.restack.io/p/ollama-answer-comparison-cat-ai
Action Required: Update Image Payload Format for Ollama API Compatibility
The current implementation uses OpenAI’s multimodal message format. Based on updated Ollama API guidelines, you should remove the nested content structure and instead provide the query as a plain text string along with an “images” field containing the raw base64-encoded string (without the MIME data prefix). This will ensure proper image processing with Ollama’s vision models.
- In file cognee/infrastructure/llm/ollama/adapter.py (lines 64–76), update the message payload as shown below.
- Remove the image encapsulation within the “content” array.
- Supply the encoded image directly in the "images" list without the `data:image/jpeg;base64,` prefix.
```diff
-response = self.client.chat.completions.create(
-    model=self.model,
-    messages=[
-        {
-            "role": "user",
-            "content": [
-                {"type": "text", "text": "What's in this image?"},
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
-                },
-            ],
-        }
-    ],
+response = self.client.chat.completions.create(
+    model=self.model,
+    messages=[
+        {
+            "role": "user",
+            "content": "What's in this image?",
+            "images": [encoded_image],  # Ollama expects raw base64 strings without the MIME prefix
+        }
+    ],
```

Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 5
🧹 Nitpick comments (2)
cognee/infrastructure/llm/ollama/adapter.py (2)
1-8: Import added but not used. The `json` module is imported on line 8 but never used in this file. It's best to remove unused imports to keep the code clean.

```diff
-import json
```

14-14: MAX_RETRIES constant is defined but not used. You've defined a MAX_RETRIES constant but don't use it in any of the API calls. To improve resiliency, add this parameter to your API requests.

For example, in your API calls:

```diff
 response = await self.client.chat.completions.create(
     model=self.model,
     messages=[...],
     max_tokens=self.max_tokens,
     response_model=response_model,
+    max_retries=self.MAX_RETRIES,
 )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
cognee/infrastructure/llm/ollama/adapter.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (29)
- GitHub Check: run_notebook_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: Test on macos-13
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: run_simple_example_test
- GitHub Check: docker-compose-test
```python
def transcribe_image(self, input_file: str) -> str:
    """Transcribe content from an image using base64 encoding."""

    if not os.path.isfile(input_file):
        raise FileNotFoundError(f"The file {input_file} does not exist.")

    with open(input_file, "rb") as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode("utf-8")
```
Fix client reference and add error handling in transcribe_image.
The transcribe_image method uses the base64 encoding correctly, but it should use self.client instead of self.aclient and include error handling.
```diff
 def transcribe_image(self, input_file: str) -> str:
     """Transcribe content from an image using base64 encoding."""
     if not os.path.isfile(input_file):
         raise FileNotFoundError(f"The file {input_file} does not exist.")
-    with open(input_file, "rb") as image_file:
-        encoded_image = base64.b64encode(image_file.read()).decode("utf-8")
+    try:
+        with open(input_file, "rb") as image_file:
+            encoded_image = base64.b64encode(image_file.read()).decode("utf-8")
```

Committable suggestion skipped: line range outside the PR's diff.
```python
    model=self.model,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": text_input},
    ],
    max_tokens=self.max_tokens,
    response_model=response_model,
)

return response_model(response)
```
Fix structured output response handling.
There appears to be a logic error in handling the response. The create method is already being patched with instructor to return a structured object, so wrapping the response again in the response_model will cause an error.
```diff
-    return response_model(response)
+    return response
```

📝 Committable suggestion
```python
response = await self.client.chat.completions.create(
    model=self.model,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": text_input},
    ],
    max_tokens=self.max_tokens,
    response_model=response_model,
)

return response
```
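For context, a minimal sketch of why the extra wrapping fails, assuming the client is patched with `instructor` as elsewhere in this adapter; the `ExamplePerson` model and the model name are hypothetical, purely for illustration:

```python
# Illustrative sketch only; ExamplePerson and "llama3" are made up here.
import instructor
from openai import AsyncOpenAI
from pydantic import BaseModel


class ExamplePerson(BaseModel):
    name: str
    age: int


# instructor.patch() teaches create() to return validated Pydantic objects.
client = instructor.patch(
    AsyncOpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
)


async def extract_person(text: str) -> ExamplePerson:
    # The patched create() already returns an ExamplePerson instance,
    # so wrapping it again as ExamplePerson(response) would raise.
    return await client.chat.completions.create(
        model="llama3",
        messages=[{"role": "user", "content": text}],
        response_model=ExamplePerson,
    )
```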
🛠️ Refactor suggestion
Add proper error handling for the entire API call.
While you've added validation for the response, you should wrap the entire API call in a try-except block to handle API exceptions properly.
```diff
+    except Exception as e:
+        raise Exception(f"Error transcribing image: {str(e)}")
+
     return response.choices[0].message.content
```

Committable suggestion skipped: line range outside the PR's diff.
| """Generate an audio transcript from a user query.""" | ||
|
|
||
| if not os.path.isfile(input_file): | ||
| raise FileNotFoundError(f"The file {input_file} does not exist.") | ||
|
|
||
| with open(input_file, "rb") as audio_file: | ||
| transcription = self.aclient.audio.transcriptions.create( | ||
| model="whisper-1", # Ensure the correct model for transcription | ||
| file=audio_file, | ||
| language="en", | ||
| ) | ||
|
|
||
| # Ensure the response contains a valid transcript | ||
| if not hasattr(transcription, "text"): | ||
| raise ValueError("Transcription failed. No text returned.") | ||
|
|
||
| return transcription.text | ||
|
|
💡 Verification agent
🧩 Analysis chain
Fix issues in the audio transcription implementation.
This implementation has multiple critical issues:
- It uses `self.aclient` instead of `self.client`
- Based on previous reviews, Ollama may not support OpenAI's audio transcription API
- There's no error handling for the API call
```diff
 def create_transcript(self, input_file: str) -> str:
     """Generate an audio transcript from a user query."""
     if not os.path.isfile(input_file):
         raise FileNotFoundError(f"The file {input_file} does not exist.")
-    with open(input_file, "rb") as audio_file:
-        transcription = self.aclient.audio.transcriptions.create(
-            model="whisper-1",  # Ensure the correct model for transcription
-            file=audio_file,
-            language="en",
-        )
-
-    # Ensure the response contains a valid transcript
-    if not hasattr(transcription, "text"):
-        raise ValueError("Transcription failed. No text returned.")
-
-    return transcription.text
+    try:
+        with open(input_file, "rb") as audio_file:
+            transcription = self.client.audio.transcriptions.create(
+                model="whisper-1",  # Ensure the correct model for transcription
+                file=audio_file,
+                language="en",
+                max_retries=self.MAX_RETRIES,
+            )
+
+        # Ensure the response contains a valid transcript
+        if not hasattr(transcription, "text"):
+            raise ValueError("Transcription failed. No text returned.")
+
+        return transcription.text
+    except Exception as e:
+        raise Exception(f"Error transcribing audio: {str(e)}")
```

🌐 Web query:
Does Ollama support OpenAI-compatible audio transcription API?
💡 Result:
Ollama does not natively support OpenAI-compatible audio transcription APIs like Whisper. However, developers can combine Ollama with OpenAI's Whisper API or third-party tools to achieve this functionality[1][3][7]. Here's how it works:
-
Audio Processing Workflow
Typical implementations use OpenAI's Whisper for speech-to-text:

```javascript
// Transcribe audio using OpenAI's API
const transcription = await openai.audio.transcriptions.create({
  model: 'whisper-1',
  file: audioFile,
});
```
Then pass the text to Ollama for LLM processing[1][4].
-
Ollama's OpenAI Compatibility
While Ollama supports OpenAI-style chat completions API for text generation [2][5][8]:

```javascript
// Using Ollama with OpenAI-compatible chat API
const response = await generateText({
  model: ollama('llama3.1'),
  prompt: transcribedText,
});
```
This compatibility doesn't extend to audio endpoints[2][5].
-
Alternative Implementations
Some projects combine offline tools with Ollama:
- WhisperLive for real-time transcription [7]
- Ollama-Voice stack (Whisper + Ollama + pyttsx3)[10]
- Custom integrations using Nvidia GPUs[3]
For direct audio API support, developers currently need to use OpenAI's services or third-party solutions alongside Ollama's text capabilities[1][3][7]. The native Ollama API focuses on text generation through its OpenAI-compatible chat endpoints[2][5][8].
Citations:
- 1: https://www.thisdot.co/blog/how-to-build-an-ai-assistant-with-openai-vercel-ai-sdk-and-ollama-with-next
- 2: https://ollama.com/blog/openai-compatibility
- 3: https://www.restack.io/p/ollama-whisperx-answer-cat-ai
- 4: https://dev.to/josmel/automating-it-interviews-with-ollama-and-audio-capabilities-in-python-545o
- 5: https://bestofai.com/article/openai-compatibility-ollama-blog
- 6: Realtime API like OpenAI (full fledged voice to voice integrations) ollama/ollama#7514
- 7: https://www.singleapi.net/2024/09/23/offline-whisper-audio-transcription-and-ollama-voice-assistant/
- 8: https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html
- 9: https://docs.openwebui.com/features/
- 10: https://github.com/maudoin/ollama-voice
Critical: Audio Transcription Implementation Requires Revision
The recent changes correctly update the client reference (from self.aclient to self.client) and add error handling with a retry mechanism. However, the core issue still remains: Ollama does not natively support OpenAI’s audio transcription API (e.g., the Whisper endpoint). To ensure reliable functionality, please consider one of the following improvements:
- Integrate a Supported Transcription Service: If audio transcription is required, use a dedicated service (such as OpenAI’s Whisper API or another third-party solution) rather than relying on Ollama’s endpoints.
- Refactor the Workflow: Separate the transcription step from the LLM workflow. This will help avoid confusion and prevent the use of unsupported API calls in the Ollama adapter.
Affected Code Location:
cognee/infrastructure/llm/ollama/adapter.py (lines 52-69)
📝 Committable suggestion
```python
def create_transcript(self, input_file: str) -> str:
    """Generate an audio transcript from a user query."""
    if not os.path.isfile(input_file):
        raise FileNotFoundError(f"The file {input_file} does not exist.")
    try:
        with open(input_file, "rb") as audio_file:
            transcription = self.client.audio.transcriptions.create(
                model="whisper-1",  # Ensure the correct model for transcription
                file=audio_file,
                language="en",
                max_retries=self.MAX_RETRIES,
            )

        # Ensure the response contains a valid transcript
        if not hasattr(transcription, "text"):
            raise ValueError("Transcription failed. No text returned.")

        return transcription.text
    except Exception as e:
        raise Exception(f"Error transcribing audio: {str(e)}")
```
```diff
 model=self.model,
 messages=[
     {
         "role": "user",
-        "content": f"Use the given format to extract information from the following input: {text_input}",
-    },
-    {
-        "role": "system",
-        "content": system_prompt,
-    },
+        "content": [
+            {"type": "text", "text": "What's in this image?"},
+            {
+                "type": "image_url",
+                "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
+            },
+        ],
+    }
 ],
-max_retries=5,
-response_model=response_model,
+max_tokens=300,
 )
```
💡 Verification agent
🧩 Analysis chain
Fix client reference and update image payload format for Ollama compatibility.
This implementation has several issues:
- It uses `self.aclient` instead of `self.client`
- The format for image processing is incompatible with Ollama's API
- It uses a hardcoded `max_tokens` value instead of the class variable
- There's no error handling for the API call
```diff
-response = self.aclient.chat.completions.create(
-    model=self.model,
-    messages=[
-        {
-            "role": "user",
-            "content": [
-                {"type": "text", "text": "What's in this image?"},
-                {
-                    "type": "image_url",
-                    "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
-                },
-            ],
-        }
-    ],
-    max_tokens=300,
-)
+response = self.client.chat.completions.create(
+    model=self.model,
+    messages=[
+        {
+            "role": "user",
+            "content": "What's in this image?",
+            "images": [encoded_image],  # Ollama expects raw base64 strings
+        }
+    ],
+    max_tokens=self.max_tokens,
+    max_retries=self.MAX_RETRIES,
+)
```

🌐 Web query:
What is the correct format for sending base64 encoded images to Ollama API?
💡 Result:
To send base64-encoded images to Ollama API, include them as an array under the images parameter in your request body. Here's how to format it correctly:
Image Formatting Requirements
- Base64 encoding: Images must be converted to base64 strings without additional metadata prefixes (e.g., remove "data:image/png;base64,")
- Array structure: Wrap images in an array even for single-image requests
- Multimodal models: Use supported models like LLaVA or Llama 3.2 Vision
Example Request Structure
```json
{
  "model": "llava:13b",
  "prompt": "Describe this image",
  "stream": false,
  "images": ["iVBORw0KGgoAAAANSUhEUgAAAG0AAABmCAY..."]
}
```

Key parameters:
- `model`: Must support multimodal processing
- `prompt`: Your text query/instruction
- `images`: Array of base64-encoded image strings [1][3][4]
Implementation Examples
Python:
```python
import base64
import ollama

with open("image.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

response = ollama.chat(
    model="llava:13b",
    messages=[{
        "role": "user",
        "content": "What's in this image?",
        "images": [b64_image]
    }]
)
```

cURL:
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2-vision",
  "prompt": "Explain this diagram",
  "images": ["/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAYEBQY..."]
}'
```

Supported Models & Requirements
| Model | VRAM Requirement | Base64 Handling |
|---|---|---|
| LLaVA 13B | ≥8GB | Direct string array |
| Llama3.2-Vision | 11B:8GB, 90B:64GB | File path conversion required |
For optimal results:
- Use Ollama v0.4+ for vision models[8]
- Remove metadata prefixes from base64 strings[9]
- Combine images with text prompts for contextual analysis[2][6]
This format works across all Ollama endpoints supporting multimodal input, including /api/generate and /api/chat[1][8].
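As a small illustrative sketch of the prefix rule above (the helper name `strip_data_uri_prefix` is invented here, not part of any library):

```python
import base64


def strip_data_uri_prefix(image_b64: str) -> str:
    # Ollama expects the raw base64 payload, not a data URI, so drop
    # everything up to and including the comma when a
    # "data:image/...;base64," prefix is present.
    if image_b64.startswith("data:"):
        return image_b64.split(",", 1)[1]
    return image_b64


with open("image.jpg", "rb") as f:
    raw_b64 = base64.b64encode(f.read()).decode("utf-8")  # already prefix-free

assert strip_data_uri_prefix(f"data:image/jpeg;base64,{raw_b64}") == raw_b64
```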
Citations:
- 1: https://hexdocs.pm/ollama/0.3.0/Ollama.API.html
- 2: https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html
- 3: https://github.com/ollama/ollama/blob/main/docs/api.md?plain=1
- 4: https://benyoung.blog/blog/your-private-ai-can-have-eyes-ollama-with-the-llava-model/
- 5: https://geshan.com.np/blog/2025/02/ollama-api/
- 6: https://www.runcomfy.com/comfyui-nodes/ComfyUI-OllamaGemini/ollama-api
- 7: https://www.postman.com/postman-student-programs/ollama-api/documentation/suc47x8/ollama-rest-api
- 8: https://ollama.com/blog/llama3.2-vision
- 9: API Llava Image Path ollama/ollama#1547
- 10: https://pipedream.com/apps/hyperise/integrations/ollama/generate-chat-completion-with-ollama-api-on-new-image-impression-from-hyperise-api-int_z3sVGLQG
Action Required: Update Image Handling & Add Error Handling
The updated code correctly fixes the client reference, updates the image payload format (using the images array with a raw base64 string), and replaces the hardcoded max_tokens with the class variable while also adding max_retries. However, the API call still lacks error handling. Please ensure that:
- Encoded Image: The `encoded_image` variable contains a raw base64 string (i.e. any "data:image/jpeg;base64," prefix is removed), as required by Ollama's API.
- Error Handling: Wrap the API call in an error handling block (e.g., try/except) to manage potential API failures gracefully; a sketch follows below.
- Consistency: Verify that all usages of the client and token/retry values across the codebase adhere to these updated patterns.
Committable suggestion skipped: line range outside the PR's diff.
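A rough sketch of the error-handling point, assuming the adapter shape discussed in this review (attribute names such as `self.client`, `self.model`, and `self.max_tokens` are taken from the comments above, not verified against the repository; `os` and `base64` are assumed to be imported at module level):

```python
def transcribe_image(self, input_file: str) -> str:
    """Transcribe content from an image using base64 encoding (sketch)."""
    if not os.path.isfile(input_file):
        raise FileNotFoundError(f"The file {input_file} does not exist.")

    with open(input_file, "rb") as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

    try:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {
                    "role": "user",
                    "content": "What's in this image?",
                    "images": [encoded_image],  # raw base64, no MIME prefix
                }
            ],
            max_tokens=self.max_tokens,
        )
        if not hasattr(response, "choices") or not response.choices:
            raise ValueError("Image transcription failed. No response received.")
        return response.choices[0].message.content
    except ValueError:
        raise
    except Exception as e:
        raise RuntimeError(f"Error transcribing image: {e}") from e
```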
@soobrosa What is the status of open conversations, can we resolve them?
```yaml
    with:
      example-location: ./examples/python/simple_example.py
    secrets:
      OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```
Indentation is not consistent.
@borisarzentar after consulting with @hajdul88 I evaluated llama, mistral and phi4. Phi4 runs fine but it needs a bigger machine than a default GHA runner. Llama and mistral output is gibberish.
Actionable comments posted: 0
🧹 Nitpick comments (5)
.github/workflows/test_ollama.yml (5)
12-13: Runner label verification needed.
The runner label `"buildjet-4vcpu-ubuntu-2204"` is non-standard according to common GitHub Actions labels. Please ensure that your self-hosted runner is configured with this label, or consider updating it to a supported label (e.g., `"ubuntu-22.04"`) if applicable.

🧰 Tools
🪛 actionlint (1.7.4)
13-13: label "buildjet-4vcpu-ubuntu-2204" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
41-48: Cleanup commented legacy steps.
There is a block of commented-out steps for installing and running Ollama. If these lines are no longer necessary, consider removing them to improve file clarity and maintainability.
49-54: Container startup logic is acceptable.
The approach using a `docker run` command with a brief `sleep` and then executing the service with `docker exec` is functional. Since a subsequent readiness check is in place, this pattern should work. In scenarios where startup times vary significantly, you might explore a more dynamic health-check mechanism, as sketched below.
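For illustration, a minimal polling sketch in Python (this assumes the `requests` package and Ollama's default port; in the workflow itself this would more likely be a shell loop):

```python
import time

import requests


def wait_for_ollama(url: str = "http://localhost:11434", timeout: float = 60.0) -> bool:
    """Poll the server root until it answers instead of sleeping a fixed time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if requests.get(url, timeout=2).ok:
                return True
        except requests.RequestException:
            pass  # server not up yet; keep polling
        time.sleep(1)
    return False
```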
76-93: Ensure consistent JSON payload formatting.
In the "Call ollama API" step, the JSON payload for the embeddings endpoint has inconsistent indentation (for instance, the"input"key is not aligned with"model"). While this likely does not affect execution, aligning the JSON formatting will enhance readability.
117-117: Append a newline at the end of the file.
To adhere to YAML formatting standards and prevent potential linting issues, please ensure that the file ends with a newline character.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/test_ollama.yml (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.4)
.github/workflows/test_ollama.yml
13-13: label "buildjet-4vcpu-ubuntu-2204" is unknown. available labels are "windows-latest", "windows-latest-8-cores", "windows-2022", "windows-2019", "ubuntu-latest", "ubuntu-latest-4-cores", "ubuntu-latest-8-cores", "ubuntu-latest-16-cores", "ubuntu-24.04", "ubuntu-22.04", "ubuntu-20.04", "macos-latest", "macos-latest-xl", "macos-latest-xlarge", "macos-latest-large", "macos-15-xlarge", "macos-15-large", "macos-15", "macos-14-xl", "macos-14-xlarge", "macos-14-large", "macos-14", "macos-13-xl", "macos-13-xlarge", "macos-13-large", "macos-13", "macos-12-xl", "macos-12-xlarge", "macos-12-large", "macos-12", "self-hosted", "x64", "arm", "arm64", "linux", "macos", "windows". if it is a custom label for self-hosted runner, set list of labels in actionlint.yaml config file
(runner-label)
⏰ Context from checks skipped due to timeout of 90000ms (21)
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-13
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: windows-latest
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: run_simple_example_test
- GitHub Check: docker-compose-test
🔇 Additional comments (4)
.github/workflows/test_ollama.yml (4)
58-69: Readiness check loop is robust.
The retry loop using a `for` loop and a conditional cURL check effectively confirms that Ollama is ready before proceeding. Adjust the retry count or sleep duration if your environment requires a longer startup period.
71-74: Model pulling commands are clear and appropriate.
The cURL commands to pull the required Ollama models are straightforward and correctly structured.
95-99: Docker logs dumping step is well implemented.
The commands to list running containers and dump logs from the Ollama service facilitate effective debugging.
101-116: Verify secrets consistency with downstream expectations.
The environment variables set when running the example test are clearly defined. Please double-check that these secrets match those expected by the `./examples/python/simple_example.py` script and any downstream reusable workflows. Consistency in secret names and values will help prevent integration issues.
Description
DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin
Summary by CodeRabbit