
Conversation


@Vasilije1990 Vasilije1990 commented Feb 24, 2025

Description

DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin

Summary by CodeRabbit

  • New Features

    • Integrated audio transcription and image processing capabilities to enhance media input handling.
    • Expanded containerization and orchestration support for improved deployment flexibility.
  • Chores

    • Refined the CI workflow with clearer logging and enhanced environment configuration.
    • Introduced updated deployment configurations to streamline backend and database service setups.
  • Documentation

    • Added detailed deployment instructions for seamless Kubernetes-based installations.


gitguardian bot commented Feb 24, 2025

⚠️ GitGuardian has uncovered 1 secret following the scan of your pull request.

Please consider investigating the findings and remediating the incidents. Failure to do so may lead to compromising the associated services or software components.

🔎 Detected hardcoded secret in your pull request
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
| --- | --- | --- | --- | --- |
| 9573981 | Triggered | Generic Password | 62d2d76 | infra/docker-compose-helm.yml |
🛠 Guidelines to remediate hardcoded secrets
  1. Understand the implications of revoking this secret by investigating where it is used in your code.
  2. Replace and store your secret safely, following best practices.
  3. Revoke and rotate this secret.
  4. If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
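To make step 2 concrete, here is a minimal sketch of reading the database password from the environment at application startup instead of hardcoding it in docker-compose-helm.yml. The variable name `POSTGRES_PASSWORD` is an assumption for illustration, not taken from this PR:

```python
import os

def get_db_password() -> str:
    """Read the database password from the environment, failing fast if unset."""
    # POSTGRES_PASSWORD is a hypothetical variable name for this sketch.
    password = os.environ.get("POSTGRES_PASSWORD")
    if not password:
        # Refuse to start rather than fall back to a hardcoded default.
        raise RuntimeError("POSTGRES_PASSWORD is not set; refusing to start")
    return password
```

The compose file would then reference the variable rather than a literal value, so rotating the secret never requires a code change.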

To avoid such incidents in the future consider


🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.


coderabbitai bot commented Feb 24, 2025

Walkthrough

The changes update the GitHub Actions workflow for testing the Ollama container by adding readability improvements and enhanced logging, and they enrich the Ollama API adapter with support for API versioning as well as new methods for audio transcription and image analysis. In addition, a collection of infrastructure files—including Helm charts, Docker configurations, Kubernetes manifests, and a new .gitignore—has been added to set up the deployment environment for both the Cognee application and its PostgreSQL backend.

Changes

| File(s) | Change Summary |
| --- | --- |
| .github/workflows/test_ollama.yml | Added blank lines for readability, enhanced logging for service readiness with attempt counts, updated environment variables, and adjusted dependency installation (including torch). |
| .../cognee/infrastructure/llm/ollama/adapter.py | Updated the OllamaAPIAdapter: added a new optional api_version parameter and a constant MAX_RETRIES, updated the class docstring, and introduced new methods create_transcript and transcribe_image. |
| infra/{.gitignore, Chart.yaml, Dockerfile, README.md, docker-compose-helm.yml, templates/*, values.yaml} | Added multiple new infrastructure files for Helm chart configuration, Docker setup, Kubernetes deployments/services/PVCs for both the Cognee app and PostgreSQL, and a comprehensive .gitignore to manage development artifacts. |
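Based on the summary above, the new create_transcript path might look roughly like the sketch below. The client object, parameter names, and the MAX_RETRIES value are assumptions inferred from the change summary, not the adapter's actual code:

```python
import os
from pathlib import Path

class OllamaAPIAdapterSketch:
    """Illustrative stand-in for the updated OllamaAPIAdapter."""

    MAX_RETRIES = 5  # assumed value; the PR adds a class constant like this

    def __init__(self, endpoint, api_key, model, api_version=None, aclient=None):
        self.endpoint = endpoint
        self.api_key = api_key
        self.model = model
        self.api_version = api_version  # new optional parameter introduced by the PR
        self.aclient = aclient  # transcription client, injected here for testability

    def create_transcript(self, input_file):
        """Verify the audio file exists, then delegate to the client."""
        if not os.path.isfile(input_file):
            raise FileNotFoundError(f"The file {input_file} does not exist.")
        return self.aclient.transcription(
            model=self.model,
            file=Path(input_file),
            api_key=self.api_key,
            api_base=self.endpoint,
            api_version=self.api_version,
            max_retries=self.MAX_RETRIES,
        )
```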

Sequence Diagram(s)

sequenceDiagram
    participant U as User
    participant A as OllamaAPIAdapter
    participant FS as File System
    participant C as AClient

    U->>A: call create_transcript(input_file)
    A->>FS: Verify input_file exists?
    alt File exists
        A->>C: Call transcriptions.create(input_file, model, ...)
        C-->>A: Return transcript
        A-->>U: Return transcript
    else File missing
        A-->>U: Raise FileNotFoundError
    end
sequenceDiagram
    participant U as User
    participant A as OllamaAPIAdapter
    participant FS as File System
    participant C as AClient

    U->>A: call transcribe_image(input_file)
    A->>FS: Read and encode image as base64
    A->>C: Call chat.completions.create(message, encoded_image, api_version, MAX_RETRIES)
    C-->>A: Return image transcription result
    A-->>U: Return result
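The transcribe_image flow above hinges on base64-encoding the image and embedding it in a chat message. A hedged sketch of that payload construction follows; the message shape mirrors the common OpenAI-style format and is an assumption, not the adapter's exact code:

```python
import base64

def build_image_message(image_bytes: bytes, prompt: str) -> dict:
    """Embed raw image bytes in a chat message as a base64 data URL."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {
                "type": "image_url",
                # The MIME type is assumed; a real adapter would detect it.
                "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
            },
        ],
    }
```

A call such as chat.completions.create(messages=[build_image_message(data, "Describe this image")], ...) would then carry the encoded image to the model.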

Possibly related PRs

  • feat: Draft ollama test #566: Involves similar updates to the workflow file, particularly enhancements to job configurations and logging when testing the Ollama service.

Suggested labels

run-checks

Suggested reviewers

  • soobrosa

Poem

I’m hopping through the code with glee,
New lines and logs as crisp as can be.
Transcripts and images in a neat new dance,
Infra files setting up our launch at a glance.
I nibble on bugs and code so bright—
A bunny’s cheer for changes done right! 🐇


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0516d6e and bacfb01.

📒 Files selected for processing (2)
  • .github/workflows/test_ollama.yml (7 hunks)
  • cognee/infrastructure/llm/ollama/adapter.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • .github/workflows/test_ollama.yml
  • cognee/infrastructure/llm/ollama/adapter.py
⏰ Context from checks skipped due to timeout of 90000ms (32)
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: Test on macos-13
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-15
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: run_networkx_metrics_test / test
  • GitHub Check: Test on macos-13
  • GitHub Check: Test on macos-13
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: run_eval_framework_test / test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: lint (ubuntu-latest, 3.11.x)
  • GitHub Check: windows-latest
  • GitHub Check: test
  • GitHub Check: lint (ubuntu-latest, 3.10.x)
  • GitHub Check: Build Cognee Backend Docker App Image
  • GitHub Check: run_simple_example_test
  • GitHub Check: docker-compose-test

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (10)
cognee/infrastructure/llm/ollama/adapter.py (1)

61-63: Remove commented code.

Remove the commented code block as it's not being used.

.github/workflows/test_ollama.yml (2)

47-48: Typo in API Call Prompt:
There is a typo in the prompt text—"asnwer" appears instead of "answer". Correcting this will improve clarity.

-          curl -d '{"model": "llama3.2", "stream": false, "prompt":"Whatever I say, asnwer with Yes"}' http://localhost:11434/api/generate
+          curl -d '{"model": "llama3.2", "stream": false, "prompt":"Whatever I say, answer with Yes"}' http://localhost:11434/api/generate

84-84: YAML Formatting:
A newline character is missing at the end of the file. Please add a newline at the end to comply with YAML linting guidelines.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 84-84: no new line character at the end of file

(new-line-at-end-of-file)

.github/workflows/upgrade_deps.yml (1)

3-28: Enhanced Trigger Configuration:
The addition of new triggers for push and pull_request events—specifically monitoring changes to poetry.lock and pyproject.toml on both the main and dev branches—as well as the optional debug_enabled input for workflow_dispatch, significantly improves the automation and flexibility of dependency updates. This configuration looks well thought-out.

infra/README.md (2)

14-14: Improve clarity in repository cloning instructions.

The sentence on line 14 reads, “Clone the Repository Clone this repository to your local machine and navigate to the directory.” Consider rephrasing to remove the duplication (e.g., “Clone the repository to your local machine and navigate into its directory.”).


16-16: Remove trailing punctuation from heading.

The heading “## Deploy Helm Chart:” on line 16 ends with a colon. To comply with markdownlint’s MD026 rule, please remove the trailing punctuation.
Suggested change:

-## Deploy Helm Chart:
+## Deploy Helm Chart
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

16-16: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

infra/templates/postgres_service.yaml (1)

14-14: Remove unnecessary trailing blank line.

A trailing blank line at the end (line 14) triggers a YAMLlint warning. Please remove it to enhance file consistency.

🧰 Tools
🪛 YAMLlint (1.35.1)

[warning] 14-14: too many blank lines

(1 > 0) (empty-lines)

infra/templates/cognee_deployment.yaml (1)

4-4: Consider Quoting Template Expressions for YAML Linting
The Helm templating expressions (e.g. {{ .Release.Name }}) can sometimes trigger YAML lint errors (as seen in the static analysis hint). You might consider wrapping these expressions in quotes (e.g. "{{ .Release.Name }}-cognee") if YAMLlint issues become problematic.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

infra/docker-compose-helm.yml (1)

3-4: Nitpick: Extra Space in Image Declaration
The space between image and the colon on line 3 (i.e. "image : cognee-backend:latest") is non-standard. Removing the extra space can improve stylistic consistency with typical YAML formatting.

infra/templates/postgres_deployment.yaml (1)

4-4: Tip: Quote Helm Template Expressions if Necessary
Similar to the cognee_deployment.yaml, quoting expressions like {{ .Release.Name }} (e.g. "{{ .Release.Name }}-postgres") can help avoid YAML lint errors, as noted by the static analysis tool.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 452eaf0 and 57630ee.

📒 Files selected for processing (15)
  • .github/workflows/test_gemini.yml (1 hunks)
  • .github/workflows/test_ollama.yml (1 hunks)
  • .github/workflows/upgrade_deps.yml (1 hunks)
  • cognee/infrastructure/llm/ollama/adapter.py (2 hunks)
  • infra/.gitignore (1 hunks)
  • infra/Chart.yaml (1 hunks)
  • infra/Dockerfile (1 hunks)
  • infra/README.md (1 hunks)
  • infra/docker-compose-helm.yml (1 hunks)
  • infra/templates/cognee_deployment.yaml (1 hunks)
  • infra/templates/cognee_service.yaml (1 hunks)
  • infra/templates/postgres_deployment.yaml (1 hunks)
  • infra/templates/postgres_pvc.yaml (1 hunks)
  • infra/templates/postgres_service.yaml (1 hunks)
  • infra/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • infra/.gitignore
  • infra/Chart.yaml
🧰 Additional context used
🪛 YAMLlint (1.35.1)
infra/templates/postgres_service.yaml

[warning] 14-14: too many blank lines

(1 > 0) (empty-lines)


[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

infra/templates/postgres_pvc.yaml

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

infra/templates/postgres_deployment.yaml

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

.github/workflows/test_ollama.yml

[error] 84-84: no new line character at the end of file

(new-line-at-end-of-file)

infra/templates/cognee_deployment.yaml

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

infra/templates/cognee_service.yaml

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

🪛 actionlint (1.7.4)
.github/workflows/test_gemini.yml

22-22: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


23-23: secret "EMBEDDING_API_KEY" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


24-24: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


25-25: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


26-26: secret "EMBEDDING_API_VERSION" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


27-27: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


28-28: secret "EMBEDDING_MAX_TOKENS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


29-29: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


31-31: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


32-32: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


33-33: secret "LLM_API_VERSION" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)

🪛 Ruff (0.8.2)
cognee/infrastructure/llm/ollama/adapter.py

19-19: Redefinition of unused api_version from line 1

(F811)

🪛 GitHub Actions: lint | ruff format
cognee/infrastructure/llm/ollama/adapter.py

[error] 1-1: Ruff formatting check failed. Would reformat: cognee/infrastructure/llm/ollama/adapter.py. Run 'ruff format' to fix code style issues.

🪛 GitHub Actions: lint | ruff lint
cognee/infrastructure/llm/ollama/adapter.py

[error] 19-19: Ruff: F811 Redefinition of unused api_version from line 1. Remove definition: api_version.

🪛 markdownlint-cli2 (0.17.2)
infra/README.md

16-16: Trailing punctuation in heading
Punctuation: ':'

(MD026, no-trailing-punctuation)

⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Test on macos-13
  • GitHub Check: windows-latest
  • GitHub Check: Test on ubuntu-22.04
🔇 Additional comments (9)
cognee/infrastructure/llm/ollama/adapter.py (3)

13-17: LGTM! Good class structure improvements.

The updated docstring and new class attributes improve code organization and maintainability.


19-26: LGTM! Constructor changes look good.

The addition of the api_version parameter and its assignment is clean and follows good practices.

🧰 Tools
🪛 Ruff (0.8.2)

19-19: Redefinition of unused api_version from line 1

(F811)

🪛 GitHub Actions: lint | ruff lint

[error] 19-19: Ruff: F811 Redefinition of unused api_version from line 1. Remove definition: api_version.


75-103: LGTM! Well-implemented image transcription.

The implementation follows best practices with proper error handling and consistent configuration usage.

.github/workflows/upgrade_deps.yml (1)

29-61: Job Steps and PR Creation:
The job steps (checkout, setting up Python, installing Poetry, updating dependencies, and creating the pull request via peter-evans/create-pull-request) are implemented in a clear and sequential manner. Ensure that the branch names and tokens used here align with your overall release process.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 61-61: no new line character at the end of file

(new-line-at-end-of-file)

infra/values.yaml (1)

1-23: Configuration file approved.

The infra/values.yaml file is well-structured, and the configurations for both the Cognee application and PostgreSQL services are clearly defined.

infra/templates/cognee_deployment.yaml (1)

1-33: Overall Deployment Configuration Looks Good
The Kubernetes deployment is clearly defined with Helm templating. The structure—with metadata, spec, and container configuration—is well laid out and aligns with expected best practices for a Kubernetes deployment.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

infra/docker-compose-helm.yml (1)

1-47: Docker Compose Configuration Looks Cohesive
The file effectively defines the cognee and postgres services along with the associated networks and volumes. It appears well integrated with the Helm chart and Kubernetes resources, ensuring a seamless transition between container orchestration and development.

infra/Dockerfile (1)

1-46: Dockerfile Overall Structure is Sound
Aside from the package installation command noted above, the Dockerfile efficiently sets up the Python environment, configures Poetry, and copies the necessary artifacts including the application code and entrypoint script.

infra/templates/postgres_deployment.yaml (1)

1-36: Postgres Deployment Configuration is Structurally Correct
The PostgreSQL deployment is well defined—with dynamic naming, environment variables, port configuration, and volume mounts—all managed via Helm templating. This ensures persistent storage and proper configuration for the database container.

🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

Comment on lines 54 to 74

```python
def create_transcript(self, input):
    """Generate a audio transcript from a user query."""

    if not os.path.isfile(input):
        raise FileNotFoundError(f"The file {input} does not exist.")

    # with open(input, 'rb') as audio_file:
    #     audio_data = audio_file.read()

    transcription = self.aclient.transcription(
        model=self.transcription_model,
        file=Path(input),
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_retries=self.MAX_RETRIES,
    )

    return transcription
```


⚠️ Potential issue

Initialize missing transcription_model attribute.

The transcription_model attribute used in create_transcript is not initialized in the constructor.

Add the following to the constructor:

```diff
 def __init__(self, endpoint: str, api_key: str, model: str, name: str, max_tokens: int, api_version: str = None) -> None:
     self.name = name
     self.model = model
     self.api_key = api_key
     self.endpoint = endpoint
     self.max_tokens = max_tokens
     self.api_version = api_version
+    self.transcription_model = model  # or add a new parameter if a different model is needed for transcription
```

Comment on lines 19 to 33
```yaml
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GRAPHISTRY_USERNAME: ${{ secrets.GRAPHISTRY_USERNAME }}
GRAPHISTRY_PASSWORD: ${{ secrets.GRAPHISTRY_PASSWORD }}
EMBEDDING_PROVIDER: "gemini"
EMBEDDING_API_KEY: ${{ secrets.GEMINI_API_KEY }}
EMBEDDING_MODEL: "gemini/text-embedding-004"
EMBEDDING_ENDPOINT: "https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004"
EMBEDDING_API_VERSION: "v1beta"
EMBEDDING_DIMENSIONS: 768
EMBEDDING_MAX_TOKENS: 8076
LLM_PROVIDER: "gemini"
LLM_API_KEY: ${{ secrets.GEMINI_API_KEY }}
LLM_MODEL: "gemini/gemini-1.5-flash"
LLM_ENDPOINT: "https://generativelanguage.googleapis.com/"
LLM_API_VERSION: "v1beta"
```

⚠️ Potential issue

Secret Mapping Mismatch in Reusable Workflow:
The secrets provided here—such as EMBEDDING_PROVIDER, EMBEDDING_API_KEY, EMBEDDING_MODEL, EMBEDDING_ENDPOINT, EMBEDDING_API_VERSION, EMBEDDING_DIMENSIONS, EMBEDDING_MAX_TOKENS, LLM_PROVIDER, LLM_MODEL, and LLM_ENDPOINT—are not defined as inputs in the referenced reusable workflow (./.github/workflows/reusable_python_example.yml), as noted by the static analysis hints. This mismatch may lead to runtime failures if these values are expected by the reusable workflow.

🧰 Tools
🪛 actionlint (1.7.4)

22-22: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


23-23: secret "EMBEDDING_API_KEY" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


24-24: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


25-25: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


26-26: secret "EMBEDDING_API_VERSION" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


27-27: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


28-28: secret "EMBEDDING_MAX_TOKENS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


29-29: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


31-31: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


32-32: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)


33-33: secret "LLM_API_VERSION" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"

(workflow-call)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-postgres-pvc
```

⚠️ Potential issue

Quote templated expression to resolve YAML syntax error.

The metadata name on line 4 is defined as:

name: {{ .Release.Name }}-postgres-pvc

This can trigger a YAML parsing error. Please enclose the templated expression in quotes:

-  name: {{ .Release.Name }}-postgres-pvc
+  name: "{{ .Release.Name }}-postgres-pvc"
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

Comment on lines +4 to +6
```yaml
  name: {{ .Release.Name }}-cognee
  labels:
    app: {{ .Release.Name }}-cognee
```

⚠️ Potential issue

Quote templated values in metadata to prevent YAML syntax issues.

The service name and its label (quoted in the snippet above) use unquoted template expressions.

To avoid YAML parsing errors, please enclose these template expressions in quotes:

-  name: {{ .Release.Name }}-cognee
+  name: "{{ .Release.Name }}-cognee"
 
-    app: {{ .Release.Name }}-cognee
+    app: "{{ .Release.Name }}-cognee"
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

Comment on lines +4 to +6
```yaml
  name: {{ .Release.Name }}-postgres
  labels:
    app: {{ .Release.Name }}-postgres
```

⚠️ Potential issue

Quote templated values in metadata to resolve YAML syntax errors.

The metadata fields on lines 4 and 6 use unquoted template expressions:

Please update them by enclosing the expressions in quotes:

-  name: {{ .Release.Name }}-postgres
+  name: "{{ .Release.Name }}-postgres"
 
-    app: {{ .Release.Name }}-postgres
+    app: "{{ .Release.Name }}-postgres"
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 4-4: syntax error: expected &lt;block end&gt;, but found '&lt;scalar&gt;'

(syntax)

Comment on lines +11 to +15
```dockerfile
RUN apt-get update && apt-get install

RUN apt-get install -y \
    gcc \
    libpq-dev
```

🛠️ Refactor suggestion

⚠️ Potential issue

Critical: Merge Update and Install Commands
The Dockerfile contains an incomplete package installation command on line 11 (RUN apt-get update && apt-get install, which names no packages) followed by a separate installation on line 13. Because each RUN creates its own layer, an apt-get install that is split from its apt-get update can run against a stale package index and fail the build. Combine the steps into a single instruction. For example:

- RUN apt-get update && apt-get install
- 
- RUN apt-get install -y \
-   gcc \
-   libpq-dev
+ RUN apt-get update && apt-get install -y gcc libpq-dev

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
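Beyond merging the two commands, a common hardening of this step (a sketch, not the repo's actual Dockerfile) also skips recommended packages and clears the apt cache so the layer stays small:

```dockerfile
# Single layer: refresh the index, install build deps, drop the cached lists.
RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc libpq-dev \
    && rm -rf /var/lib/apt/lists/*
```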
# audio_data = audio_file.read()

transcription = self.aclient.transcription(
model=self.transcription_model,
Should be self.model here I think.

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 2

♻️ Duplicate comments (1)
cognee/infrastructure/llm/ollama/adapter.py (1)

54-72: ⚠️ Potential issue

Initialize missing transcription_model attribute.

The transcription_model attribute used in create_transcript is not initialized in the constructor.

Add the following to the constructor:

 def __init__(self, endpoint: str, api_key: str, model: str, name: str, max_tokens: int, api_version: str = None) -> None:
     self.name = name
     self.model = model
     self.api_key = api_key
     self.endpoint = endpoint
     self.max_tokens = max_tokens
     self.api_version = api_version
+    self.transcription_model = model  # or add a new parameter if a different model is needed for transcription

Also, add type hints to the input parameter:

-def create_transcript(self, input):
+def create_transcript(self, input: str | Path):

Additionally, remove commented out code (lines 60-61) as it's not being used.
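Putting the pieces of this comment together, the constructor change can be sketched end to end; the class name, the fallback rule (reuse the main model when no dedicated transcription model is given), and the argument values below are illustrative assumptions, not the adapter's actual code:

```python
# Sketch of the proposed constructor fix: initialize transcription_model,
# falling back to the main model unless a dedicated one is supplied.
class OllamaAdapterSketch:
    MAX_RETRIES = 5  # maximum number of retries for API calls

    def __init__(
        self,
        endpoint: str,
        api_key: str,
        model: str,
        name: str,
        max_tokens: int,
        api_version: str = None,
        transcription_model: str = None,
    ) -> None:
        self.name = name
        self.model = model
        self.api_key = api_key
        self.endpoint = endpoint
        self.max_tokens = max_tokens
        self.api_version = api_version
        # Reuse the chat model for transcription unless told otherwise.
        self.transcription_model = transcription_model or model


adapter = OllamaAdapterSketch("http://localhost:11434", "key", "llava", "ollama", 300)
print(adapter.transcription_model)  # → llava
```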

🧹 Nitpick comments (2)
cognee/infrastructure/llm/ollama/adapter.py (2)

14-16: Add documentation for class variables.

Add docstrings to the class variables to explain their purpose.

 api_version: str
+"""The version of the API to use for requests."""

 MAX_RETRIES = 5
+"""Maximum number of retries for API calls."""

7-9: Add import organization.

Consider organizing imports according to PEP 8 conventions (standard library, third-party, local application).

-from typing import Type
-from pydantic import BaseModel
-import instructor
-from cognee.infrastructure.llm.llm_interface import LLMInterface
-from cognee.infrastructure.llm.config import get_llm_config
-from openai import OpenAI
-import base64
-from pathlib import Path
-import os
+import base64
+import os
+from pathlib import Path
+from typing import Type
+
+from openai import OpenAI
+from pydantic import BaseModel
+import instructor
+
+from cognee.infrastructure.llm.llm_interface import LLMInterface
+from cognee.infrastructure.llm.config import get_llm_config
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 57630ee and 0516d6e.

📒 Files selected for processing (1)
  • cognee/infrastructure/llm/ollama/adapter.py (2 hunks)
🧰 Additional context used
🪛 GitHub Actions: lint | ruff format
cognee/infrastructure/llm/ollama/adapter.py

[error] 1-1: Ruff formatting check failed. 1 file would be reformatted. Please run 'ruff format' to fix code style issues in this file.

⏰ Context from checks skipped due to timeout of 90000ms (31)
  • GitHub Check: Test on macos-15
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on macos-13
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: run_eval_framework_test / test
  • GitHub Check: run_networkx_metrics_test / test
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-13
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on macos-13
  • GitHub Check: lint (ubuntu-latest, 3.11.x)
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: lint (ubuntu-latest, 3.10.x)
  • GitHub Check: windows-latest
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: docker-compose-test
  • GitHub Check: run_simple_example_test
  • GitHub Check: Build Cognee Backend Docker App Image

self.api_key = api_key
self.endpoint = endpoint
self.max_tokens = max_tokens
self.api_version= api_version
⚠️ Potential issue

Fix formatting issue.

Add a space after the equals sign.

-        self.api_version= api_version
+        self.api_version = api_version

Comment on lines 74 to 102
def transcribe_image(self, input) -> BaseModel:
with open(input, "rb") as image_file:
encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

return self.aclient.completion(
model=self.model,
messages=[
{
"role": "user",
"content": [
{
"type": "text",
"text": "What’s in this image?",
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{encoded_image}",
},
},
],
}
],
api_key=self.api_key,
api_base=self.endpoint,
api_version=self.api_version,
max_tokens=300,
max_retries=self.MAX_RETRIES,
)
🛠️ Refactor suggestion

Improve the transcribe_image method.

The method needs several improvements:

  1. Add type hints to the input parameter
  2. Add file existence check similar to create_transcript
  3. Use the instance's max_tokens instead of hardcoding 300
  4. Consider parameterizing the prompt text
-def transcribe_image(self, input) -> BaseModel:
+def transcribe_image(self, input: str | Path) -> BaseModel:
+    """Generate a description of an image.
+    
+    Args:
+        input: Path to the image file
+        
+    Returns:
+        BaseModel: The model's description of the image
+        
+    Raises:
+        FileNotFoundError: If the image file does not exist
+    """
+    if not os.path.isfile(input):
+        raise FileNotFoundError(f"The file {input} does not exist.")
+        
     with open(input, "rb") as image_file:
         encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

     return self.aclient.completion(
         model=self.model,
         messages=[
             {
                 "role": "user",
                 "content": [
                     {
                         "type": "text",
-                        "text": "What's in this image?",
+                        "text": "What's in this image?",  # Consider making this a parameter
                     },
                     {
                         "type": "image_url",
                         "image_url": {
                             "url": f"data:image/jpeg;base64,{encoded_image}",
                         },
                     },
                 ],
             }
         ],
         api_key=self.api_key,
         api_base=self.endpoint,
         api_version=self.api_version,
-        max_tokens=300,
+        max_tokens=self.max_tokens,
         max_retries=self.MAX_RETRIES,
     )
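The data-URL encoding that transcribe_image performs can be exercised in isolation with the standard library alone; the helper name and the placeholder JPEG bytes below are illustrative:

```python
import base64


def to_data_url(image_bytes: bytes, mime: str = "image/jpeg") -> str:
    """Build the base64 data URL used for the image_url message part."""
    encoded = base64.b64encode(image_bytes).decode("utf-8")
    return f"data:{mime};base64,{encoded}"


# Placeholder bytes standing in for a real JPEG file's contents.
url = to_data_url(b"\xff\xd8\xff\xe0not-a-real-jpeg")
print(url.startswith("data:image/jpeg;base64,"))  # → True
```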

@soobrosa soobrosa deleted the add_helm_to_core branch May 28, 2025 12:49