feat: Add helm to core #578
Conversation
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
|---|---|---|---|---|
| 9573981 | Triggered | Generic Password | 62d2d76 | infra/docker-compose-helm.yml |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace the secret and store the new one safely; see the linked best practices.
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act: you might completely break other contributing developers' workflows, and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and to ease remediation.
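As a concrete illustration of the pre-commit suggestion above, a minimal `.pre-commit-config.yaml` can wire GitGuardian's `ggshield` scanner into the commit hook. This is a sketch: the `rev` tag below is an assumption and should be pinned to a real ggshield release, and the hook id should be verified against the ggshield documentation.

```yaml
# Sketch of a pre-commit secret-scanning hook (rev is an assumed
# placeholder; pin it to an actual ggshield release tag).
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: v1.25.0
    hooks:
      - id: ggshield
        language_version: python3
        stages: [commit]
```

After adding the file, run `pre-commit install` once in the clone so the hook fires on every commit.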
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
Walkthrough
The changes update the GitHub Actions workflow for testing the Ollama container by adding readability improvements and enhanced logging, and they enrich the Ollama API adapter with support for API versioning as well as new methods for audio transcription and image analysis. In addition, a collection of infrastructure files—including Helm charts, Docker configurations, Kubernetes manifests, and a new .gitignore—has been added to set up the deployment environment for both the Cognee application and its PostgreSQL backend.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant U as User
    participant A as OllamaAPIAdapter
    participant FS as File System
    participant C as ACLient
    U->>A: call create_transcript(input_file)
    A->>FS: Verify input_file exists?
    alt File exists
        A->>C: Call transcriptions.create(input_file, model, ...)
        C-->>A: Return transcript
        A-->>U: Return transcript
    else File missing
        A-->>U: Raise FileNotFoundError
    end
```
```mermaid
sequenceDiagram
    participant U as User
    participant A as OllamaAPIAdapter
    participant FS as File System
    participant C as ACLient
    U->>A: call transcribe_image(input_file)
    A->>FS: Read and encode image as base64
    A->>C: Call chat.completions.create(message, encoded_image, api_version, MAX_RETRIES)
    C-->>A: Return image transcription result
    A-->>U: Return result
```
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (2)
🚧 Files skipped from review as they are similar to previous changes (2)
⏰ Context from checks skipped due to timeout of 90000ms (32)
Actionable comments posted: 7
🧹 Nitpick comments (10)
cognee/infrastructure/llm/ollama/adapter.py (1)
61-63: Remove commented code.
The commented-out code block is unused and should be deleted.
.github/workflows/test_ollama.yml (2)
47-48: Typo in API Call Prompt:
There is a typo in the prompt text: "asnwer" appears instead of "answer". Correcting this will improve clarity.

```diff
- curl -d '{"model": "llama3.2", "stream": false, "prompt":"Whatever I say, asnwer with Yes"}' http://localhost:11434/api/generate
+ curl -d '{"model": "llama3.2", "stream": false, "prompt":"Whatever I say, answer with Yes"}' http://localhost:11434/api/generate
```
84-84: YAML Formatting:
A newline character is missing at the end of the file. Please add one to comply with YAML linting guidelines.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 84-84: no new line character at the end of file
(new-line-at-end-of-file)
.github/workflows/upgrade_deps.yml (1)
3-28: Enhanced Trigger Configuration:
The addition of new triggers for `push` and `pull_request` events (specifically monitoring changes to `poetry.lock` and `pyproject.toml` on both the `main` and `dev` branches), as well as the optional `debug_enabled` input for `workflow_dispatch`, significantly improves the automation and flexibility of dependency updates. This configuration looks well thought out.

infra/README.md (2)
14-14: Improve clarity in repository cloning instructions.
The sentence on line 14 reads, "Clone the Repository Clone this repository to your local machine and navigate to the directory." Consider rephrasing to remove the duplication (e.g., "Clone the repository to your local machine and navigate into its directory.").
16-16: Remove trailing punctuation from heading.
The heading "## Deploy Helm Chart:" on line 16 ends with a colon. To comply with markdownlint's MD026 rule, remove the trailing punctuation.

Suggested change:

```diff
-## Deploy Helm Chart:
+## Deploy Helm Chart
```

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
16-16: Trailing punctuation in heading
Punctuation: ':'(MD026, no-trailing-punctuation)
infra/templates/postgres_service.yaml (1)
14-14: Remove unnecessary trailing blank line.
A trailing blank line at the end of the file (line 14) triggers a YAMLlint warning. Please remove it for consistency.
🧰 Tools
🪛 YAMLlint (1.35.1)
[warning] 14-14: too many blank lines
(1 > 0) (empty-lines)
infra/templates/cognee_deployment.yaml (1)
4-4: Consider Quoting Template Expressions for YAML Linting
The Helm templating expressions (e.g. `{{ .Release.Name }}`) can sometimes trigger YAML lint errors (as seen in the static analysis hint). You might consider wrapping these expressions in quotes (e.g. `"{{ .Release.Name }}-cognee"`) if YAMLlint issues become problematic.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
infra/docker-compose-helm.yml (1)
3-4: Nitpick: Extra Space in Image Declaration
The space between `image` and the colon on line 3 (i.e. `image : cognee-backend:latest`) is non-standard. Removing the extra space improves stylistic consistency with typical YAML formatting.

infra/templates/postgres_deployment.yaml (1)
4-4: Tip: Quote Helm Template Expressions if Necessary
Similar to `cognee_deployment.yaml`, quoting expressions like `{{ .Release.Name }}` (e.g. `"{{ .Release.Name }}-postgres"`) can help avoid YAML lint errors, as noted by the static analysis tool.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- .github/workflows/test_gemini.yml (1 hunks)
- .github/workflows/test_ollama.yml (1 hunks)
- .github/workflows/upgrade_deps.yml (1 hunks)
- cognee/infrastructure/llm/ollama/adapter.py (2 hunks)
- infra/.gitignore (1 hunks)
- infra/Chart.yaml (1 hunks)
- infra/Dockerfile (1 hunks)
- infra/README.md (1 hunks)
- infra/docker-compose-helm.yml (1 hunks)
- infra/templates/cognee_deployment.yaml (1 hunks)
- infra/templates/cognee_service.yaml (1 hunks)
- infra/templates/postgres_deployment.yaml (1 hunks)
- infra/templates/postgres_pvc.yaml (1 hunks)
- infra/templates/postgres_service.yaml (1 hunks)
- infra/values.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- infra/.gitignore
- infra/Chart.yaml
🧰 Additional context used
🪛 YAMLlint (1.35.1)
infra/templates/postgres_service.yaml
[warning] 14-14: too many blank lines
(1 > 0) (empty-lines)
[error] 4-4: syntax error: expected , but found ''
(syntax)
infra/templates/postgres_pvc.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
infra/templates/postgres_deployment.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
.github/workflows/test_ollama.yml
[error] 84-84: no new line character at the end of file
(new-line-at-end-of-file)
infra/templates/cognee_deployment.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
infra/templates/cognee_service.yaml
[error] 4-4: syntax error: expected , but found ''
(syntax)
🪛 actionlint (1.7.4)
.github/workflows/test_gemini.yml
22-22: secret "EMBEDDING_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
23-23: secret "EMBEDDING_API_KEY" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
24-24: secret "EMBEDDING_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
25-25: secret "EMBEDDING_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
26-26: secret "EMBEDDING_API_VERSION" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
27-27: secret "EMBEDDING_DIMENSIONS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
28-28: secret "EMBEDDING_MAX_TOKENS" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
29-29: secret "LLM_PROVIDER" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
31-31: secret "LLM_MODEL" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
32-32: secret "LLM_ENDPOINT" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
33-33: secret "LLM_API_VERSION" is not defined in "./.github/workflows/reusable_python_example.yml" reusable workflow. defined secrets are "GRAPHISTRY_PASSWORD", "GRAPHISTRY_USERNAME", "LLM_API_KEY", "OPENAI_API_KEY"
(workflow-call)
🪛 Ruff (0.8.2)
cognee/infrastructure/llm/ollama/adapter.py
19-19: Redefinition of unused api_version from line 1
(F811)
🪛 GitHub Actions: lint | ruff format
cognee/infrastructure/llm/ollama/adapter.py
[error] 1-1: Ruff formatting check failed. Would reformat: cognee/infrastructure/llm/ollama/adapter.py. Run 'ruff format' to fix code style issues.
🪛 GitHub Actions: lint | ruff lint
cognee/infrastructure/llm/ollama/adapter.py
[error] 19-19: Ruff: F811 Redefinition of unused api_version from line 1. Remove definition: api_version.
🪛 markdownlint-cli2 (0.17.2)
infra/README.md
16-16: Trailing punctuation in heading
Punctuation: ':'
(MD026, no-trailing-punctuation)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Test on macos-13
- GitHub Check: windows-latest
- GitHub Check: Test on ubuntu-22.04
🔇 Additional comments (9)
cognee/infrastructure/llm/ollama/adapter.py (3)
13-17: LGTM! Good class structure improvements.
The updated docstring and new class attributes improve code organization and maintainability.
19-26: LGTM! Constructor changes look good.
The addition of the `api_version` parameter and its assignment is clean and follows good practices.

🧰 Tools
🪛 Ruff (0.8.2)
19-19: Redefinition of unused `api_version` from line 1 (F811)
🪛 GitHub Actions: lint | ruff lint
[error] 19-19: Ruff: F811 Redefinition of unused `api_version` from line 1. Remove definition: `api_version`.
75-103: LGTM! Well-implemented image transcription.
The implementation follows best practices with proper error handling and consistent configuration usage.
.github/workflows/upgrade_deps.yml (1)
29-61: Job Steps and PR Creation:
The job steps (checkout, setting up Python, installing Poetry, updating dependencies, and creating the pull request via `peter-evans/create-pull-request`) are implemented in a clear and sequential manner. Ensure that the branch names and tokens used here align with your overall release process.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 61-61: no new line character at the end of file
(new-line-at-end-of-file)
infra/values.yaml (1)
1-23: Configuration file approved.
The `infra/values.yaml` file is well-structured, and the configurations for both the Cognee application and PostgreSQL services are clearly defined.

infra/templates/cognee_deployment.yaml (1)
1-33: Overall Deployment Configuration Looks Good
The Kubernetes deployment is clearly defined with Helm templating. The structure (metadata, spec, and container configuration) is well laid out and aligns with expected best practices for a Kubernetes deployment.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
infra/docker-compose-helm.yml (1)
1-47: Docker Compose Configuration Looks Cohesive
The file effectively defines the `cognee` and `postgres` services along with the associated networks and volumes. It appears well integrated with the Helm chart and Kubernetes resources, ensuring a seamless transition between container orchestration and development.

infra/Dockerfile (1)
1-46: Dockerfile Overall Structure is Sound
Aside from the package installation command noted above, the Dockerfile efficiently sets up the Python environment, configures Poetry, and copies the necessary artifacts including the application code and entrypoint script.infra/templates/postgres_deployment.yaml (1)
1-36: Postgres Deployment Configuration is Structurally Correct
The PostgreSQL deployment is well defined, with dynamic naming, environment variables, port configuration, and volume mounts all managed via Helm templating. This ensures persistent storage and proper configuration for the database container.

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
```python
def create_transcript(self, input):
    """Generate a audio transcript from a user query."""

    if not os.path.isfile(input):
        raise FileNotFoundError(f"The file {input} does not exist.")

    # with open(input, 'rb') as audio_file:
    #     audio_data = audio_file.read()

    transcription = self.aclient.transcription(
        model=self.transcription_model,
        file=Path(input),
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_retries=self.MAX_RETRIES,
    )

    return transcription
```
Initialize missing transcription_model attribute.
The transcription_model attribute used in create_transcript is not initialized in the constructor.
Add the following to the constructor:

```diff
 def __init__(self, endpoint: str, api_key: str, model: str, name: str, max_tokens: int, api_version: str = None) -> None:
     self.name = name
     self.model = model
     self.api_key = api_key
     self.endpoint = endpoint
     self.max_tokens = max_tokens
     self.api_version = api_version
+    self.transcription_model = model  # or add a new parameter if a different model is needed for transcription
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def create_transcript(self, input):
    """Generate a audio transcript from a user query."""
    if not os.path.isfile(input):
        raise FileNotFoundError(f"The file {input} does not exist.")
    # with open(input, 'rb') as audio_file:
    #     audio_data = audio_file.read()
    transcription = self.aclient.transcription(
        model=self.transcription_model,
        file=Path(input),
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_retries=self.MAX_RETRIES,
    )
    return transcription

def __init__(self, endpoint: str, api_key: str, model: str, name: str, max_tokens: int, api_version: str = None) -> None:
    self.name = name
    self.model = model
    self.api_key = api_key
    self.endpoint = endpoint
    self.max_tokens = max_tokens
    self.api_version = api_version
    self.transcription_model = model  # or add a new parameter if a different model is needed for transcription
```
```yaml
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GRAPHISTRY_USERNAME: ${{ secrets.GRAPHISTRY_USERNAME }}
GRAPHISTRY_PASSWORD: ${{ secrets.GRAPHISTRY_PASSWORD }}
EMBEDDING_PROVIDER: "gemini"
EMBEDDING_API_KEY: ${{ secrets.GEMINI_API_KEY }}
EMBEDDING_MODEL: "gemini/text-embedding-004"
EMBEDDING_ENDPOINT: "https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004"
EMBEDDING_API_VERSION: "v1beta"
EMBEDDING_DIMENSIONS: 768
EMBEDDING_MAX_TOKENS: 8076
LLM_PROVIDER: "gemini"
LLM_API_KEY: ${{ secrets.GEMINI_API_KEY }}
LLM_MODEL: "gemini/gemini-1.5-flash"
LLM_ENDPOINT: "https://generativelanguage.googleapis.com/"
LLM_API_VERSION: "v1beta"
```
Secret Mapping Mismatch in Reusable Workflow:
The secrets provided here—such as EMBEDDING_PROVIDER, EMBEDDING_API_KEY, EMBEDDING_MODEL, EMBEDDING_ENDPOINT, EMBEDDING_API_VERSION, EMBEDDING_DIMENSIONS, EMBEDDING_MAX_TOKENS, LLM_PROVIDER, LLM_MODEL, and LLM_ENDPOINT—are not defined as inputs in the referenced reusable workflow (./.github/workflows/reusable_python_example.yml), as noted by the static analysis hints. This mismatch may lead to runtime failures if these values are expected by the reusable workflow.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-postgres-pvc
```
Quote templated expression to resolve YAML syntax error.
The metadata name on line 4 is defined as:

```yaml
name: {{ .Release.Name }}-postgres-pvc
```

This can trigger a YAML parsing error. Please enclose the templated expression in quotes:

```diff
- name: {{ .Release.Name }}-postgres-pvc
+ name: "{{ .Release.Name }}-postgres-pvc"
```

📝 Committable suggestion

```yaml
name: "{{ .Release.Name }}-postgres-pvc"
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
```yaml
name: {{ .Release.Name }}-cognee
labels:
  app: {{ .Release.Name }}-cognee
```
Quote templated values in metadata to prevent YAML syntax issues.
The lines defining the service name and its label are:
To avoid YAML parsing errors, please enclose these template expressions in quotes:
```diff
- name: {{ .Release.Name }}-cognee
+ name: "{{ .Release.Name }}-cognee"
- app: {{ .Release.Name }}-cognee
+ app: "{{ .Release.Name }}-cognee"
```

📝 Committable suggestion

```yaml
name: "{{ .Release.Name }}-cognee"
labels:
  app: "{{ .Release.Name }}-cognee"
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
```yaml
name: {{ .Release.Name }}-postgres
labels:
  app: {{ .Release.Name }}-postgres
```
Quote templated values in metadata to resolve YAML syntax errors.
The metadata fields on lines 4 and 6 use unquoted template expressions:
Please update them by enclosing the expressions in quotes:
```diff
- name: {{ .Release.Name }}-postgres
+ name: "{{ .Release.Name }}-postgres"
- app: {{ .Release.Name }}-postgres
+ app: "{{ .Release.Name }}-postgres"
```

📝 Committable suggestion

```yaml
name: "{{ .Release.Name }}-postgres"
labels:
  app: "{{ .Release.Name }}-postgres"
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 4-4: syntax error: expected , but found ''
(syntax)
```dockerfile
RUN apt-get update && apt-get install

RUN apt-get install -y \
    gcc \
    libpq-dev
```
🛠️ Refactor suggestion
Critical: Merge Update and Install Commands
The Dockerfile contains an incomplete package installation command on line 11 (RUN apt-get update && apt-get install) followed by a full installation on line 13. This may lead to build failures. It is advisable to combine these steps to ensure a robust build process. For example:
```diff
-RUN apt-get update && apt-get install
-
-RUN apt-get install -y \
-    gcc \
-    libpq-dev
+RUN apt-get update && apt-get install -y gcc libpq-dev
```

📝 Committable suggestion

```dockerfile
RUN apt-get update && apt-get install -y gcc libpq-dev
```
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
```python
# audio_data = audio_file.read()

transcription = self.aclient.transcription(
    model=self.transcription_model,
```
Should be self.model here I think.
Actionable comments posted: 2
♻️ Duplicate comments (1)
cognee/infrastructure/llm/ollama/adapter.py (1)
54-72: ⚠️ Potential issue: Initialize missing `transcription_model` attribute.

The `transcription_model` attribute used in `create_transcript` is not initialized in the constructor. Add the following to the constructor:

```diff
 def __init__(self, endpoint: str, api_key: str, model: str, name: str, max_tokens: int, api_version: str = None) -> None:
     self.name = name
     self.model = model
     self.api_key = api_key
     self.endpoint = endpoint
     self.max_tokens = max_tokens
     self.api_version = api_version
+    self.transcription_model = model  # or add a new parameter if a different model is needed for transcription
```

Also, add type hints to the `input` parameter:

```diff
-def create_transcript(self, input):
+def create_transcript(self, input: str | Path):
```

Additionally, remove the commented-out code (lines 60-61) as it is not being used.
🧹 Nitpick comments (2)
cognee/infrastructure/llm/ollama/adapter.py (2)
14-16: Add documentation for class variables.
Add docstrings to the class variables to explain their purpose.

```diff
 api_version: str
+"""The version of the API to use for requests."""
 MAX_RETRIES = 5
+"""Maximum number of retries for API calls."""
```
7-9: Add import organization.
Consider organizing imports according to PEP 8 conventions (standard library, third-party, local application).

```diff
-from typing import Type
-from pydantic import BaseModel
-import instructor
-from cognee.infrastructure.llm.llm_interface import LLMInterface
-from cognee.infrastructure.llm.config import get_llm_config
-from openai import OpenAI
-import base64
-from pathlib import Path
-import os
+import base64
+import os
+from pathlib import Path
+from typing import Type
+
+from openai import OpenAI
+from pydantic import BaseModel
+import instructor
+
+from cognee.infrastructure.llm.llm_interface import LLMInterface
+from cognee.infrastructure.llm.config import get_llm_config
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- cognee/infrastructure/llm/ollama/adapter.py (2 hunks)
🧰 Additional context used
🪛 GitHub Actions: lint | ruff format
cognee/infrastructure/llm/ollama/adapter.py
[error] 1-1: Ruff formatting check failed. 1 file would be reformatted. Please run 'ruff format' to fix code style issues in this file.
⏰ Context from checks skipped due to timeout of 90000ms (31)
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-13
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: lint (ubuntu-latest, 3.11.x)
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: lint (ubuntu-latest, 3.10.x)
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: test
- GitHub Check: docker-compose-test
- GitHub Check: run_simple_example_test
- GitHub Check: Build Cognee Backend Docker App Image
```python
self.api_key = api_key
self.endpoint = endpoint
self.max_tokens = max_tokens
self.api_version= api_version
```
Fix formatting issue.
Add a space after the equals sign.
```diff
- self.api_version= api_version
+ self.api_version = api_version
```

📝 Committable suggestion

```python
self.api_version = api_version
```
```python
def transcribe_image(self, input) -> BaseModel:
    with open(input, "rb") as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

    return self.aclient.completion(
        model=self.model,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "What’s in this image?",
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{encoded_image}",
                        },
                    },
                ],
            }
        ],
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_tokens=300,
        max_retries=self.MAX_RETRIES,
    )
```
🛠️ Refactor suggestion
Improve the transcribe_image method.
The method needs several improvements:
- Add type hints to the `input` parameter
- Add a file existence check similar to `create_transcript`
- Use the instance's `max_tokens` instead of hardcoding 300
- Consider parameterizing the prompt text
```diff
-def transcribe_image(self, input) -> BaseModel:
+def transcribe_image(self, input: str | Path) -> BaseModel:
+    """Generate a description of an image.
+
+    Args:
+        input: Path to the image file
+
+    Returns:
+        BaseModel: The model's description of the image
+
+    Raises:
+        FileNotFoundError: If the image file does not exist
+    """
+    if not os.path.isfile(input):
+        raise FileNotFoundError(f"The file {input} does not exist.")
+
     with open(input, "rb") as image_file:
         encoded_image = base64.b64encode(image_file.read()).decode("utf-8")
     return self.aclient.completion(
         model=self.model,
         messages=[
             {
                 "role": "user",
                 "content": [
                     {
                         "type": "text",
-                        "text": "What's in this image?",
+                        "text": "What's in this image?",  # Consider making this a parameter
                     },
                     {
                         "type": "image_url",
                         "image_url": {
                             "url": f"data:image/jpeg;base64,{encoded_image}",
                         },
                     },
                 ],
             }
         ],
         api_key=self.api_key,
         api_base=self.endpoint,
         api_version=self.api_version,
-        max_tokens=300,
+        max_tokens=self.max_tokens,
         max_retries=self.MAX_RETRIES,
     )
```

📝 Committable suggestion
```python
def transcribe_image(self, input: str | Path) -> BaseModel:
    """Generate a description of an image.

    Args:
        input: Path to the image file

    Returns:
        BaseModel: The model's description of the image

    Raises:
        FileNotFoundError: If the image file does not exist
    """
    if not os.path.isfile(input):
        raise FileNotFoundError(f"The file {input} does not exist.")

    with open(input, "rb") as image_file:
        encoded_image = base64.b64encode(image_file.read()).decode("utf-8")

    return self.aclient.completion(
        model=self.model,
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "What's in this image?",  # Consider making this a parameter
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{encoded_image}",
                        },
                    },
                ],
            }
        ],
        api_key=self.api_key,
        api_base=self.endpoint,
        api_version=self.api_version,
        max_tokens=self.max_tokens,
        max_retries=self.MAX_RETRIES,
    )
```
…to_core # Conflicts: # cognee/infrastructure/llm/ollama/adapter.py
Description
DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin
Summary by CodeRabbit
New Features
Chores
Documentation