
Conversation


weikao (Contributor) commented Sep 4, 2025

Compatible private embedding model (OpenAI format)

Description

DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

Compatible private embedding model (OpenAI format)

pull-checklist bot commented Sep 4, 2025

Please make sure all the checkboxes are checked:

  • I have tested these changes locally.
  • I have reviewed the code changes.
  • I have added end-to-end and unit tests (if applicable).
  • I have updated the documentation and README.md file (if necessary).
  • I have removed unnecessary code and debug statements.
  • PR title is clear and follows the convention.
  • I have tagged reviewers or team members for feedback.


coderabbitai bot commented Sep 4, 2025

Walkthrough

Adds an "input" field mirroring the prompt to the JSON payload sent by OllamaEmbeddingEngine._get_embedding. No other logic, control flow, or public interfaces were changed.

Changes

Cohort / File(s): Embedding payload update — cognee/infrastructure/databases/vector/embeddings/OllamaEmbeddingEngine.py
Summary: In _get_embedding, the request payload now includes "input": prompt alongside the existing "model" and "prompt". No other modifications.
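For context, a minimal sketch of the payload change (field names are taken from the diff under review; the surrounding method is elided):

    # Payload built in _get_embedding before this PR:
    payload = {"model": self.model, "prompt": prompt}

    # After this PR: "input" mirrors the prompt so OpenAI-format servers,
    # which expect an "input" field, can also serve the request.
    payload = {"model": self.model, "prompt": prompt, "input": prompt}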

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

I twitch my whiskers, light and bright,
A tiny field joins in the night—
"input" hops beside the "prompt,"
Two carrots in one tidy stomp.
Embeddings nod, “All set to go!”
Thump-thump—onward, swift and so. 🥕✨


github-actions bot commented
Hello @weikao, thank you for submitting a PR! We will respond as soon as possible.

coderabbitai bot (Contributor) left a review comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
cognee/infrastructure/databases/vector/embeddings/OllamaEmbeddingEngine.py (2)

107-112: Harden HTTP handling and support both Ollama and OpenAI response formats.

Right now a non-2xx response is never surfaced as an HTTP error, and an OpenAI-style response raises a KeyError because its embedding sits under data[0] rather than at the top level. Add a status check and a dual-path parse.

Apply:

         async with aiohttp.ClientSession() as session:
             async with session.post(
                 self.endpoint, json=payload, headers=headers, timeout=60.0
             ) as response:
-                data = await response.json()
-                return data["embedding"]
+                response.raise_for_status()
+                data = await response.json()
+                # Ollama format: {"embedding": [...]}
+                if "embedding" in data:
+                    return data["embedding"]
+                # OpenAI format: {"data": [{"embedding": [...], "index": 0}], ...}
+                if isinstance(data.get("data"), list) and data["data"]:
+                    item = data["data"][0]
+                    if isinstance(item, dict) and "embedding" in item:
+                        return item["embedding"]
+                raise KeyError("No embedding found in response payload")
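For reference, sketches of the two response shapes this parse handles (values are illustrative; the OpenAI shape follows the published /v1/embeddings format):

    # Ollama format: embedding at the top level.
    ollama_response = {"embedding": [0.12, -0.08, 0.33]}

    # OpenAI format: embedding nested under data[0].
    openai_response = {
        "object": "list",
        "data": [{"object": "embedding", "index": 0, "embedding": [0.12, -0.08, 0.33]}],
        "model": "text-embedding-3-small",
        "usage": {"prompt_tokens": 3, "total_tokens": 3},
    }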

47-47: Wire MAX_RETRIES into the retry decorator
Change the decorator on line 92 to pass the constant, e.g.:

-    @embedding_sleep_and_retry_async()
+    @embedding_sleep_and_retry_async(max_retries=MAX_RETRIES)
     async def _get_embedding(self, prompt: str) -> List[float]:

This ensures the MAX_RETRIES constant is actually used.
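For illustration only: if embedding_sleep_and_retry_async does not already accept a max_retries argument, a minimal backoff decorator with that signature could look like the sketch below (hypothetical; the real decorator in the codebase may differ):

    import asyncio
    import functools

    def embedding_sleep_and_retry_async(max_retries: int = 5, base_delay: float = 1.0):
        """Retry an async callable with exponential backoff (illustrative sketch)."""
        def decorator(func):
            @functools.wraps(func)
            async def wrapper(*args, **kwargs):
                for attempt in range(max_retries):
                    try:
                        return await func(*args, **kwargs)
                    except Exception:
                        if attempt == max_retries - 1:
                            raise  # out of retries: surface the last error
                        await asyncio.sleep(base_delay * 2**attempt)
            return wrapper
        return decorator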

🧹 Nitpick comments (3)
cognee/infrastructure/databases/vector/embeddings/OllamaEmbeddingEngine.py (3)

97-101: Use only "input" for embeddings; drop "prompt" to avoid ambiguity.

Ollama and OpenAI embeddings expect "input". Keeping both "prompt" and "input" is redundant and may confuse proxy services or schema validators.

Apply:

-        payload = {
-            "model": self.model,
-            "prompt": prompt,
-            "input": prompt
-        }
+        payload = {
+            "model": self.model,
+            "input": prompt
+        }
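As background for this suggestion: Ollama's legacy /api/embeddings endpoint reads "prompt", while the newer /api/embed endpoint and the OpenAI /v1/embeddings API read "input", so the right field depends on which endpoint self.endpoint targets (the model name below is a placeholder):

    # Legacy Ollama endpoint (POST /api/embeddings):
    legacy_payload = {"model": "nomic-embed-text", "prompt": "some text"}

    # Newer Ollama endpoint (POST /api/embed) and OpenAI-compatible
    # servers (POST /v1/embeddings) both expect "input":
    modern_payload = {"model": "nomic-embed-text", "input": "some text"}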

89-90: Reuse a single aiohttp session per batch to cut connection overhead.

Each prompt opens a new session; reuse one session across the gather to reduce latency and socket churn.

Apply:

-        embeddings = await asyncio.gather(*[self._get_embedding(prompt) for prompt in text])
+        async with aiohttp.ClientSession() as session:
+            embeddings = await asyncio.gather(
+                *[self._get_embedding(prompt, session=session) for prompt in text]
+            )
         return embeddings
@@
-    async def _get_embedding(self, prompt: str) -> List[float]:
+    async def _get_embedding(self, prompt: str, session: Optional[aiohttp.ClientSession] = None) -> List[float]:
@@
-        async with aiohttp.ClientSession() as session:
-            async with session.post(
+        _session = session or aiohttp.ClientSession()
+        try:
+            async with _session.post(
                 self.endpoint, json=payload, headers=headers, timeout=60.0
             ) as response:
                 ...
+        finally:
+            if session is None:
+                await _session.close()

Also applies to: 92-93, 107-108
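Put together, a self-contained sketch of the shared-session pattern (standalone functions rather than methods, with endpoint and model as parameters; note that Optional must be imported from typing):

    import asyncio
    from typing import List, Optional

    import aiohttp

    async def _get_embedding(
        prompt: str,
        endpoint: str,
        model: str,
        session: Optional[aiohttp.ClientSession] = None,
    ) -> List[float]:
        # Reuse the caller's session when provided; otherwise open a private one.
        _session = session or aiohttp.ClientSession()
        try:
            async with _session.post(
                endpoint,
                json={"model": model, "input": prompt},
                timeout=aiohttp.ClientTimeout(total=60),
            ) as response:
                response.raise_for_status()
                data = await response.json()
                # Ollama format; see the dual-path parse above for OpenAI.
                return data["embedding"]
        finally:
            if session is None:
                await _session.close()

    async def embed_batch(prompts: List[str], endpoint: str, model: str) -> List[List[float]]:
        # One session shared across the whole batch cuts connection overhead.
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(
                *[_get_embedding(p, endpoint, model, session=session) for p in prompts]
            )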


64-68: Simplify env parsing; remove dead isinstance check.

os.getenv returns str; the isinstance branch never runs.

Apply:

-        enable_mocking = os.getenv("MOCK_EMBEDDING", "false")
-        if isinstance(enable_mocking, bool):
-            enable_mocking = str(enable_mocking).lower()
-        self.mock = enable_mocking in ("true", "1", "yes")
+        enable_mocking = os.getenv("MOCK_EMBEDDING", "false").lower()
+        self.mock = enable_mocking in ("true", "1", "yes")
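A quick illustration of the simplified parsing (runnable as-is):

    import os

    os.environ["MOCK_EMBEDDING"] = "TRUE"
    enable_mocking = os.getenv("MOCK_EMBEDDING", "false").lower()
    print(enable_mocking in ("true", "1", "yes"))  # True; .lower() makes the check case-insensitive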
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between 8b6aaff and ac8f30f.

📒 Files selected for processing (1)
  • cognee/infrastructure/databases/vector/embeddings/OllamaEmbeddingEngine.py (1 hunks)

Vasilije1990 (Contributor) commented

@weikao what is the purpose of this PR and what problem does it solve?

Vasilije1990 self-requested a review September 15, 2025 21:43
Vasilije1990 merged commit 3a073ca into topoteretes:main Sep 15, 2025
2 of 3 checks passed
