
Conversation

@lxobr
Collaborator

@lxobr lxobr commented Mar 31, 2025

Description

  • Added new graph creation prompts
  • Exposed graph creation prompts in .cognify via get_default_tasks
  • Exposed graph creation prompts in the eval framework

DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

@pull-checklist

pull-checklist bot commented Mar 31, 2025

Please make sure all the checkboxes are checked:

  • I have tested these changes locally.
  • I have reviewed the code changes.
  • I have added end-to-end and unit tests (if applicable).
  • I have updated the documentation and README.md file (if necessary).
  • I have removed unnecessary code and debug statements.
  • PR title is clear and follows the convention.
  • I have tagged reviewers or team members for feedback.

@coderabbitai
Contributor

coderabbitai bot commented Mar 31, 2025

Walkthrough

This pull request makes formatting and import improvements across several Python modules, notably introducing the Optional import from the typing module and reformatting function signatures for clarity. It modifies the control flow in the answer evaluation process to conditionally set the retrieval context and adjusts the configuration of the knowledge graph extraction logic to retrieve prompt paths dynamically. Additionally, several new text files have been added to the prompts directory, detailing guidelines for QA benchmarks and knowledge graph construction.

Changes

File(s): cognee/eval_framework/corpus_builder/run_corpus_builder.py, cognee/eval_framework/evaluation/deep_eval_adapter.py, cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py, cognee/tasks/graph/extract_graph_from_data.py, cognee/infrastructure/llm/config.py
Change summary: Added the Optional import; reformatted function signatures (parameters on separate lines) for clarity; modified the conditional assignment in evaluate_answers; updated the prompt generation logic to retrieve graph_prompt_path dynamically; introduced a new attribute in LLMConfig.

File(s): cognee/infrastructure/llm/prompts/answer_simple_question_benchmark2.txt, .../answer_simple_question_benchmark3.txt, .../answer_simple_question_benchmark4.txt, .../generate_graph_prompt_guided.txt, .../generate_graph_prompt_oneshot.txt, .../generate_graph_prompt_simple.txt, .../generate_graph_prompt_strict.txt
Change summary: Introduced several new text files that provide detailed guidelines for QA benchmark responses and structured approaches to knowledge graph construction.

Sequence Diagram(s)

sequenceDiagram
    participant C as Client
    participant D as DeepEvalAdapter.evaluate_answers
    participant A as Answer Dict
    C->>D: Call evaluate_answers(answer)
    D->>A: Check for "golden_context" key
    alt golden_context exists
        D->>A: Assign retrieval_context = [answer["retrieval_context"]]
    else Golden context absent
        D->>A: Assign retrieval_context = None
    end
    D->>C: Return evaluated result
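The conditional in the diagram above can be sketched in a few lines. This is a minimal stand-alone sketch: build_retrieval_context is an illustrative helper name, not the adapter's actual method, and the dict keys follow the PR's field names.

```python
from typing import List, Optional


def build_retrieval_context(answer: dict) -> Optional[List[str]]:
    # Only supply a retrieval context when a golden context exists, so
    # metrics that compare the two are not fed a one-sided input.
    if "golden_context" in answer:
        return [answer["retrieval_context"]]
    return None


answer = {"golden_context": "ctx", "retrieval_context": "retrieved passage"}
print(build_retrieval_context(answer))  # ['retrieved passage']
print(build_retrieval_context({"question": "q"}))  # None
```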
Loading
sequenceDiagram
    participant C as Client
    participant E as extract_content_graph Function
    participant L as LLMConfig (via get_llm_config)
    participant R as render_prompt
    C->>E: Call extract_content_graph(content)
    E->>L: Retrieve LLM configuration (graph_prompt_path)
    L-->>E: Return prompt path
    E->>R: Call render_prompt(prompt_path)
    R-->>E: Return rendered system prompt
    E->>C: Return extracted content graph
Loading

Suggested reviewers

  • hajdul88
  • dexters1

Poem

I'm a rabbit hopping along the code lane,
Watching changes bloom like a springtime refrain.
New prompts and functions, neat and precise,
Each line a little dance, each fix very nice!
With a twitch of my nose, I cheer these updates bright,
Bounding in joy through the evolving night!
🐇💻 Enjoy the smooth sprint under digital light!


@lxobr lxobr requested a review from hajdul88 March 31, 2025 13:39
@lxobr lxobr self-assigned this Mar 31, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (1)
cognee/eval_framework/corpus_builder/corpus_builder_executor.py (1)

42-69: 💡 Verification agent

🧩 Analysis chain

Verify task_getter implementation is updated

Ensure that the implementation of the task_getter function (which this code calls) has been updated to handle the new graph_prompt_path parameter correctly.


🏁 Script executed:

#!/bin/bash
# Find the task_getter implementations that might need updating
echo "Searching for task_getter implementation and related functions..."
rg "def get_default_tasks" -A 5 
rg "task_getter" -A 5 -B 5

Length of output: 12400


Action: Ensure All Task Getter Variants Properly Handle graph_prompt_path

It appears that CorpusBuilderExecutor.build_corpus now always calls the task getter with the new graph_prompt_path parameter:

tasks = await self.task_getter(
    chunk_size=chunk_size, chunker=chunker, graph_prompt_path=graph_prompt_path
)

However, our search indicates that some task getter implementations—such as the default ones (e.g., in get_default_tasks_by_indices.py and get_default_tasks in cognee/api/v1/cognify/cognify.py)—do not include graph_prompt_path in their signatures. Please verify that:

  • If using the "CascadeGraph" task getter: Its implementation (likely in get_cascade_graph_tasks.py) accepts and correctly applies the graph_prompt_path parameter.
  • For default or other task getter types: Either update their function signatures to accept the new parameter or explicitly ignore it (e.g., via **kwargs) so that passing graph_prompt_path does not lead to runtime errors.

Review the getter function selection in cognee/eval_framework/corpus_builder/task_getters/TaskGetters.py to ensure that every variant is compatible with this change.
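The **kwargs approach suggested above can be sketched as follows. Function names and task lists are illustrative only, not the actual cognee implementations: the point is that a getter which ignores graph_prompt_path absorbs it via **kwargs instead of raising TypeError, while a getter that uses it names the parameter explicitly.

```python
import asyncio


async def get_default_tasks(chunk_size=None, chunker=None, **kwargs):
    # graph_prompt_path (and any future keyword) lands in kwargs and is
    # safely ignored by getters that do not need it.
    return ["classify", "chunk", "extract_graph"]


async def get_cascade_graph_tasks(chunk_size=None, chunker=None, graph_prompt_path=None, **kwargs):
    # This variant actually consumes the new parameter.
    return [("extract_graph", graph_prompt_path or "generate_graph_prompt.txt")]


# Both calls succeed even though only one getter uses the new parameter.
tasks = asyncio.run(get_default_tasks(chunk_size=512, graph_prompt_path="generate_graph_prompt_strict.txt"))
print(tasks)  # ['classify', 'chunk', 'extract_graph']
```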

🧹 Nitpick comments (4)
cognee/infrastructure/llm/prompts/answer_simple_question_benchmark2.txt (1)

1-8: Style and Consistency Improvement

The guidelines are well-defined, but the repeated use of "For" at the start of several bullet points (lines 3–6) may be perceived as repetitive. Consider omitting "For" to enhance readability and stylistic variety. For example, the lines could be revised as follows:

- - For yes/no questions: answer with "yes" or "no".
- - For what/who/where questions: reply with a single word or brief phrase.
- - For when questions: return only the relevant date/time.
- - For how/why questions: use the briefest phrase.
+ - Yes/no questions: answer with "yes" or "no".
+ - What/who/where questions: reply with a single word or brief phrase.
+ - When questions: return only the relevant date/time.
+ - How/why questions: use the briefest phrase.
🧰 Tools
🪛 LanguageTool

[style] ~5-~5: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...y with a single word or brief phrase. - For when questions: return only the relevan...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)


[style] ~6-~6: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...: return only the relevant date/time. - For how/why questions: use the briefest phr...

(ENGLISH_WORD_REPEAT_BEGINNING_RULE)

cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1)

12-13: Consider extracting the default prompt path as a constant.

The default prompt file is hardcoded. For better maintainability, consider extracting this value as a module-level constant.

+DEFAULT_GRAPH_PROMPT_PATH = "generate_graph_prompt.txt"
+
 async def extract_content_graph(
     content: str, response_model: Type[BaseModel], graph_prompt_path: Optional[str] = None
 ):
     llm_client = get_llm_client()
 
-    prompt_path = graph_prompt_path or "generate_graph_prompt.txt"
+    prompt_path = graph_prompt_path or DEFAULT_GRAPH_PROMPT_PATH
     system_prompt = render_prompt(prompt_path, {})
cognee/infrastructure/llm/prompts/generate_graph_prompt_oneshot.txt (1)

1-151: Well-structured knowledge graph extraction prompt with comprehensive guidelines.

This is a well-organized prompt with clear sections covering node guidelines, property formatting, relationship definitions, and output requirements. The one-shot examples for each section provide excellent guidance for the language model.

A minor grammar correction: on line 83, add a comma after the year in the date: "September 4, 1998, and has a market cap..."

-> **One-Shot Example**:
-> **Input**: "Google was founded on September 4, 1998 and has a market cap of 800000000000."
+> **One-Shot Example**:
+> **Input**: "Google was founded on September 4, 1998, and has a market cap of 800000000000."
🧰 Tools
🪛 LanguageTool

[grammar] ~30-~30: Did you mean the adjective “useful”?
Context: ...ived directly from the text. - Always use full, canonical names. - Do not use in...

(THANK_FULL)


[style] ~83-~83: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...**: "Google was founded on September 4, 1998 and has a market cap of 800000000000." ...

(MISSING_COMMA_AFTER_YEAR)


[style] ~127-~127: ‘absolutely essential’ might be wordy. Consider a shorter alternative.
Context: ...y edges (e.g., "X is a concept") unless absolutely essential. ### 4.3 Inferred Facts - Rule: On...

(EN_WORDINESS_PREMIUM_ABSOLUTELY_ESSENTIAL)


[uncategorized] ~130-~130: Possible missing comma found.
Context: ...pported by the text, or those logically implied if they enhance clarity. - Do not a...

(AI_HYDRA_LEO_MISSING_COMMA)

cognee/infrastructure/llm/prompts/generate_graph_prompt_strict.txt (1)

1-88: Well-structured prompt for knowledge graph extraction

This prompt provides clear, structured instructions for extracting knowledge graphs from unstructured text. It effectively defines entity types, relationship handling, and important constraints for creating consistent graphs.

However, consider the following enhancements:

  1. Include more examples of relationship types beyond "acted_in" and "founded_by"
  2. Add guidelines for handling relative time expressions (e.g., "last year")
  3. Provide instructions for handling hierarchical relationships (e.g., "is_a", "subclass_of")
  4. Consider addressing uncertainty expressions or negations in text
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c72d16c and 296d896.

📒 Files selected for processing (13)
  • cognee/api/v1/cognify/cognify.py (2 hunks)
  • cognee/eval_framework/corpus_builder/corpus_builder_executor.py (1 hunks)
  • cognee/eval_framework/corpus_builder/run_corpus_builder.py (3 hunks)
  • cognee/eval_framework/evaluation/deep_eval_adapter.py (1 hunks)
  • cognee/infrastructure/llm/prompts/answer_simple_question_benchmark2.txt (1 hunks)
  • cognee/infrastructure/llm/prompts/answer_simple_question_benchmark3.txt (1 hunks)
  • cognee/infrastructure/llm/prompts/answer_simple_question_benchmark4.txt (1 hunks)
  • cognee/infrastructure/llm/prompts/generate_graph_prompt_guided.txt (1 hunks)
  • cognee/infrastructure/llm/prompts/generate_graph_prompt_oneshot.txt (1 hunks)
  • cognee/infrastructure/llm/prompts/generate_graph_prompt_simple.txt (1 hunks)
  • cognee/infrastructure/llm/prompts/generate_graph_prompt_strict.txt (1 hunks)
  • cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1 hunks)
  • cognee/tasks/graph/extract_graph_from_data.py (2 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (1)
cognee/modules/chunking/TextChunker.py (1)
  • TextChunker (11-78)

⏰ Context from checks skipped due to timeout of 90000ms (35)
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_eval_framework_test / test
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_networkx_metrics_test / test
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-15
  • GitHub Check: test
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-13
  • GitHub Check: Test on macos-13
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: chromadb test
  • GitHub Check: Test on macos-13
  • GitHub Check: test
  • GitHub Check: windows-latest
  • GitHub Check: Test cognee server start
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: run_simple_example_test
  • GitHub Check: lint (ubuntu-latest, 3.10.x)
  • GitHub Check: docker-compose-test
  • GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (20)
cognee/infrastructure/llm/prompts/answer_simple_question_benchmark4.txt (1)

1-15: Clear and Comprehensive Guidelines

The prompt outlines concise, step-by-step instructions in a logical order. The categorization of response types (minimalism, question-specific responses, formatting, and context-only) is well-defined, enhancing the clarity for the QA system. One minor suggestion is to verify that the phrasing and tone here remain consistently aligned with the other benchmark files in the project.

cognee/infrastructure/llm/prompts/answer_simple_question_benchmark3.txt (1)

1-9: Consistent and Concise Prompt

The atomic response system instructions are clear, concise, and formatted in a way that mirrors the expectations of the project. The separation of different response types is intuitive, ensuring that the system can generate minimalistic answers as intended.

cognee/eval_framework/evaluation/deep_eval_adapter.py (1)

36-38: Improved retrieval context handling in evaluation

This change allows for more flexible evaluation by conditionally including retrieval context only when the golden context is available, which should prevent incorrect evaluations when no context is provided.

cognee/infrastructure/llm/prompts/generate_graph_prompt_simple.txt (1)

1-28: Well-structured simple graph extraction prompt

This prompt provides clear and concise guidelines for knowledge graph construction with specific rules for node labeling, relationships, and data formats. The snake_case convention for edges and human-readable node IDs will promote consistency.

A few suggestions for potential improvements:

  • Consider adding examples of correctly formatted graph outputs
  • Include guidance on handling ambiguous entity references
cognee/infrastructure/llm/prompts/generate_graph_prompt_guided.txt (1)

1-78: Comprehensive guided approach to knowledge graph extraction

This detailed prompt provides excellent structure with clear sections covering all aspects of knowledge graph creation. The additional information about property formatting, relationship direction, and inferred facts will likely produce higher quality graphs compared to the simple prompt.

The examples provided for each guideline are particularly helpful, especially the visual representation of edge directionality in lines 61-62.

cognee/tasks/graph/extract_graph_from_data.py (4)

2-2: Updated imports to support optional typing

The addition of Optional to the imports is appropriate for the new parameter.


57-57: Added flexibility with optional graph prompt path parameter

This new parameter allows for customizing the graph extraction prompt, which integrates well with the newly added prompt files.


61-64: Updated extract_content_graph call to use custom prompt

The function call now correctly passes the graph_prompt_path parameter to extract_content_graph, allowing for customization of the extraction process.


53-58:

Details

✅ Verification successful

Verify the parameter is propagated throughout the dependency chain

While the parameter is correctly added and passed to extract_content_graph, let's verify that all dependent functions handle this parameter correctly.


🏁 Script executed:

#!/bin/bash
# Check if extract_content_graph function accepts the graph_prompt_path parameter
rg -A 5 "def extract_content_graph" --include "*.py"

Length of output: 496


🏁 Script executed:

#!/bin/bash
# Verify that extract_content_graph function accepts the graph_prompt_path parameter 
# by searching for its definition in python files using the correct glob flag.
rg -A 10 "def extract_content_graph" --glob '*.py'

Length of output: 1242


Parameter propagation verified in dependency chain

The graph_prompt_path parameter is correctly passed from extract_graph_from_data to the extract_content_graph function. In the file cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py, the parameter is defined as an optional argument and is used to establish the prompt path (defaulting to "generate_graph_prompt.txt" when not provided).

No further changes are needed.

🧰 Tools
🪛 Ruff (0.8.2)

56-56: Do not perform function call OntologyResolver in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)

cognee/eval_framework/corpus_builder/run_corpus_builder.py (3)

3-3: Clean import addition for Optional type.

The addition of Optional from the typing module properly supports the new parameter in the function signature.


36-42: LGTM! Well-structured function signature update.

The addition of the optional graph_prompt_path parameter with proper typing and default value maintains backward compatibility while enabling new functionality.


61-61: Parameter correctly propagated to build_corpus.

The new parameter is properly passed to the downstream method, ensuring the graph prompt path is available throughout the corpus building process.

cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (2)

1-1: Appropriate import addition for Optional type.

The import of Optional from typing is correctly added to support the new parameter's type annotation.


7-9: Function signature updated appropriately.

The extract_content_graph function signature is properly updated to include the optional graph_prompt_path parameter with correct typing.

cognee/api/v1/cognify/cognify.py (2)

114-114: Parameter addition is clean and consistent.

The addition of the optional graph_prompt_path parameter to the get_default_tasks function maintains the same pattern used for other optional parameters.


135-135: Correctly passes the parameter to the Task constructor.

The parameter is properly propagated to the extract_graph_from_data task, enabling custom graph prompts throughout the pipeline.

cognee/eval_framework/corpus_builder/corpus_builder_executor.py (4)

42-50: Parameter addition is consistent with design

The graph_prompt_path parameter has been correctly added with an appropriate Optional[str] type and default value of None, maintaining backward compatibility.


54-56: Parameter correctly forwarded to run_cognee

The graph_prompt_path parameter is properly passed from build_corpus to run_cognee, ensuring consistent parameter propagation.


59-61: Method signature updated correctly

The run_cognee method signature has been properly updated to include the new parameter with the same type and default value.


67-69: Parameter correctly forwarded to task_getter

The graph_prompt_path parameter is properly passed to the task_getter function, completing the parameter propagation chain.

@lxobr lxobr requested a review from borisarzentar April 2, 2025 07:49
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
cognee/tasks/graph/extract_graph_from_data.py (1)

2-2: Unused Import: Optional from typing
The imported Optional is not used anywhere in the file. To clean up the code and improve maintainability, consider removing Optional if it is not needed.

🧰 Tools
🪛 Ruff (0.8.2)

2-2: typing.Optional imported but unused

Remove unused import: typing.Optional

(F401)

cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1)

1-1: Remove unused import Optional.

The Optional import from typing is not being used in this file and should be removed.

-from typing import Type, Optional
+from typing import Type
🧰 Tools
🪛 Ruff (0.8.2)

1-1: typing.Optional imported but unused

Remove unused import: typing.Optional

(F401)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 296d896 and e0f7057.

📒 Files selected for processing (3)
  • cognee/infrastructure/llm/config.py (2 hunks)
  • cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1 hunks)
  • cognee/tasks/graph/extract_graph_from_data.py (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1)
cognee/infrastructure/llm/config.py (1)
  • get_llm_config (92-93)
🪛 Ruff (0.8.2)
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py

1-1: typing.Optional imported but unused

Remove unused import: typing.Optional

(F401)

cognee/tasks/graph/extract_graph_from_data.py

2-2: typing.Optional imported but unused

Remove unused import: typing.Optional

(F401)

⏰ Context from checks skipped due to timeout of 90000ms (33)
  • GitHub Check: Test on macos-15
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_eval_framework_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-13
  • GitHub Check: Test on macos-13
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_networkx_metrics_test / test
  • GitHub Check: test
  • GitHub Check: Test on macos-13
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: chromadb test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: windows-latest
  • GitHub Check: docker-compose-test
  • GitHub Check: Build Cognee Backend Docker App Image
  • GitHub Check: run_simple_example_test
🔇 Additional comments (4)
cognee/infrastructure/llm/config.py (2)

18-18: The addition of graph_prompt_path enhances configuration flexibility.

Adding this new attribute to the LLMConfig class allows for customizable graph generation prompts, aligning well with the PR's objective of adding graph creation prompts.


87-87: Good implementation: graph_prompt_path properly included in the serialized config.

The to_dict() method has been correctly updated to include the new attribute, ensuring it's accessible when the configuration is converted to a dictionary format.
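The shape of this change can be illustrated with a minimal stand-in. The real LLMConfig has many more fields and is a proper settings class; this sketch mirrors only the new attribute and its serialization, to show why to_dict() must include it.

```python
class LLMConfig:
    # Minimal stand-in for cognee's LLMConfig; only the new field is modeled.
    def __init__(self, graph_prompt_path: str = "generate_graph_prompt.txt"):
        self.graph_prompt_path = graph_prompt_path

    def to_dict(self) -> dict:
        # The new attribute must be serialized here, otherwise downstream
        # consumers reading the dict form of the config never see an override.
        return {"graph_prompt_path": self.graph_prompt_path}


config = LLMConfig(graph_prompt_path="generate_graph_prompt_oneshot.txt")
print(config.to_dict()["graph_prompt_path"])  # generate_graph_prompt_oneshot.txt
```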

cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (2)

5-5: LGTM! Import added for the configuration retrieval.

This import allows the function to access the LLM configuration, which is needed to get the customized graph prompt path.


10-13: Well-implemented dynamic prompt path integration.

This change improves flexibility by retrieving the prompt path from the configuration instead of hardcoding it. The refactoring allows for different graph generation prompts to be used based on configuration, which supports the PR's objective of adding graph creation prompts to the project.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (1)

3-3: Remove unused import Optional.

The Optional type from the typing module is imported but not used anywhere in this file. To maintain clean imports, please remove it.

-from typing import List, Optional
+from typing import List
🧰 Tools
🪛 Ruff (0.8.2)

3-3: typing.Optional imported but unused

Remove unused import: typing.Optional

(F401)

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e0f7057 and 7854830.

📒 Files selected for processing (1)
  • cognee/eval_framework/corpus_builder/run_corpus_builder.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
cognee/eval_framework/corpus_builder/run_corpus_builder.py

3-3: typing.Optional imported but unused

Remove unused import: typing.Optional

(F401)

⏰ Context from checks skipped due to timeout of 90000ms (33)
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: Test on macos-13
  • GitHub Check: run_simple_example_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: run_dynamic_steps_example_test / test
  • GitHub Check: Test on macos-15
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_eval_framework_test / test
  • GitHub Check: run_multimedia_example_test / test
  • GitHub Check: run_notebook_test / test
  • GitHub Check: run_networkx_metrics_test / test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: Test on macos-13
  • GitHub Check: test
  • GitHub Check: Test on macos-13
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: test
  • GitHub Check: chromadb test
  • GitHub Check: Test on ubuntu-22.04
  • GitHub Check: windows-latest
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: test
  • GitHub Check: docker-compose-test
  • GitHub Check: run_simple_example_test
  • GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (1)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (1)

36-41: LGTM! Good function signature formatting.

The improved formatting with parameters on separate lines enhances readability, especially for functions with multiple parameters. This change follows good Python style guidelines.

@borisarzentar borisarzentar merged commit 8207dc8 into dev Apr 3, 2025
41 of 42 checks passed
@borisarzentar borisarzentar deleted the feature/cog-1496-dreamify-add-graph-creation-prompt-hyperparameter-arxiv branch April 3, 2025 09:14
@coderabbitai coderabbitai bot mentioned this pull request Aug 27, 2025