Conversation

@hajdul88
Collaborator

@hajdul88 hajdul88 commented Feb 19, 2025

Description

This PR contains eval framework changes required for the autooptimizer integration.

DCO Affirmation

I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin

Summary by CodeRabbit

  • New Features

    • Enhanced answer generation now returns structured answer details.
    • Search functionality accepts configurable prompt inputs.
    • Option to generate a metrics dashboard from evaluations.
    • Corpus building tasks now support adjustable chunk settings for greater flexibility.
    • New task retrieval functionality allows for flexible task configuration.
    • Introduced new methods for creating and managing metrics dashboards.
  • Refactor/Chore

    • Streamlined API signatures and reorganized module interfaces for better consistency.
    • Updated import paths to reflect new module structure.
  • Tests

    • Updated test scenarios to align with new configurations and parameter adjustments.

@coderabbitai
Contributor

coderabbitai bot commented Feb 19, 2025

Walkthrough

This update refactors several modules, moving import paths from the old evals namespace to the new cognee namespace, adjusting function signatures, and extending functionality. Notable changes include new parameters (e.g., system_prompt, chunk_size, chunker) for configurability, modified return types (e.g., from None to List[dict]), and the removal of obsolete dashboard logic from the evaluation module. Test and workflow files have been updated accordingly.
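
As a rough sketch of the signature pattern described above, applied to the QA entry point (the default value and function body here are assumptions, not the PR's exact code):

from typing import List

async def run_question_answering(
    params: dict,
    system_prompt: str = "answer_simple_question.txt",  # new parameter; default assumed
) -> List[dict]:  # previously returned None
    answers: List[dict] = []
    # ... generate one answer record per question via the configured retriever ...
    return answers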

Changes

Each entry lists the affected file(s), then the change summary:

  • cognee/eval_framework/answer_generation/run_question_answering_module.py, cognee/eval_framework/answer_generation/answer_generation_executor.py: Updated QA function signatures to add system_prompt/system_prompt_path parameters; changed return types; enhanced logging and lambda functions.
  • cognee/eval_framework/benchmark_adapters/*.py: Updated import paths from evals.eval_framework.benchmark_adapters to cognee.eval_framework.benchmark_adapters; adjusted return types in the dummy adapter.
  • cognee/eval_framework/corpus_builder/*.py, cognee/eval_framework/corpus_builder/task_getters/*.py: Updated corpus builder and task getter methods to accept the new chunk_size and chunker parameters; updated import paths; introduced the new DefaultTaskGetter class.
  • cognee/eval_framework/evaluation/*.py, cognee/eval_framework/run_eval.py: Updated evaluation module signatures (return types and imports); removed dashboard generation from run_evaluation_module.py and added conditional dashboard generation in run_eval.py.
  • cognee/api/v1/cognify/cognify_v2.py, cognee/api/v1/search/search_v2.py: Extended API function signatures with additional parameters (chunk_size, chunker, system_prompt_path); updated import paths (see the signature sketch after this list).
  • cognee/modules/retrieval/base_retriever.py, cognee/modules/search/methods/search.py: Removed the as_search method; updated search functions to include the new system_prompt_path parameter.
  • cognee/tests/unit/eval_framework/*: Updated test files to reflect refactored imports and new function signatures; adjusted mocks and assertions accordingly.
  • .github/workflows/test_eval_framework.yml: Updated the example-location parameter to reference the new path under cognee.
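
A rough sketch of how the extended API signatures in cognify_v2.py and search_v2.py might look (parameter names follow the summaries above; defaults, types, and any surrounding arguments are assumptions):

from typing import Optional

# cognee/api/v1/cognify/cognify_v2.py (sketch)
async def cognify(
    datasets: Optional[list] = None,
    chunk_size: int = 1024,  # new: adjustable chunk size; default assumed
    chunker=None,            # new: pluggable chunker implementation
):
    ...

# cognee/api/v1/search/search_v2.py (sketch)
async def search(
    query_text: str,
    system_prompt_path: str = "answer_simple_question.txt",  # new; default assumed
):
    ...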

Sequence Diagram(s)

sequenceDiagram
    participant Main as main()
    participant Corpus as run_corpus_builder
    participant QA as run_question_answering
    participant Eval as run_evaluation
    participant Dashboard as generate_metrics_dashboard

    Main->>Corpus: run_corpus_builder(params, chunk_size, chunker)
    Main->>QA: run_question_answering(params, system_prompt)
    Main->>Eval: run_evaluation(params)
    alt Dashboard requested
        Main->>Dashboard: generate_metrics_dashboard(json_data, output_file, benchmark)
    end
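
In code, the diagram corresponds to roughly the following orchestration in run_eval.py (a sketch: module paths are inferred from the file list above, and the exact parameter plumbing is an assumption):

import asyncio

from cognee.eval_framework.corpus_builder.run_corpus_builder import run_corpus_builder
from cognee.eval_framework.answer_generation.run_question_answering_module import run_question_answering
from cognee.eval_framework.evaluation.run_evaluation_module import run_evaluation
from cognee.eval_framework.metrics_dashboard import generate_metrics_dashboard

async def main(params: dict):
    await run_corpus_builder(params, chunk_size=params["chunk_size"], chunker=params["chunker"])
    await run_question_answering(params, system_prompt=params["qa_system_prompt"])
    await run_evaluation(params)
    if params.get("dashboard"):  # dashboard generation is conditional here, not in run_evaluation
        generate_metrics_dashboard(
            json_data=params["metrics_path"],
            output_file=params["dashboard_path"],
            benchmark=params["benchmark"],
        )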

Possibly related PRs

  • test: answer generation [COG-1234] #569: That PR adds tests for question_answering_non_parallel, the same function this PR modifies, so its test expectations cover the new parameters introduced here.
  • Feat/cog 1331 modal run eval #576: That PR also adds parameters to the evaluation pipeline and overlaps with this PR's changes to run_question_answering, which gains a new parameter and a changed return type.

Suggested reviewers

  • Vasilije1990
  • alekszievr

Poem

O hopping through lines of code I roam,
Updating paths to bring our modules home,
Added prompts with care and grace,
Chunking data at a rapid pace,
My whiskers twitch with each new tweak,
A bunny's joy in every function we seek!
🐇💻


📜 Recent review details

📥 Commits

Reviewing files that changed from the base of the PR and between 93d2953 and 1ba190e.

📒 Files selected for processing (1)
  • cognee/api/v1/cognify/cognify_v2.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cognee/api/v1/cognify/cognify_v2.py

@hajdul88 hajdul88 marked this pull request as draft February 19, 2025 15:00
@hajdul88 hajdul88 self-assigned this Feb 19, 2025
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
cognee/eval_framework/evaluation/run_evaluation_module.py (1)

31-54: Add error handling around the metrics variable.

While the return type change is good, consider adding error handling around the metrics variable to ensure it is not None or empty before returning.

 metrics = await evaluator.execute(
     answers=answers, evaluator_metrics=params["evaluation_metrics"]
 )
+if not metrics:
+    logging.warning("No metrics were generated during evaluation")
+    return []
 with open(params["metrics_path"], "w", encoding="utf-8") as f:
     json.dump(metrics, f, ensure_ascii=False, indent=4)
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 2a167fa and 5cac85b.

📒 Files selected for processing (16)
  • cognee/eval_framework/answer_generation/run_question_answering_module.py (3 hunks)
  • cognee/eval_framework/benchmark_adapters/benchmark_adapters.py (1 hunks)
  • cognee/eval_framework/benchmark_adapters/dummy_adapter.py (1 hunks)
  • cognee/eval_framework/benchmark_adapters/hotpot_qa_adapter.py (1 hunks)
  • cognee/eval_framework/benchmark_adapters/musique_adapter.py (1 hunks)
  • cognee/eval_framework/benchmark_adapters/twowikimultihop_adapter.py (1 hunks)
  • cognee/eval_framework/corpus_builder/corpus_builder_executor.py (1 hunks)
  • cognee/eval_framework/corpus_builder/run_corpus_builder.py (3 hunks)
  • cognee/eval_framework/corpus_builder/task_getters/default_task_getter.py (1 hunks)
  • cognee/eval_framework/corpus_builder/task_getters/task_getters.py (1 hunks)
  • cognee/eval_framework/evaluation/deep_eval_adapter.py (1 hunks)
  • cognee/eval_framework/evaluation/evaluation_executor.py (1 hunks)
  • cognee/eval_framework/evaluation/evaluator_adapters.py (1 hunks)
  • cognee/eval_framework/evaluation/run_evaluation_module.py (3 hunks)
  • cognee/eval_framework/run_eval.py (2 hunks)
  • cognee/eval_framework/tests/unit/benchmark_adapters_test.py (1 hunks)
✅ Files skipped from review due to trivial changes (12)
  • cognee/eval_framework/corpus_builder/task_getters/task_getters.py
  • cognee/eval_framework/evaluation/evaluator_adapters.py
  • cognee/eval_framework/benchmark_adapters/musique_adapter.py
  • cognee/eval_framework/benchmark_adapters/hotpot_qa_adapter.py
  • cognee/eval_framework/benchmark_adapters/benchmark_adapters.py
  • cognee/eval_framework/evaluation/evaluation_executor.py
  • cognee/eval_framework/benchmark_adapters/twowikimultihop_adapter.py
  • cognee/eval_framework/corpus_builder/task_getters/default_task_getter.py
  • cognee/eval_framework/benchmark_adapters/dummy_adapter.py
  • cognee/eval_framework/corpus_builder/corpus_builder_executor.py
  • cognee/eval_framework/evaluation/deep_eval_adapter.py
  • cognee/eval_framework/tests/unit/benchmark_adapters_test.py
🔇 Additional comments (7)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (2)

3-5: LGTM! Import changes align with the new return type and namespace.

The addition of List type and updated import path for CorpusBuilderExecutor reflect the namespace change from evals to cognee.


31-31: LGTM! Return type change improves function usability.

The function now returns the questions list instead of None, which is more useful for the evaluation pipeline and aligns with the return type changes in other modules.

Also applies to: 48-48

cognee/eval_framework/evaluation/run_evaluation_module.py (1)

3-4: LGTM! Import changes align with the new return type and namespace.

The addition of List type and updated import path for EvaluationExecutor reflect the namespace change from evals to cognee.

cognee/eval_framework/answer_generation/run_question_answering_module.py (2)

3-4: LGTM! Import changes align with the new return type and namespace.

The addition of List type and updated import path for answer generation modules reflect the namespace change from evals to cognee.


34-34: LGTM! Return type change improves function usability.

The function now returns the answers list instead of None, which is more useful for the evaluation pipeline and aligns with the return type changes in other modules.

Also applies to: 57-57

cognee/eval_framework/run_eval.py (2)

4-4: LGTM! Import paths updated consistently.

The import statements have been properly updated to reflect the new cognee namespace, and the new import for metrics dashboard generation is appropriately placed.

Also applies to: 6-11


32-33: ✅ Verification successful

Verify the metrics flow between evaluation and dashboard generation.

Since the dashboard generation has been moved from run_evaluation_module.py to this file, please verify:

  1. That run_evaluation no longer attempts to generate the dashboard
  2. That the metrics data flows correctly between the evaluation and dashboard generation

🏁 Script executed:

#!/bin/bash
# Description: Verify the metrics flow and dashboard generation logic

# Check if run_evaluation still contains dashboard generation code
echo "Checking run_evaluation_module.py for dashboard generation..."
rg -A 5 "generate_metrics_dashboard" cognee/eval_framework/evaluation/run_evaluation_module.py

# Check the metrics path usage
echo "Checking metrics path usage..."
rg "metrics_path" cognee/eval_framework/

Length of output: 720


Dashboard Integration Verified – No Further Changes Required

The verification confirms that:

  • The dashboard generation code (i.e., any calls to “generate_metrics_dashboard”) has been successfully removed from run_evaluation_module.py.
  • The metrics_path is correctly defined in eval_config.py, passed via eval_params, and used appropriately in both run_evaluation_module.py and run_eval.py.

Everything appears to be in order with the metrics flow and dashboard integration.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (4)
cognee/eval_framework/metrics_dashboard.py (4)

7-22: Consider merging multiple metrics into a combined subplot for clarity.

If multiple metrics grow large in number, displaying individual separate histograms may get unwieldy. Combining them into subplots or a single figure with different traces can improve readability and reduce clutter.
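
One way to realize this suggestion with Plotly (a sketch; the metric-name-to-scores mapping is an assumed data shape, not the module's actual one):

from plotly.subplots import make_subplots
import plotly.graph_objects as go

def combined_metric_histograms(metrics: dict) -> go.Figure:
    # metrics maps metric name -> list of scores, e.g. {"EM": [...], "F1": [...]}
    fig = make_subplots(rows=1, cols=len(metrics), subplot_titles=list(metrics))
    for i, (name, scores) in enumerate(metrics.items(), start=1):
        fig.add_trace(go.Histogram(x=scores, name=name), row=1, col=i)
    fig.update_layout(showlegend=False, title_text="Metric score distributions")
    return fig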


25-50: Use a consistent color scheme for confidence interval bars.

Currently, the distribution histograms and the confidence bar charts may have differing colors, making them appear disjoint. Adopting a unified color scheme or legend can provide a more cohesive visual theme.


98-130: Include meta tags for improved HTML readability and responsiveness.

Adding <meta charset="UTF-8"> and <meta name="viewport" content="width=device-width, initial-scale=1"> inside <head> can improve accessibility and ensure the page correctly adapts for mobile devices.
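
In the dashboard's HTML template, that could look like the following (a sketch; the HTML_HEAD variable name is hypothetical):

HTML_HEAD = """<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Metrics Dashboard</title>
</head>"""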


133-172: Add error handling or validation for JSON parsing.

Currently, the code assumes that both JSON files contain valid fields. In case of missing fields or malformed JSON, execution will fail abruptly. Consider gracefully handling such scenarios by validating or wrapping JSON operations in a try/except block.
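
A defensive loading pattern along these lines could address this (a sketch; the load_metrics helper and the "metrics" field are assumptions about the file format):

import json
import logging

def load_metrics(path: str) -> list:
    try:
        with open(path, "r", encoding="utf-8") as f:
            data = json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        logging.error("Could not read metrics file %s: %s", path, e)
        return []
    # Keep only entries that carry the fields the dashboard plots.
    return [item for item in data if "metrics" in item]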

📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between cead5c7 and cccbd7d.

📒 Files selected for processing (6)
  • cognee/eval_framework/evaluation/run_evaluation_module.py (1 hunks)
  • cognee/eval_framework/metrics_dashboard.py (1 hunks)
  • cognee/eval_framework/modal_run_eval.py (1 hunks)
  • cognee/tests/unit/eval_framework/dashboard_test.py (1 hunks)
  • cognee/tests/unit/eval_framework/deepeval_adapter_test.py (1 hunks)
  • cognee/tests/unit/eval_framework/metrics_test.py (1 hunks)
✅ Files skipped from review due to trivial changes (2)
  • cognee/eval_framework/evaluation/run_evaluation_module.py
  • cognee/eval_framework/modal_run_eval.py
🚧 Files skipped from review as they are similar to previous changes (3)
  • cognee/tests/unit/eval_framework/dashboard_test.py
  • cognee/tests/unit/eval_framework/metrics_test.py
  • cognee/tests/unit/eval_framework/deepeval_adapter_test.py

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
cognee/eval_framework/evaluation/run_evaluation_module.py (1)

5-5: Remove unused import.

create_dashboard is not referenced anywhere in this file and can be safely removed for clarity.

-from cognee.eval_framework.analysis.dashboard_generator import create_dashboard
📜 Review details

📥 Commits

Reviewing files that changed from the base of the PR and between 0d3959c and 64be044.

📒 Files selected for processing (2)
  • cognee/eval_framework/evaluation/run_evaluation_module.py (1 hunks)
  • cognee/eval_framework/run_eval.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cognee/eval_framework/run_eval.py
🔇 Additional comments (1)
cognee/eval_framework/evaluation/run_evaluation_module.py (1)

3-3: Looks good.

No issues noted with the updated import path for EvaluationExecutor.

Contributor

@soobrosa soobrosa left a comment

LGTM, maybe we should keep PRs more agile.

@hajdul88
Collaborator Author

hajdul88 commented Mar 3, 2025

LGTM, maybe we should keep PRs more agile.

Agree. This was put on the side because priorities were different, which is why we left it open for two weeks.

@hajdul88 hajdul88 merged commit e3f3d49 into dev Mar 3, 2025
37 checks passed
@hajdul88 hajdul88 deleted the feature/cog-1312-integrating-evaluation-framework-into-dreamify branch March 3, 2025 18:55
