
Conversation

dexters1 (Collaborator) commented Dec 17, 2024

Added ability to search by datasets for cognee users

Feature COG-912

Summary by CodeRabbit

  • New Features

    • Enhanced dataset handling in search, allowing datasets to be passed as a single dataset name or a list of names.
    • Updated user permissions logic to retrieve document IDs based on specified datasets.
  • Bug Fixes

    • Improved handling of dataset-related operations to ensure accuracy in document retrieval.
  • Documentation

    • Added comments to clarify new dataset handling in the search function.
  • Tests

    • Introduced tests for dataset management and user permissions, ensuring functionality aligns with updates.

dexters1 self-assigned this Dec 17, 2024

coderabbitai bot commented Dec 17, 2024

Walkthrough

The pull request introduces modifications across several files in the Cognee project, focusing on enhancing dataset management, search functionality, and user permissions. The changes primarily involve updating function signatures to support more flexible dataset handling, modifying the get_document_ids_for_user method to filter documents by datasets, and adjusting type declarations in payload data transfer objects. The modifications aim to provide more granular control over document and dataset interactions while maintaining the existing error handling and logging mechanisms.
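For orientation, here is a minimal end-to-end sketch of the dataset-scoped search this PR enables. The call shapes are taken from the updated test_pgvector.py quoted later in this review; the SearchType import path is an assumption.

import asyncio

import cognee
from cognee.api.v1.search import SearchType  # import path assumed

async def main():
    # Populate two datasets, mirroring the updated test setup
    await cognee.add(["Natural language processing text."], "natural_language")
    await cognee.add(["Quantum computers use qubits."], "quantum")
    await cognee.cognify(["quantum", "natural_language"])

    # New in this PR: restrict the search to specific datasets;
    # a bare string such as datasets="quantum" is also accepted.
    results = await cognee.search(SearchType.CHUNKS, query_text="qubits", datasets=["quantum"])
    print(results)

asyncio.run(main())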

Changes

  • cognee/api/v1/cognify/cognify_v2.py: Commented out update_status_lock in run_cognify_pipeline; moved the status check and logging outside the lock context
  • cognee/api/v1/cognify/routers/get_cognify_router.py: Updated graph_model type from Optional[BaseModel] to Optional[KnowledgeGraph]; added the KnowledgeGraph import
  • cognee/api/v1/search/search_v2.py: Added a datasets parameter to the search function, enabling dataset-specific searching
  • cognee/modules/data/models/__init__.py: Added an import for the DatasetData class
  • cognee/modules/users/permissions/methods/get_document_ids_for_user.py: Added a datasets parameter to filter document IDs by specific datasets
  • cognee/tests/test_pgvector.py: Updated test cases with multiple dataset names; added user retrieval and dataset-based document ID tests

Possibly related PRs

  • feat: improve API request and response models and docs #154: The changes in the cognify and run_cognify_pipeline functions in the main PR may relate to the modifications in the API request and response models, particularly since both involve handling data processing and status updates.
  • Cog 337 llama index support #186: The changes in the add function in cognee/api/v1/add/add_v2.py could be relevant as they also involve asynchronous operations and data handling, similar to the modifications made in the run_cognify_pipeline function.
  • Structured code summarization #375: The introduction of structured code summarization in the summarize_code function may connect with the changes in the cognify function, as both involve processing and summarizing data in a structured manner.

Poem

🐰 In the realm of data's grand design,
Datasets dance, documents align,
With filters sharp and queries bright,
Our search now leaps to greater height!
A rabbit's code, both swift and keen,
Brings clarity to what's unseen. 🔍


dexters1 and others added 5 commits December 17, 2024 11:52

  • Removed lock that prevented using multiple datasets in cognify (Fix COG-912)
  • Added test to verify getting documents related to datasets intended for search (Test COG-912)
  • Resolve issue with default value for graph model in cognify endpoint (Fix)
dexters1 marked this pull request as ready for review December 17, 2024 13:09
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 6

🔭 Outside diff range comments (1)
cognee/tests/test_pgvector.py (1)

Line range hint 92-96: Enhance search test coverage

While the current test verifies basic dataset filtering for CHUNKS search, consider:

  1. Adding dataset filtering tests for INSIGHTS and SUMMARIES search types
  2. Including negative test cases (e.g., searching in non-existent dataset)
  3. Testing with multiple datasets in the filter

Example additional test cases:

# Test search with non-existent dataset
search_results = await cognee.search(
    SearchType.CHUNKS,
    query_text=random_node_name,
    datasets=["non_existent_dataset"]
)
assert len(search_results) == 0, "Should return empty results for non-existent dataset"

# Test search with multiple datasets
search_results = await cognee.search(
    SearchType.CHUNKS,
    query_text=random_node_name,
    datasets=[dataset_name_1, dataset_name_2]
)
assert len(search_results) > 0, "Should return results from both datasets"
🧹 Nitpick comments (3)
cognee/modules/users/permissions/methods/get_document_ids_for_user.py (2)

24-24: Typo in variable name 'documnets_ids_in_dataset'

The variable documnets_ids_in_dataset contains a typo. It should be document_ids_in_dataset.

Apply this diff to fix the typo:

-documnets_ids_in_dataset = set()
+document_ids_in_dataset = set()

Make sure to update all references to this variable within the function.


46-46: Fix grammatical error in comment

There's a grammatical error in the comment. It should read: "If document is related to dataset, add it to return value"

Apply this diff to correct the comment:

-# If document is related to dataset added it to return value
+# If document is related to dataset, add it to return value
cognee/tests/test_pgvector.py (1)

48-49: Consider improving test data organization

The dataset setup is functional but could be better organized. Consider:

  1. Using constants for dataset names at the module level
  2. Moving test data to separate fixture files for better maintainability

The current implementation works well for testing multiple dataset functionality.

+ # At the top of the file
+ DATASET_NATURAL_LANGUAGE = "natural_language"
+ DATASET_QUANTUM = "quantum"

- dataset_name_1 = "natural_language"
- dataset_name_2 = "quantum"
+ dataset_name_1 = DATASET_NATURAL_LANGUAGE
+ dataset_name_2 = DATASET_QUANTUM

Also applies to: 54-54, 64-64, 66-66

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5b7c83f and 63c3dce.

📒 Files selected for processing (6)
  • cognee/api/v1/cognify/cognify_v2.py (1 hunks)
  • cognee/api/v1/cognify/routers/get_cognify_router.py (1 hunks)
  • cognee/api/v1/search/search_v2.py (3 hunks)
  • cognee/modules/data/models/__init__.py (1 hunks)
  • cognee/modules/users/permissions/methods/get_document_ids_for_user.py (2 hunks)
  • cognee/tests/test_pgvector.py (4 hunks)
🔇 Additional comments (2)
cognee/modules/data/models/__init__.py (1)

3-3: Addition of 'DatasetData' import is appropriate

Including DatasetData in the __init__.py file correctly exposes it for use in other parts of the application.
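Concretely, this enables the import relied upon by the permissions change:

from cognee.modules.data.models import DatasetData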

cognee/tests/test_pgvector.py (1)

7-7: LGTM: Import added for user permission testing

The import of get_default_user is appropriately placed and supports the new dataset permission testing functionality.

Comment on lines 23 to 49
if datasets:
    documnets_ids_in_dataset = set()
    # If datasets are specified filter out documents that aren't part of the specified datasets
    for dataset in datasets:
        # Find dataset id for dataset element
        dataset_id = (await session.scalars(
            select(Dataset.id)
            .where(
                Dataset.name == dataset,
                Dataset.owner_id == user_id,
            )
        )).one_or_none()

        # Check which documents are connected to this dataset
        for document_id in document_ids:
            data_id = (await session.scalars(
                select(DatasetData.data_id)
                .where(
                    DatasetData.dataset_id == dataset_id,
                    DatasetData.data_id == document_id,
                )
            )).one_or_none()

            # If document is related to dataset added it to return value
            if data_id:
                documnets_ids_in_dataset.add(document_id)
    return list(documnets_ids_in_dataset)

🛠️ Refactor suggestion

Potential performance issue due to multiple database queries inside loops

The current implementation executes database queries within nested loops, which can significantly degrade performance due to the large number of database calls.

Consider refactoring to reduce the number of queries by fetching data in bulk. Here's a suggested approach:

  • Retrieve all dataset IDs for the specified datasets in a single query.
  • Retrieve all document IDs associated with these datasets in a single query.
  • Intersect these document IDs with the user's document IDs.

Apply this refactor:

# Get dataset IDs for the specified datasets
dataset_ids = (await session.scalars(
    select(Dataset.id).where(
        Dataset.name.in_(datasets),
        Dataset.owner_id == user_id
    )
)).all()

# If no matching datasets are found, return an empty list
if not dataset_ids:
    return []

# Get all document IDs associated with these datasets
document_ids_in_datasets = (await session.scalars(
    select(DatasetData.data_id).where(
        DatasetData.dataset_id.in_(dataset_ids),
        DatasetData.data_id.in_(document_ids)
    )
)).all()

return list(document_ids_in_datasets)

This refactor minimizes the number of database queries and improves performance.
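If a single round trip is preferred, the two queries can also be collapsed into one join; a sketch under the same model assumptions (Dataset, DatasetData) with equivalent behavior:

# Join DatasetData to Dataset and apply the name, owner, and document-ID
# filters in a single WHERE clause.
document_ids_in_datasets = (await session.scalars(
    select(DatasetData.data_id)
    .join(Dataset, Dataset.id == DatasetData.dataset_id)
    .where(
        Dataset.name.in_(datasets),
        Dataset.owner_id == user_id,
        DatasetData.data_id.in_(document_ids),
    )
)).all()

return list(document_ids_in_datasets)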

Comment on lines +25 to +29
async def search(query_type: SearchType, query_text: str, user: User = None,
                 datasets: Union[list[str], str, None] = None) -> list:
    # We use lists from now on for datasets
    if isinstance(datasets, str):
        datasets = [datasets]

🛠️ Refactor suggestion

Improve type annotation and handling of 'datasets' parameter

The datasets parameter is annotated as Union[list[str], str, None]. To enhance type clarity and consistency, consider updating the type annotation and handling.

Apply this diff to improve the type annotation:

-from typing import Callable, Dict, Union
+from typing import Callable, Dict, Union, Optional, List

-async def search(query_type: SearchType, query_text: str, user: User = None,
-                 datasets: Union[list[str], str, None] = None) -> list:
+async def search(query_type: SearchType, query_text: str, user: User = None,
+                 datasets: Optional[Union[List[str], str]] = None) -> list:

     # We use lists from now on for datasets
     if isinstance(datasets, str):
         datasets = [datasets]

Alternatively, if you prefer to accept only lists:

-async def search(query_type: SearchType, query_text: str, user: User = None,
-                 datasets: Union[list[str], str, None] = None) -> list:
+async def search(query_type: SearchType, query_text: str, user: User = None,
+                 datasets: Optional[List[str]] = None) -> list:

     # Ensure datasets is a list
     if datasets is None:
         datasets = []

This approach simplifies type checking and makes the code more predictable.

Committable suggestion skipped: line range outside the PR's diff.
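Whichever annotation is chosen, the normalization itself is small enough to test in isolation; a hypothetical standalone helper mirroring the PR's behavior (None continues to mean "no dataset filter"):

from typing import List, Optional, Union

def normalize_datasets(datasets: Optional[Union[List[str], str]]) -> Optional[List[str]]:
    # Mirrors the isinstance() check in search(): a bare string becomes a
    # one-element list; None passes through as "no dataset filter".
    if isinstance(datasets, str):
        return [datasets]
    return datasets

assert normalize_datasets("quantum") == ["quantum"]
assert normalize_datasets(["a", "b"]) == ["a", "b"]
assert normalize_datasets(None) is None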

Comment on lines 70 to 81
# Test getting of documents for search per dataset
from cognee.modules.users.permissions.methods import get_document_ids_for_user
user = await get_default_user()
document_ids = await get_document_ids_for_user(user.id, [dataset_name_1])
assert len(document_ids) == 1, f"Number of expected documents doesn't match {len(document_ids)} != 1"

# Test getting of documents for search when no dataset is provided
from cognee.modules.users.permissions.methods import get_document_ids_for_user
user = await get_default_user()
document_ids = await get_document_ids_for_user(user.id)
assert len(document_ids) == 2, f"Number of expected documents doesn't match {len(document_ids)} != 2"


🛠️ Refactor suggestion

Remove duplicate import and improve test structure

The permission tests are comprehensive but have some issues:

  1. The get_document_ids_for_user import is duplicated (lines 71 and 77)
  2. The tests could be better structured as separate test functions
- # Test getting of documents for search per dataset
- from cognee.modules.users.permissions.methods import get_document_ids_for_user
- user = await get_default_user()
- document_ids = await get_document_ids_for_user(user.id, [dataset_name_1])
- assert len(document_ids) == 1, f"Number of expected documents doesn't match {len(document_ids)} != 1"

- # Test getting of documents for search when no dataset is provided
- from cognee.modules.users.permissions.methods import get_document_ids_for_user
- user = await get_default_user()
- document_ids = await get_document_ids_for_user(user.id)
- assert len(document_ids) == 2, f"Number of expected documents doesn't match {len(document_ids)} != 2"

+ async def test_get_documents_with_dataset_filter():
+     user = await get_default_user()
+     document_ids = await get_document_ids_for_user(user.id, [dataset_name_1])
+     assert len(document_ids) == 1, f"Number of expected documents doesn't match {len(document_ids)} != 1"

+ async def test_get_documents_without_dataset_filter():
+     user = await get_default_user()
+     document_ids = await get_document_ids_for_user(user.id)
+     assert len(document_ids) == 2, f"Number of expected documents doesn't match {len(document_ids)} != 2"

Committable suggestion skipped: line range outside the PR's diff.

Resolve typo in code

chore COG-912
coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
cognee/tests/test_pgvector.py (1)

70-80: 🛠️ Refactor suggestion

Restructure permission tests and remove duplicate import

The permission tests have the same issues as previously noted:

  1. The get_document_ids_for_user import is duplicated
  2. The tests should be structured as separate test functions for better maintainability
-# Test getting of documents for search per dataset
-from cognee.modules.users.permissions.methods import get_document_ids_for_user
-user = await get_default_user()
-document_ids = await get_document_ids_for_user(user.id, [dataset_name_1])
-assert len(document_ids) == 1, f"Number of expected documents doesn't match {len(document_ids)} != 1"

-# Test getting of documents for search when no dataset is provided
-from cognee.modules.users.permissions.methods import get_document_ids_for_user
-user = await get_default_user()
-document_ids = await get_document_ids_for_user(user.id)
-assert len(document_ids) == 2, f"Number of expected documents doesn't match {len(document_ids)} != 2"

+async def test_get_documents_with_dataset_filter():
+    """Test document retrieval when filtered by dataset"""
+    user = await get_default_user()
+    document_ids = await get_document_ids_for_user(user.id, [dataset_name_1])
+    assert len(document_ids) == 1, f"Expected 1 document, got {len(document_ids)}"

+async def test_get_documents_without_dataset_filter():
+    """Test document retrieval without dataset filter"""
+    user = await get_default_user()
+    document_ids = await get_document_ids_for_user(user.id)
+    assert len(document_ids) == 2, f"Expected 2 documents, got {len(document_ids)}"
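If the split is adopted, the functions also need an async-aware runner; a minimal sketch assuming pytest with the pytest-asyncio plugin (not confirmed as this repository's setup; the get_default_user import path is assumed):

import pytest

from cognee.modules.users.methods import get_default_user  # path assumed
from cognee.modules.users.permissions.methods import get_document_ids_for_user

@pytest.mark.asyncio
async def test_get_documents_with_dataset_filter():
    user = await get_default_user()
    document_ids = await get_document_ids_for_user(user.id, ["natural_language"])
    assert len(document_ids) == 1, f"Expected 1 document, got {len(document_ids)}"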
🧹 Nitpick comments (2)
cognee/tests/test_pgvector.py (2)

54-54: Add assertions to verify dataset operations

The dataset operations (add and cognify) lack assertions to verify their success. Consider adding checks to ensure the operations completed as expected.

 await cognee.add([explanation_file_path], dataset_name_1)
+assert await cognee.get_dataset_size(dataset_name_1) > 0, "Dataset 1 should not be empty"

 await cognee.add([text], dataset_name_2)
+assert await cognee.get_dataset_size(dataset_name_2) > 0, "Dataset 2 should not be empty"

 await cognee.cognify([dataset_name_2, dataset_name_1])
+assert await cognee.is_dataset_cognified(dataset_name_1), "Dataset 1 should be cognified"
+assert await cognee.is_dataset_cognified(dataset_name_2), "Dataset 2 should be cognified"

Also applies to: 64-64, 66-66


Line range hint 91-95: Enhance search test coverage

While the current test verifies basic search functionality with dataset filtering, consider adding:

  1. A test case for searching across multiple datasets
  2. More specific assertions about the search results content
 search_results = await cognee.search(SearchType.CHUNKS, query_text = random_node_name, datasets=[dataset_name_2])
-assert len(search_results) != 0, "The search results list is empty."
+assert len(search_results) != 0, "The search results list is empty."
+assert all("quantum" in result.lower() for result in search_results), "Search results should contain the query term"
+
+# Test search across multiple datasets
+multi_dataset_results = await cognee.search(
+    SearchType.CHUNKS,
+    query_text=random_node_name,
+    datasets=[dataset_name_1, dataset_name_2]
+)
+assert len(multi_dataset_results) >= len(search_results), "Multi-dataset search should return at least as many results"
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 63c3dce and 48825d0.

📒 Files selected for processing (2)
  • cognee/modules/users/permissions/methods/get_document_ids_for_user.py (2 hunks)
  • cognee/tests/test_pgvector.py (4 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • cognee/modules/users/permissions/methods/get_document_ids_for_user.py
🔇 Additional comments (1)
cognee/tests/test_pgvector.py (1)

7-7: LGTM: Clean import and clear dataset naming

The import is properly placed and the dataset names are descriptive and well-defined.

Also applies to: 48-49

dexters1 merged commit d639492 into dev Dec 17, 2024 (32 of 40 checks passed)
dexters1 deleted the COG-912-search-by-dataset branch December 17, 2024 13:29