fix: Fix security issue #1966
Conversation
…trievers to update the timestamp
…o delete-last-acessed
Delete last acessed
… in completion retriever and graph_completion retriever
feat: adding cleanup function and adding update_node_acess_timestamps…
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
## Description
This PR changes the permission test in the e2e tests to use pytest. It introduces:
- fixtures for the environment setup
- one event loop for all pytest tests
- mocking for `acreate_structured_output` answer generation (for search)
- asserts in the permission test (previously we only ran the example)

A sketch of this scaffolding follows below.

## Type of Change
- [x] Code refactoring

## Pre-submission Checklist
All items checked.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **New Features**
  - Entity model now includes description and metadata fields for richer entity information and indexing.
- **Tests**
  - Expanded and restructured permission tests covering multi-tenant and role-based access flows; improved test scaffolding and stability.
  - E2E test workflow now runs pytest with verbose output and INFO logs.
- **Bug Fixes**
  - Access-tracking updates now commit transactions so access timestamps persist.
- **Chores**
  - General formatting, cleanup, and refactoring across modules and maintenance scripts.
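A minimal sketch of the scaffolding described above, assuming pytest-asyncio: one session-scoped event loop plus a mocked `acreate_structured_output`. The fixture names and the patch target are illustrative assumptions, not the PR's actual test code.

```python
import asyncio
from unittest.mock import AsyncMock, patch

import pytest


@pytest.fixture(scope="session")
def event_loop():
    # One event loop shared by every async test in the session.
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()


@pytest.fixture
def mocked_structured_output():
    # Replace LLM answer generation with a canned response so search-related
    # assertions are deterministic and need no API key. The patch target is
    # a hypothetical import path.
    with patch(
        "cognee.infrastructure.llm.acreate_structured_output",
        new=AsyncMock(return_value={"answer": "mocked"}),
    ) as mock:
        yield mock


@pytest.mark.asyncio
async def test_permission_flow(mocked_structured_output):
    # The real tests assert on permission behavior; this placeholder only
    # shows where those asserts would live.
    assert mocked_structured_output.call_count == 0
```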
## Description
This PR covers the higher-level search.py logic with unit tests. As part of the implementation we fully cover the following core logic:
- search.py
- get_search_type_tools (with all the core search types)
- search
- the prepare_search_results contract (testing behavior through the search.py interface)

## Type of Change
- [x] New feature (non-breaking change that adds functionality)

## Pre-submission Checklist
All items checked.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **Tests**
  - Added comprehensive unit test coverage for search functionality, including search type tool selection, search operations, and result preparation workflows across multiple scenarios and edge cases.
## Type of Change
- [x] Code refactoring

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **Documentation**
  - Deprecated legacy examples and added a migration guide mapping old paths to new locations.
  - Added a comprehensive new-examples README detailing configurations, pipelines, demos, and migration notes.
- **New Features**
  - Added many runnable examples and demos: database configs, embedding/LLM setups, permissions and access-control, custom pipelines (organizational, product recommendation, code analysis, procurement), multimedia, visualization, temporal/ontology demos, and a local UI starter.
- **Chores**
  - Updated CI/test entrypoints to use the new-examples layout.

Co-authored-by: lxobr <[email protected]>
## Description
- `map_vector_distances_to_graph_nodes` and `map_vector_distances_to_graph_edges` accept both single-query (flat list) and multi-query (nested list) inputs.
- `query_list_length` controls the mode: omit it for single-query behavior, or provide it to enable multi-query mode with strict length validation and per-query results.
- `vector_distance` on `Node` and `Edge` is now a list (one distance per query). Constructors set it to `None`, and `reset_distances` initializes it at the start of each search.
- `Node.update_distance_for_query` and `Edge.update_distance_for_query` are the only methods that write to `vector_distance`. They ensure the list has enough elements and keep unmatched queries at the penalty value.
- `triplet_distance_penalty` is the default distance value used everywhere: unmatched nodes/edges and missing scores all use this same penalty for consistency.
- `edges_by_distance_key` is an index mapping edge labels to matching edges. This lets us update all edges with the same label at once instead of scanning the full edge list repeatedly.
- `calculate_top_triplet_importances` returns `List[Edge]` for single-query mode and `List[List[Edge]]` for multi-query mode.

A sketch of the per-query update flow follows this list.

## Type of Change
- [x] New feature (non-breaking change that adds functionality)
- [x] Performance improvement

## Pre-submission Checklist
All items checked except documentation.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **New Features**
  - Multi-query support for mapping/scoring node and edge distances and a configurable triplet distance penalty.
  - Distance-keyed edge indexing for more accurate distance-to-edge matching.
- **Refactor**
  - Vector distance metadata changed from scalars to per-query lists; added reset/normalization and per-query update flows.
  - Node/edge distance initialization now supports deferred/listed distances.
- **Tests**
  - Updated and expanded tests for multi-query flows, list-based distances, edge-key handling, and related error cases.
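A minimal sketch of the per-query distance bookkeeping described above. The class and field names follow the PR description; the exact signatures in the codebase may differ, and the penalty value shown is an assumption.

```python
from typing import List, Optional

TRIPLET_DISTANCE_PENALTY = 1.0  # assumed default penalty value


class Node:
    def __init__(self) -> None:
        # Constructors leave distances unset; reset_distances() initializes them.
        self.vector_distance: Optional[List[float]] = None

    def reset_distances(self, query_list_length: int = 1) -> None:
        # One slot per query, all starting at the penalty value.
        self.vector_distance = [TRIPLET_DISTANCE_PENALTY] * query_list_length

    def update_distance_for_query(self, query_index: int, distance: float) -> None:
        # The only writer of vector_distance: grow the list if needed and
        # leave unmatched queries at the penalty value.
        if self.vector_distance is None:
            self.vector_distance = []
        while len(self.vector_distance) <= query_index:
            self.vector_distance.append(TRIPLET_DISTANCE_PENALTY)
        self.vector_distance[query_index] = distance


node = Node()
node.reset_distances(query_list_length=3)
node.update_distance_for_query(1, 0.12)
print(node.vector_distance)  # [1.0, 0.12, 1.0]
```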
…1949)

## Description
This PR adds support for structured outputs with llama.cpp using litellm and instructor. It returns a Pydantic instance. Based on the GitHub issue described [here](#1947). It features the following:
- works for both local and server modes (OpenAI API compatible)
- defaults to `JSON` mode (**not JSON schema mode, which is too rigid**)
- uses the existing patterns around logging and the tenacity decorator, consistent with other adapters
- respects max_completion_tokens / max_tokens

## Acceptance Criteria
I used the script below to test it with the [Phi-3-mini-4k-instruct model](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf). It tests a basic structured data extraction and a more complex one locally, then verifies that data extraction works in server mode. There are instructions in the script on how to set up the models.

If you are testing this on a Mac, run `brew install llama.cpp` to get llama.cpp working locally. If you don't have an Apple silicon chip, you will need to alter the script or the configs to run this on GPU.

```python
"""
Comprehensive test script for LlamaCppAPIAdapter - Tests LOCAL and SERVER modes

SETUP INSTRUCTIONS:
===================
1. Download a small model (pick ONE):

   # Phi-3-mini (2.3GB, recommended - best balance)
   wget https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf

   # OR TinyLlama (1.1GB, smallest but lower quality)
   wget https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/resolve/main/tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf

2. For SERVER mode tests, start a server:
   python -m llama_cpp.server --model ./Phi-3-mini-4k-instruct-q4.gguf --port 8080 --n_gpu_layers -1
"""

import asyncio
import os

from pydantic import BaseModel

from cognee.infrastructure.llm.structured_output_framework.litellm_instructor.llm.llama_cpp.adapter import (
    LlamaCppAPIAdapter,
)


class Person(BaseModel):
    """Simple test model for person extraction"""

    name: str
    age: int


class EntityExtraction(BaseModel):
    """Test model for entity extraction"""

    entities: list[str]
    summary: str


# Configuration - UPDATE THESE PATHS
MODEL_PATHS = [
    "./Phi-3-mini-4k-instruct-q4.gguf",
    "./tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf",
]


def find_model() -> str:
    """Find the first available model file"""
    for path in MODEL_PATHS:
        if os.path.exists(path):
            return path
    return None


async def test_local_mode():
    """Test LOCAL mode (in-process, no server needed)"""
    print("=" * 70)
    print("Test 1: LOCAL MODE (In-Process)")
    print("=" * 70)

    model_path = find_model()
    if not model_path:
        print("❌ No model found! Download a model first:")
        print()
        return False

    print(f"Using model: {model_path}")

    try:
        adapter = LlamaCppAPIAdapter(
            name="LlamaCpp-Local",
            model_path=model_path,  # Local mode parameter
            max_completion_tokens=4096,
            n_ctx=2048,
            n_gpu_layers=-1,  # 0 for CPU, -1 for all GPU layers
        )
        print(f"✓ Adapter initialized in {adapter.mode_type.upper()} mode")
        print("  Sending request...")

        result = await adapter.acreate_structured_output(
            text_input="John Smith is 30 years old",
            system_prompt="Extract the person's name and age.",
            response_model=Person,
        )

        print("✅ Success!")
        print(f"  Name: {result.name}")
        print(f"  Age: {result.age}")
        print()
        return True
    except ImportError as e:
        print(f"❌ ImportError: {e}")
        print("  Install llama-cpp-python: pip install llama-cpp-python")
        print()
        return False
    except Exception as e:
        print(f"❌ Failed: {e}")
        print()
        return False


async def test_server_mode():
    """Test SERVER mode (localhost HTTP endpoint)"""
    print("=" * 70)
    print("Test 3: SERVER MODE (Localhost HTTP)")
    print("=" * 70)

    try:
        adapter = LlamaCppAPIAdapter(
            name="LlamaCpp-Server",
            endpoint="http://localhost:8080/v1",  # Server mode parameter
            api_key="dummy",
            model="Phi-3-mini-4k-instruct-q4.gguf",
            max_completion_tokens=1024,
            chat_format="phi-3",
        )
        print(f"✓ Adapter initialized in {adapter.mode_type.upper()} mode")
        print(f"  Endpoint: {adapter.endpoint}")
        print("  Sending request...")

        result = await adapter.acreate_structured_output(
            text_input="Sarah Johnson is 25 years old",
            system_prompt="Extract the person's name and age.",
            response_model=Person,
        )

        print("✅ Success!")
        print(f"  Name: {result.name}")
        print(f"  Age: {result.age}")
        print()
        return True
    except Exception as e:
        print(f"❌ Failed: {e}")
        print("  Make sure llama-cpp-python server is running on port 8080:")
        print("  python -m llama_cpp.server --model your-model.gguf --port 8080")
        print()
        return False


async def test_entity_extraction_local():
    """Test more complex extraction with local mode"""
    print("=" * 70)
    print("Test 2: Complex Entity Extraction (Local Mode)")
    print("=" * 70)

    model_path = find_model()
    if not model_path:
        print("❌ No model found!")
        print()
        return False

    try:
        adapter = LlamaCppAPIAdapter(
            name="LlamaCpp-Local",
            model_path=model_path,
            max_completion_tokens=1024,
            n_ctx=2048,
            n_gpu_layers=-1,
        )
        print("✓ Adapter initialized")
        print("  Sending complex extraction request...")

        result = await adapter.acreate_structured_output(
            text_input="Natural language processing (NLP) is a subfield of artificial intelligence (AI) and computer science.",
            system_prompt="Extract all technical entities mentioned and provide a brief summary.",
            response_model=EntityExtraction,
        )

        print("✅ Success!")
        print(f"  Entities: {', '.join(result.entities)}")
        print(f"  Summary: {result.summary}")
        print()
        return True
    except Exception as e:
        print(f"❌ Failed: {e}")
        print()
        return False


async def main():
    """Run all tests"""
    print("\n" + "🦙" * 35)
    print("Llama CPP Adapter - Comprehensive Test Suite")
    print("Testing LOCAL and SERVER modes")
    print("🦙" * 35 + "\n")

    results = {}

    # Test 1: Local mode (no server needed)
    print("=" * 70)
    print("PHASE 1: Testing LOCAL mode (in-process)")
    print("=" * 70)
    print()
    results["local_basic"] = await test_local_mode()
    results["local_complex"] = await test_entity_extraction_local()

    # Test 2: Server mode (requires server on 8080)
    print("\n" + "=" * 70)
    print("PHASE 2: Testing SERVER mode (requires server running)")
    print("=" * 70)
    print()
    results["server"] = await test_server_mode()

    # Summary
    print("\n" + "=" * 70)
    print("TEST SUMMARY")
    print("=" * 70)
    for test_name, passed in results.items():
        status = "✅ PASSED" if passed else "❌ FAILED"
        print(f"  {test_name:20s}: {status}")

    passed_count = sum(results.values())
    total_count = len(results)
    print()
    print(f"Total: {passed_count}/{total_count} tests passed")

    if passed_count == total_count:
        print("\n🎉 All tests passed! The adapter is working correctly.")
    elif results.get("local_basic"):
        print("\n✓ Local mode works! Server/cloud tests need llama-cpp-python server running.")
    else:
        print("\n⚠️ Please check setup instructions at the top of this file.")


if __name__ == "__main__":
    asyncio.run(main())
```

**The following screenshots show the tests passing**

<img width="622" height="149" alt="image" src="https://github.com/user-attachments/assets/9df02f66-39a9-488a-96a6-dc79b47e3001" />

Test 1

<img width="939" height="750" alt="image" src="https://github.com/user-attachments/assets/87759189-8fd2-450f-af7f-0364101a5690" />

Test 2

<img width="938" height="746" alt="image" src="https://github.com/user-attachments/assets/61e423c0-3d41-4fde-acaf-ae77c3463d66" />

Test 3

<img width="944" height="232" alt="image" src="https://github.com/user-attachments/assets/f7302777-2004-447c-a2fe-b12762241ba9" />

**Note:** I also tried to test it with the `TinyLlama-1.1B-Chat` model, but such a small model is bad at producing structured JSON consistently.

## Type of Change
- [x] New feature (non-breaking change that adds functionality)

## Screenshots/Videos (if applicable)
See above.

## Pre-submission Checklist
All items checked.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **New Features**
  - Llama CPP integration supporting local (in-process) and server (OpenAI-compatible) modes.
  - Selectable provider with configurable model path, context size, GPU layers, and chat format.
  - Asynchronous structured-output generation with rate limiting, retries/backoff, and debug logging.
- **Chores**
  - Added llama-cpp-python dependency and bumped project version.
- **Documentation**
  - CONTRIBUTING updated with a "Running Simple Example" walkthrough for local/server usage.
## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **New Features**
  - Two interactive tutorial notebooks added (Cognee Basics, Python Development) with runnable code and rich markdown; MarkdownPreview for rendered markdown; instance-aware notebook support and cloud proxy with API key handling; notebook CRUD (create, save, run, delete).
- **Bug Fixes**
  - Improved authentication handling to treat 401/403 consistently.
- **Improvements**
  - Auto-expanding text areas; better error propagation from dataset operations; migration to allow toggling deletability for legacy tutorial notebooks.
- **Tests**
  - Expanded tests for tutorial creation and loading.
feat(auth): make JWT token expiration configurable via environment variable
- Add JWT_LIFETIME_SECONDS environment variable to configure token expiration
- Set default expiration to 3600 seconds (1 hour) for both API and client auth backends
- Remove hardcoded expiration values in favor of environment-based configuration
- Add documentation comments explaining the JWT strategy configuration

feat(auth): make cookie domain configurable via environment variable
- Add AUTH_TOKEN_COOKIE_DOMAIN environment variable to configure cookie domain
- When not set or empty, cookie domain defaults to None, allowing cross-domain usage
- Add documentation explaining cookie expiration is handled by JWT strategy
- Update default_transport to use environment-based cookie domain

feat(docker): add CORS_ALLOWED_ORIGINS environment variable
- Add CORS_ALLOWED_ORIGINS environment variable with default value of '*'
- Configure frontend to use NEXT_PUBLIC_BACKEND_API_URL environment variable
- Set default backend API URL to http://localhost:8000

feat(docker): add restart policy to all services
- Add restart: always policy to cognee, frontend, neo4j, chromadb, and postgres services
- This ensures services automatically restart on failure or system reboot
- Improves container reliability and uptime
refactor(auth): remove redundant comments from JWT strategy configuration
Remove duplicate comments that were explaining the JWT lifetime configuration in both API and client authentication backends. The code remains functionally unchanged, but comments are cleaned up for better maintainability.
fix(auth): add error handling for JWT lifetime configuration
- Add try-catch block to handle invalid JWT_LIFETIME_SECONDS environment variable
- Default to 360 seconds when the environment variable is not a valid integer
- Apply the same fix to both API and client authentication backends (a sketch of this parsing follows below)

docs(docker): add security warning for CORS configuration
- Add comment warning about the default CORS_ALLOWED_ORIGINS setting
- Emphasize the need to override the wildcard with specific domains in production
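A minimal sketch of the env-driven lifetime parsing these commits describe, assuming a plain `os.getenv` read; the helper name and the fallback constant are illustrative assumptions, not the repository's actual code.

```python
import os

DEFAULT_JWT_LIFETIME_SECONDS = 3600  # 1 hour, per the commit above


def get_jwt_lifetime_seconds() -> int:
    raw = os.getenv("JWT_LIFETIME_SECONDS", "")
    try:
        return int(raw)
    except ValueError:
        # Invalid or empty values fall back to the default instead of crashing.
        return DEFAULT_JWT_LIFETIME_SECONDS


lifetime = get_jwt_lifetime_seconds()
print(f"JWT tokens will expire after {lifetime} seconds")
```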
## Description
This PR addresses a runtime error where the application fails because ChromaDB is not installed. The error message `"ChromaDB is not installed. Please install it with 'pip install chromadb'"` occurs when attempting to use features that depend on ChromaDB.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **Chores**
  - Updated dependency management to include chromadb in the build configuration.
fix(embeddings): handle empty API key in LiteLLMEmbeddingEngine
- Add conditional check for empty API key to prevent authentication errors
- Set default API key to "EMPTY" when no valid key is provided
- This ensures proper fallback behavior when the API key is not configured

## Description
This PR fixes an issue where the `LiteLLMEmbeddingEngine` throws an authentication error when the `EMBEDDING_API_KEY` environment variable is empty or not set. The error message indicated `"api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"`.

Log error:

```
2025-12-23T11:36:58.220908 [error ] Error embedding text: litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable [LiteLLMEmbeddingEngine]
```

**Root Cause**: When initializing the embedding engine, if the `api_key` parameter is an empty string, the underlying LiteLLM client doesn't treat it as "no key provided" but instead uses the empty string to make API requests, triggering an authentication failure.

**Solution**: Added a conditional check in the code that creates the `LiteLLMEmbeddingEngine` instance. If the `EMBEDDING_API_KEY` read from configuration is empty (`None` or empty string), we explicitly set the `api_key` parameter passed to the engine constructor to a non-empty placeholder string `"EMPTY"`. This aligns with LiteLLM's handling of optional authentication and prevents exceptions in scenarios where keys are not required or need to be obtained from other sources. (A sketch of this check follows below.)

**How to Reproduce**: Configure the application with the following settings (as shown in the error log):

```
EMBEDDING_PROVIDER="custom"
EMBEDDING_MODEL="openai/Qwen/Qwen3-Embedding-xxx"
EMBEDDING_ENDPOINT="xxxxx"
EMBEDDING_API_VERSION=""
EMBEDDING_DIMENSIONS=1024
EMBEDDING_MAX_TOKENS=16384
EMBEDDING_BATCH_SIZE=10
# If embedding key is not provided, the same key set for LLM_API_KEY will be used
EMBEDDING_API_KEY=""
```

## Type of Change
- [x] Bug fix (non-breaking change that fixes an issue)

## Pre-submission Checklist
Checked: tested thoroughly, minimal changes, and searched existing PRs.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **Bug Fixes**
  - Improved API key validation for the embedding service to properly handle blank or missing API keys, ensuring more reliable embedding generation and preventing potential service errors.
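A minimal sketch of the empty-API-key guard described above. The helper function is a simplified stand-in for cognee's actual factory code, introduced here only for illustration.

```python
from typing import Optional


def resolve_embedding_api_key(configured_key: Optional[str]) -> str:
    # LiteLLM treats an empty string as a real (invalid) key, so substitute
    # a non-empty placeholder when no key is configured.
    if not configured_key:  # handles both None and ""
        return "EMPTY"
    return configured_key


print(resolve_embedding_api_key(""))      # -> EMPTY
print(resolve_embedding_api_key(None))    # -> EMPTY
print(resolve_embedding_api_key("sk-x"))  # -> sk-x
```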
…vice restart policies (#1956)

## Description
This PR introduces several configuration improvements to enhance the application's flexibility and reliability. The changes make JWT token expiration and cookie domain configurable via environment variables, improve CORS configuration, and add container restart policies for better uptime.

**JWT Token Expiration Configuration:**
- Added `JWT_LIFETIME_SECONDS` environment variable to configure JWT token expiration time
- Set default expiration to 3600 seconds (1 hour) for both API and client authentication backends
- Removed hardcoded expiration values in favor of environment-based configuration
- Added documentation comments explaining the JWT strategy configuration

**Cookie Domain Configuration:**
- Added `AUTH_TOKEN_COOKIE_DOMAIN` environment variable to configure the cookie domain
- When not set or empty, the cookie domain defaults to `None`, allowing cross-domain usage
- Added documentation explaining that cookie expiration is handled by the JWT strategy
- Updated default_transport to use the environment-based cookie domain

**CORS Configuration Enhancement:**
- Added `CORS_ALLOWED_ORIGINS` environment variable with a default value of `'*'`
- Configured the frontend to use the `NEXT_PUBLIC_BACKEND_API_URL` environment variable
- Set the default backend API URL to `http://localhost:8000`

**Docker Service Reliability:**
- Added a `restart: always` policy to all services (cognee, frontend, neo4j, chromadb, and postgres)
- This ensures services automatically restart on failure or system reboot
- Improves container reliability and uptime in production and development environments

(A sketch of the cookie-domain and CORS handling appears after this summary.)

## Type of Change
- [x] Bug fix (non-breaking change that fixes an issue)
- [x] New feature (non-breaking change that adds functionality)

## Pre-submission Checklist
Checked: tested thoroughly and minimal changes.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.

## Summary by CodeRabbit
- **New Features**
  - Services now automatically restart on failure for improved reliability.
- **Configuration**
  - Cookie domain for authentication is now configurable via environment variable, defaulting to None if not set.
  - JWT token lifetime is now configurable via environment variable, with a 3600-second default.
  - CORS allowed origins are now configurable, with a default of all origins (*).
  - The frontend backend API URL is now configurable, defaulting to http://localhost:8000.
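A small sketch of the env-driven cookie-domain and CORS-origin handling this PR describes. Only the variable names and defaults come from the PR; the surrounding code is an illustrative assumption, not the project's actual wiring.

```python
import os

# Empty or unset AUTH_TOKEN_COOKIE_DOMAIN becomes None, which lets the
# browser scope the cookie to the current host (cross-domain friendly).
cookie_domain = os.getenv("AUTH_TOKEN_COOKIE_DOMAIN") or None

# Comma-separated origins, defaulting to the wildcard. Override '*' with
# specific domains in production, as the PR's security warning advises.
cors_allowed_origins = os.getenv("CORS_ALLOWED_ORIGINS", "*").split(",")

print(cookie_domain)         # None unless the env var is set
print(cors_allowed_origins)  # ['*'] by default
```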
Please make sure all the checkboxes are checked:
Caution: Review failed. The pull request is closed.

Walkthrough
Comprehensive refactoring introducing database connection argument support, notebook tutorial system redesign, data labeling and access tracking, frontend cloud/local fetch strategies, LLM adapter unification, multi-query graph distance handling, and extensive test suite migration to pytest patterns. Includes new cleanup utilities, example configurations, and deployment configuration updates.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
📜 Recent review details: Configuration used: Path: .coderabbit.yaml | Review profile: CHILL | Plan: Pro
⛔ Files ignored due to path filters (8)
📒 Files selected for processing (195)
```yaml
    needs: release-pypi-package
    if: ${{ inputs.flavour == 'main' }}
    runs-on: ubuntu-22.04
    steps:
      - name: Trigger docs tests
        run: |
          curl -L -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ secrets.REPO_DISPATCH_PAT_TOKEN }}" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            https://api.github.com/repos/topoteretes/cognee-docs/dispatches \
            -d '{"event_type":"new-main-release","client_payload":{"caller_repo":"'"${GITHUB_REPOSITORY}"'"}}'

  trigger-community-test-suite:
```
Check warning (Code scanning / CodeQL): Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 4 days ago)
To fix this, explicitly limit the GITHUB_TOKEN permissions so the job does not inherit broad repository defaults. The minimal, safe change is to add a permissions block. Since only some jobs already define permissions (release-pypi-package, release-docker-image), and the flagged job trigger-docs-test-suite (and also trigger-community-test-suite) do not need any GitHub API access via GITHUB_TOKEN, we can:

- Add a workflow-level `permissions: contents: read` so the default for all jobs is read-only.
- Optionally, if you want to be extra strict, set `permissions: {}` on the two trigger jobs, but that's not required if they don't use GITHUB_TOKEN at all.

The single best fix with minimal functional impact is adding a root-level permissions block right under `name: release.yml`. This ensures:

- Jobs without their own `permissions` (including the one at line 141) use read-only permissions.
- Existing job-specific `permissions` blocks stay unchanged and continue to override the default.

No additional methods, imports, or definitions are needed; only the YAML header must be updated.
```diff
@@ -1,4 +1,6 @@
 name: release.yml
+permissions:
+  contents: read
 on:
   workflow_dispatch:
     inputs:
```
```yaml
    needs: release-pypi-package
    if: ${{ inputs.flavour == 'main' }}
    runs-on: ubuntu-22.04
    steps:
      - name: Trigger community tests
        run: |
          curl -L -X POST \
            -H "Accept: application/vnd.github+json" \
            -H "Authorization: Bearer ${{ secrets.REPO_DISPATCH_PAT_TOKEN }}" \
            -H "X-GitHub-Api-Version: 2022-11-28" \
            https://api.github.com/repos/topoteretes/cognee-community/dispatches \
            -d '{"event_type":"new-main-release","client_payload":{"caller_repo":"'"${GITHUB_REPOSITORY}"'"}}'
```
Check warning (Code scanning / CodeQL): Workflow does not contain permissions (Medium)

Copilot Autofix (AI, 4 days ago)
In general, the fix is to explicitly define a permissions block for each job that currently relies on default GITHUB_TOKEN permissions, granting only the minimal scope needed. For the two jobs that just run curl with a separate PAT, they do not need write permissions from GITHUB_TOKEN and can safely run with read-only or even no explicit content write scopes.
Concretely, in .github/workflows/release.yml, add a permissions block to both trigger-docs-test-suite and trigger-community-test-suite jobs. They do not appear to perform any repository modifications; they only send repository dispatch events to other repositories using secrets.REPO_DISPATCH_PAT_TOKEN. Thus, setting permissions: contents: read (or even permissions: {}) is sufficient to satisfy CodeQL’s requirement to restrict GITHUB_TOKEN. To keep consistency with the rest of the workflow (other jobs use contents: read or contents: write), we can set contents: read on these jobs. No additional imports, methods, or definitions are needed—only YAML changes within this workflow file.
```diff
@@ -141,6 +141,8 @@
     needs: release-pypi-package
     if: ${{ inputs.flavour == 'main' }}
     runs-on: ubuntu-22.04
+    permissions:
+      contents: read
     steps:
       - name: Trigger docs tests
         run: |
@@ -155,6 +157,8 @@
     needs: release-pypi-package
     if: ${{ inputs.flavour == 'main' }}
     runs-on: ubuntu-22.04
+    permissions:
+      contents: read
     steps:
       - name: Trigger community tests
         run: |
```
| GitGuardian id | GitGuardian status | Secret | Commit | Filename |
|---|---|---|---|---|
| 17116131 | Triggered | Generic Password | 5f8a3e2 | new-examples/configurations/database_examples/neo4j_graph_database_configuration.py |
🛠 Guidelines to remediate hardcoded secrets
- Understand the implications of revoking this secret by investigating where it is used in your code.
- Replace and store your secret safely; learn the best practices here (a sketch of an environment-based replacement follows below).
- Revoke and rotate this secret.
- If possible, rewrite git history. Rewriting git history is not a trivial act. You might completely break other contributing developers' workflow and you risk accidentally deleting legitimate data.
To avoid such incidents in the future, consider:
- following these best practices for managing and storing secrets, including API keys and other credentials
- installing secret detection on pre-commit to catch secrets before they leave your machine and ease remediation
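For the flagged Neo4j example file, remediation typically means replacing the hardcoded password with an environment lookup. A sketch along those lines is shown below; the environment variable names are illustrative assumptions, not necessarily the ones the example file uses.

```python
import os

# Read connection details from the environment instead of hardcoding them.
NEO4J_URI = os.getenv("GRAPH_DATABASE_URL", "bolt://localhost:7687")
NEO4J_USER = os.getenv("GRAPH_DATABASE_USERNAME", "neo4j")

# Fail fast with a clear message rather than shipping a default password.
NEO4J_PASSWORD = os.getenv("GRAPH_DATABASE_PASSWORD")
if not NEO4J_PASSWORD:
    raise RuntimeError("Set GRAPH_DATABASE_PASSWORD instead of hardcoding it")

print(f"Connecting to {NEO4J_URI} as {NEO4J_USER}")
```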
🦉 GitGuardian detects secrets in your source code to help developers and security teams secure the modern development process. You are seeing this because you or someone else with access to this repository has authorized GitGuardian to scan your pull request.
This PR is being reviewed by Cursor Bugbot
```python
# revision identifiers, used by Alembic.
revision: str = "a1b2c3d4e5f6"
down_revision: Union[str, None] = "46a6ce2bd2b2"
```
Alembic migration branch creates multiple heads
Both `a1b2c3d4e5f6_add_label_column_to_data.py` and `1a58b986e6e1_enable_delete_for_old_tutorial_notebooks.py` have `down_revision = "46a6ce2bd2b2"`, creating a branched migration history. This causes Alembic to detect multiple heads, which will fail migrations with a "Multiple heads detected" error. One migration needs its `down_revision` updated to point to the other migration to create a linear chain (see the sketch below).
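One way to restore a linear chain, shown as a sketch: re-parent `a1b2c3d4e5f6` onto `1a58b986e6e1`. Which of the two migrations comes first is an arbitrary choice made here for illustration.

```python
# In a1b2c3d4e5f6_add_label_column_to_data.py: point down_revision at the
# other migration instead of the shared parent 46a6ce2bd2b2.
from typing import Union

revision: str = "a1b2c3d4e5f6"
down_revision: Union[str, None] = "1a58b986e6e1"  # was "46a6ce2bd2b2"

# 1a58b986e6e1_enable_delete_for_old_tutorial_notebooks.py keeps
# down_revision = "46a6ce2bd2b2", so the chain is now:
# 46a6ce2bd2b2 -> 1a58b986e6e1 -> a1b2c3d4e5f6
```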
## Description
Security issue reported by the user.

## DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.
Note
- … (`cloudFetch`, `localFetch`) and pass instance into `useNotebooks`; update dashboard and datasets to support cloud/local.
- `react-markdown` (`MarkdownPreview`); refactor `Notebook` UI with memoized cells and preview/edit toggle.
- `TextArea` for better performance; improve `handleServerErrors` to treat 401/403 and optional retry.
- `next` 16.1, `react` 19.2.3, `@auth0/nextjs-auth0` 4.14; add `react-markdown` deps.
- `data.label` (nullable) and `data.last_accessed` (TZ DateTime; optional backfill via `ENABLE_LAST_ACCESSED`).
- `pytest -v`; add new tests: custom data label and S3 permissions example with Postgres service.
- `search_db_tests.yml`: add Python version matrix (3.10–3.13) across providers.
- `repository_dispatch` on main releases.
- `chromadb` extra; `.env.template` supports `DATABASE_CONNECT_ARGS`.
- `cognee-mcp` test client now asserts zero failures instead of warning.

Written by Cursor Bugbot for commit 34c6652. This will update automatically on new commits.
Summary by CodeRabbit
New Features
Improvements