fix: Fix cognee graphistry and llm configuration through code #174
Conversation
Fixed an issue where graphistry and llm configuration options could not be set through code. Updated README.md. Fix #COG-472
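In practice, the fix enables configuration calls like the following (a minimal sketch based on the setters discussed in this review; credential values are placeholders):

```python
import cognee

# Configure Graphistry credentials through code rather than environment variables;
# set_graphistry_config validates that both "username" and "password" are present
cognee.config.set_graphistry_config({
    "username": "your_graphistry_username",  # placeholder
    "password": "your_graphistry_password",  # placeholder
})

# Configure the LLM API key through code
cognee.config.set_llm_api_key("your_api_key")  # placeholder
```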
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (2)
cognee/api/v1/config/config.py (2)
74-76: Add docstring and validation for API key. Consider adding documentation and validation for the API key:
```diff
 @staticmethod
 def set_llm_api_key(llm_api_key: str):
+    """Set the API key for the LLM provider.
+
+    Args:
+        llm_api_key (str): The API key for authentication with the LLM provider.
+
+    Raises:
+        ValueError: If the API key is empty or invalid.
+    """
+    if not llm_api_key or not isinstance(llm_api_key, str):
+        raise ValueError("LLM API key must be a non-empty string")
     llm_config = get_llm_config()
     llm_config.llm_api_key = llm_api_key
```
158-159: Remove redundant dict.get() calls. Since the keys are already validated by the check on line 157, using dict.get() is unnecessary. Direct dictionary access would be clearer:
```diff
-base_config.graphistry_username = graphistry_config.get("username")
-base_config.graphistry_password = graphistry_config.get("password")
+base_config.graphistry_username = graphistry_config["username"]
+base_config.graphistry_password = graphistry_config["password"]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (2)
README.md (1 hunks)
cognee/api/v1/config/config.py (3 hunks)
🔇 Additional comments (2)
cognee/api/v1/config/config.py (1)
8-8: LGTM: Import follows existing pattern.
The addition of get_llm_config import aligns with the module's pattern of importing configuration getters.
README.md (1)
53-53: LGTM! The new configuration method improves encapsulation.
The change from direct assignment to using set_llm_api_key() is a good improvement as it:
- Provides better encapsulation of configuration settings
- Maintains consistency with other configuration methods (like set_graphistry_config)
- Makes the API more intuitive and maintainable
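As a concrete illustration of the README change (the key value is a placeholder):

```python
# Before: direct attribute assignment on the config object
cognee.config.llm_api_key = "your_api_key"

# After: the dedicated setter
cognee.config.set_llm_api_key("your_api_key")
```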
Let's verify the consistency of configuration methods in the codebase:
```python
def set_llm_config(config_dict: dict):
    """
    Updates the llm config with values from config_dict.
    """
    llm_config = get_llm_config()
    for key, value in config_dict.items():
        if hasattr(llm_config, key):
            object.__setattr__(llm_config, key, value)
        else:
            raise AttributeError(f"'{key}' is not a valid attribute of the config.")
```
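For reference, a caller would use this bulk setter roughly as follows (a usage sketch; the value strings are taken from the docs examples in this review):

```python
import cognee

# Valid keys are applied to the matching attributes on the LLM config
cognee.config.set_llm_config({
    "llm_provider": "ollama",
    "llm_model": "mistral:instruct",
    "llm_endpoint": "http://localhost:11434/v1",
})

# Unknown keys raise AttributeError, which surfaces typos early
try:
    cognee.config.set_llm_config({"llm_modle": "mistral:instruct"})  # deliberate typo
except AttributeError as error:
    print(error)  # "'llm_modle' is not a valid attribute of the config."
```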
🛠️ Refactor suggestion
Consider reusing set_llm_config for individual setters.
The bulk update logic in set_llm_config could be reused by individual setters to maintain consistency in validation and error handling.
Example refactor for set_llm_api_key:
```diff
 @staticmethod
 def set_llm_api_key(llm_api_key: str):
-    llm_config = get_llm_config()
-    llm_config.llm_api_key = llm_api_key
+    config.set_llm_config({"llm_api_key": llm_api_key})
```

Committable suggestion was skipped due to low confidence.
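Applied to the other individual setters, the reviewer's suggestion would look roughly like this (a sketch of the proposed refactor, not committed code):

```python
@staticmethod
def set_llm_provider(llm_provider: str):
    # Route through the bulk setter so validation lives in one place
    config.set_llm_config({"llm_provider": llm_provider})

@staticmethod
def set_llm_endpoint(llm_endpoint: str):
    config.set_llm_config({"llm_endpoint": llm_endpoint})

@staticmethod
def set_llm_model(llm_model: str):
    config.set_llm_config({"llm_model": llm_model})
```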
```diff
 def set_llm_provider(llm_provider: str):
     llm_config = get_llm_config()
     llm_config.llm_provider = llm_provider

 @staticmethod
-def llm_endpoint(llm_endpoint: str):
-    graph_config = get_graph_config()
-    graph_config.llm_endpoint = llm_endpoint
+def set_llm_endpoint(llm_endpoint: str):
+    llm_config = get_llm_config()
+    llm_config.llm_endpoint = llm_endpoint

 @staticmethod
-def llm_model(llm_model: str):
-    graph_config = get_graph_config()
-    graph_config.llm_model = llm_model
+def set_llm_model(llm_model: str):
+    llm_config = get_llm_config()
+    llm_config.llm_model = llm_model
```
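For callers, the rename translates roughly as follows (a sketch assuming the methods are exposed via cognee.config, as in the docs examples reviewed below):

```python
# Before: old method names (these wrote to graph_config internally)
cognee.config.llm_endpoint("http://localhost:11434/v1")
cognee.config.llm_model("mistral:instruct")

# After: set_-prefixed methods that write to llm_config
cognee.config.set_llm_endpoint("http://localhost:11434/v1")
cognee.config.set_llm_model("mistral:instruct")
```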
💡 Codebase verification
Breaking change not properly documented, needs attention
The method renaming from llm_provider, llm_endpoint, and llm_model to their set_ prefixed versions is a breaking change that needs documentation. While the configuration is well documented in docs/configuration.md and docs/local_models.md, these docs still show the old method names in their examples.
Please update:
- docs/configuration.md: Update configuration examples to use the new set_ prefixed methods
- docs/local_models.md: Update configuration examples to use the new set_ prefixed methods
🔗 Analysis chain
LGTM: Method renaming improves clarity.
The renaming of methods to include 'set_' prefix better indicates their mutating nature and aligns with the class's naming conventions.
Let's verify if this breaking change is documented:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```bash
#!/bin/bash
# Search for documentation of the breaking changes
rg -i "breaking.?chang|upgrade.?guide|migration.?guide" README.md CHANGELOG.md docs/
```

Length of output: 139
Script:

```bash
#!/bin/bash
# Let's check if there are any documentation files in the repository
fd -e md -e rst -e txt
# Also check if there are any references to these methods in any file
rg -i "llm_(provider|endpoint|model)"
```

Length of output: 5406
Updated configuration docs to be up to date. Docs #COG-472
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (3)
docs/configuration.md (3)
34-34: Add explanatory comments for the LLM provider configuration. While the example shows how to set the LLM provider, it would be more helpful to include a brief comment explaining what this configuration does and what other provider options are available.
```diff
+# Set the LLM provider - available options: 'ollama', 'custom'
 cognee.config.set_llm_provider('ollama')
```
56-56: Enhance Ollama configuration documentation. The configuration examples would benefit from:
- A note about endpoint validation (e.g., required format, protocol)
- A list of tested/supported model names
- Warning about potential connectivity issues
```diff
+# Ensure the endpoint is accessible and includes the API version
 cognee.config.set_llm_endpoint("http://localhost:11434/v1")
+# Supported models: mistral:instruct, mixtral:latest, etc.
 cognee.config.set_llm_model("mistral:instruct")
```

Also applies to: 62-63
Line range hint 1-1: Consider adding essential documentation sections. The documentation would benefit from additional sections:
- Configuration Precedence: Clarify whether code-based configuration overrides environment variables.
- Configuration Validation: Document any validation rules and potential error messages.
- Troubleshooting Guide: Common configuration issues and their solutions.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (1)
docs/configuration.md (4 hunks)
🔇 Additional comments (2)
docs/configuration.md (2)
17-17: LGTM: Import statement follows existing patterns.
The new import for get_llm_config is consistent with the existing configuration getter imports.
21-21: LGTM: Print statement maintains consistency.
The addition of get_llm_config().to_dict() print statement maintains consistency with other configuration inspection examples.
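Put together, the inspection pattern referenced here looks roughly like this (a sketch; the exact import path for get_llm_config is an assumption, the actual module location may differ):

```python
# Assumed import path for illustration only
from cognee.infrastructure.llm.config import get_llm_config

# Print the current LLM configuration as a plain dictionary
print(get_llm_config().to_dict())
```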
````diff
 ```bash
-cognee.config.llm_provider = 'custom'
+cognee.config.set_llm_provider('custom')
````
Complete the custom provider configuration examples.
The documentation shows environment variables for endpoint, model, and API key, but lacks the corresponding code examples using the new method calls. Consider adding complete programmatic configuration examples:
```diff
 cognee.config.set_llm_provider('custom')
+# Configure custom provider settings
+cognee.config.set_llm_model("mistralai/Mixtral-8x7B-Instruct-v0.1")
+cognee.config.set_llm_endpoint("https://api.endpoints.anyscale.com/v1")
+cognee.config.set_llm_api_key("your_api_key")
```

Committable suggestion was skipped due to low confidence.