Feature: add graph creation prompt #686
Conversation
…prompt-hyperparameter-arxiv # Conflicts: # cognee/api/v1/cognify/cognify.py
…prompt-hyperparameter-arxiv
Please make sure all the checkboxes are checked:
Walkthrough

This pull request makes formatting and import improvements across several Python modules, notably introducing the new optional `graph_prompt_path` parameter.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant C as Client
    participant D as DeepEvalAdapter.evaluate_answers
    participant A as Answer Dict
    C->>D: Call evaluate_answers(answer)
    D->>A: Check for "golden_context" key
    alt golden_context exists
        D->>A: Assign retrieval_context = [answer["retrieval_context"]]
    else Golden context absent
        D->>A: Assign retrieval_context = None
    end
    D->>C: Return evaluated result
```

```mermaid
sequenceDiagram
    participant C as Client
    participant E as extract_content_graph Function
    participant L as LLMConfig (via get_llm_config)
    participant R as render_prompt
    C->>E: Call extract_content_graph(content)
    E->>L: Retrieve LLM configuration (graph_prompt_path)
    L-->>E: Return prompt path
    E->>R: Call render_prompt(prompt_path)
    R-->>E: Return rendered system prompt
    E->>C: Return extracted content graph
```
Actionable comments posted: 0
🔭 Outside diff range comments (1)
cognee/eval_framework/corpus_builder/corpus_builder_executor.py (1)
42-69: 💡 Verification agent 🧩 Analysis chain

Verify task_getter implementation is updated

Ensure that the implementation of the `task_getter` function (which this code calls) has been updated to handle the new `graph_prompt_path` parameter correctly.

🏁 Script executed:

```shell
#!/bin/bash
# Find the task_getter implementations that might need updating
echo "Searching for task_getter implementation and related functions..."
rg "def get_default_tasks" -A 5
rg "task_getter" -A 5 -B 5
```

Length of output: 12400
Action: Ensure All Task Getter Variants Properly Handle `graph_prompt_path`

It appears that `CorpusBuilderExecutor.build_corpus` now always calls the task getter with the new `graph_prompt_path` parameter:

```python
tasks = await self.task_getter(
    chunk_size=chunk_size, chunker=chunker, graph_prompt_path=graph_prompt_path
)
```

However, our search indicates that some task getter implementations, such as the default ones (e.g., in `get_default_tasks_by_indices.py` and `get_default_tasks` in `cognee/api/v1/cognify/cognify.py`), do not include `graph_prompt_path` in their signatures. Please verify that:

- If using the "CascadeGraph" task getter: its implementation (likely in `get_cascade_graph_tasks.py`) accepts and correctly applies the `graph_prompt_path` parameter.
- For default or other task getter types: either update their function signatures to accept the new parameter or explicitly ignore it (e.g., via `**kwargs`) so that passing `graph_prompt_path` does not lead to runtime errors.

Review the getter function selection in `cognee/eval_framework/corpus_builder/task_getters/TaskGetters.py` to ensure that every variant is compatible with this change.
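The `**kwargs` option suggested above can be sketched as follows. This is a minimal illustration of the pattern, not the repository's actual getter signatures; the function names mirror those mentioned in the review, and the returned task tuples are placeholders.

```python
# Hypothetical sketch: one getter consumes graph_prompt_path explicitly,
# while the default getter absorbs unknown keywords via **kwargs so the
# caller can pass graph_prompt_path uniformly without raising TypeError.
from typing import Optional


async def get_cascade_graph_tasks(
    chunk_size: int = 1024,
    chunker=None,
    graph_prompt_path: Optional[str] = None,
):
    # This variant applies the custom prompt path.
    return [("extract_graph", {"prompt": graph_prompt_path})]


async def get_default_tasks(chunk_size: int = 1024, chunker=None, **kwargs):
    # This variant silently ignores graph_prompt_path (and any other
    # extra keywords), keeping the call site uniform across variants.
    return [("extract_graph", {"prompt": None})]
```

With this shape, `CorpusBuilderExecutor` can pass `graph_prompt_path=...` to every variant unconditionally.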
🧹 Nitpick comments (4)
cognee/infrastructure/llm/prompts/answer_simple_question_benchmark2.txt (1)
1-8: Style and Consistency Improvement

The guidelines are well-defined, but the repeated use of "For" at the start of several bullet points (lines 3–6) may be perceived as repetitive. Consider omitting "For" to enhance readability and stylistic variety. For example, the lines could be revised as follows:

```diff
- - For yes/no questions: answer with "yes" or "no".
- - For what/who/where questions: reply with a single word or brief phrase.
- - For when questions: return only the relevant date/time.
- - For how/why questions: use the briefest phrase.
+ - Yes/no questions: answer with "yes" or "no".
+ - What/who/where questions: reply with a single word or brief phrase.
+ - When questions: return only the relevant date/time.
+ - How/why questions: use the briefest phrase.
```
🪛 LanguageTool
[style] ~5-~5: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...y with a single word or brief phrase. - For when questions: return only the relevan...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
[style] ~6-~6: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...: return only the relevant date/time. - For how/why questions: use the briefest phr...(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1)
12-13: Consider extracting the default prompt path as a constant.

The default prompt file is hardcoded. For better maintainability, consider extracting this value as a module-level constant.

```diff
+DEFAULT_GRAPH_PROMPT_PATH = "generate_graph_prompt.txt"
+
 async def extract_content_graph(
     content: str, response_model: Type[BaseModel], graph_prompt_path: Optional[str] = None
 ):
     llm_client = get_llm_client()
-    prompt_path = graph_prompt_path or "generate_graph_prompt.txt"
+    prompt_path = graph_prompt_path or DEFAULT_GRAPH_PROMPT_PATH
     system_prompt = render_prompt(prompt_path, {})
```

cognee/infrastructure/llm/prompts/generate_graph_prompt_oneshot.txt (1)
1-151: Well-structured knowledge graph extraction prompt with comprehensive guidelines.

This is a well-organized prompt with clear sections covering node guidelines, property formatting, relationship definitions, and output requirements. The one-shot examples for each section provide excellent guidance for the language model.

A minor grammar correction: on line 83, add a comma after the year in the date: "September 4, 1998, and has a market cap..."

```diff
-> **One-Shot Example**:
-> **Input**: "Google was founded on September 4, 1998 and has a market cap of 800000000000."
+> **One-Shot Example**:
+> **Input**: "Google was founded on September 4, 1998, and has a market cap of 800000000000."
```

🧰 Tools
🪛 LanguageTool
[grammar] ~30-~30: Did you mean the adjective “useful”?
Context: ...ived directly from the text. - Always use full, canonical names. - Do not use in...(THANK_FULL)
[style] ~83-~83: Some style guides suggest that commas should set off the year in a month-day-year date.
Context: ...**: "Google was founded on September 4, 1998 and has a market cap of 800000000000." ...(MISSING_COMMA_AFTER_YEAR)
[style] ~127-~127: ‘absolutely essential’ might be wordy. Consider a shorter alternative.
Context: ...y edges (e.g., "X is a concept") unless absolutely essential. ### 4.3 Inferred Facts - Rule: On...(EN_WORDINESS_PREMIUM_ABSOLUTELY_ESSENTIAL)
[uncategorized] ~130-~130: Possible missing comma found.
Context: ...pported by the text, or those logically implied if they enhance clarity. - Do not a...(AI_HYDRA_LEO_MISSING_COMMA)
cognee/infrastructure/llm/prompts/generate_graph_prompt_strict.txt (1)
1-88: Well-structured prompt for knowledge graph extraction

This prompt provides clear, structured instructions for extracting knowledge graphs from unstructured text. It effectively defines entity types, relationship handling, and important constraints for creating consistent graphs.
However, consider the following enhancements:
- Include more examples of relationship types beyond "acted_in" and "founded_by"
- Add guidelines for handling relative time expressions (e.g., "last year")
- Provide instructions for handling hierarchical relationships (e.g., "is_a", "subclass_of")
- Consider addressing uncertainty expressions or negations in text
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- cognee/api/v1/cognify/cognify.py (2 hunks)
- cognee/eval_framework/corpus_builder/corpus_builder_executor.py (1 hunks)
- cognee/eval_framework/corpus_builder/run_corpus_builder.py (3 hunks)
- cognee/eval_framework/evaluation/deep_eval_adapter.py (1 hunks)
- cognee/infrastructure/llm/prompts/answer_simple_question_benchmark2.txt (1 hunks)
- cognee/infrastructure/llm/prompts/answer_simple_question_benchmark3.txt (1 hunks)
- cognee/infrastructure/llm/prompts/answer_simple_question_benchmark4.txt (1 hunks)
- cognee/infrastructure/llm/prompts/generate_graph_prompt_guided.txt (1 hunks)
- cognee/infrastructure/llm/prompts/generate_graph_prompt_oneshot.txt (1 hunks)
- cognee/infrastructure/llm/prompts/generate_graph_prompt_simple.txt (1 hunks)
- cognee/infrastructure/llm/prompts/generate_graph_prompt_strict.txt (1 hunks)
- cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1 hunks)
- cognee/tasks/graph/extract_graph_from_data.py (2 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (1)
cognee/modules/chunking/TextChunker.py (1)
TextChunker(11-78)
⏰ Context from checks skipped due to timeout of 90000ms (35)
- GitHub Check: run_notebook_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: chromadb test
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: Test cognee server start
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_simple_example_test
- GitHub Check: lint (ubuntu-latest, 3.10.x)
- GitHub Check: docker-compose-test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (20)
cognee/infrastructure/llm/prompts/answer_simple_question_benchmark4.txt (1)
1-15: Clear and Comprehensive Guidelines

The prompt outlines concise, step-by-step instructions in a logical order. The categorization of response types (minimalism, question-specific responses, formatting, and context-only) is well-defined, enhancing the clarity for the QA system. One minor suggestion is to verify that the phrasing and tone here remain consistently aligned with the other benchmark files in the project.
cognee/infrastructure/llm/prompts/answer_simple_question_benchmark3.txt (1)
1-9: Consistent and Concise Prompt

The atomic response system instructions are clear, concise, and formatted in a way that mirrors the expectations of the project. The separation of different response types is intuitive, ensuring that the system can generate minimalistic answers as intended.
cognee/eval_framework/evaluation/deep_eval_adapter.py (1)
36-38: Improved retrieval context handling in evaluation

This change allows for more flexible evaluation by conditionally including retrieval context only when the golden context is available, which should prevent incorrect evaluations when no context is provided.
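The conditional described here (and in the first sequence diagram above) can be sketched as a small helper. This is an illustrative stand-in, assuming `answer` is a plain dict; `DeepEvalAdapter`'s actual method shape may differ.

```python
# Sketch of the conditional retrieval-context logic: only build a
# retrieval context when a golden context exists, so metrics that
# compare against a reference are not fed an unpaired context.
from typing import List, Optional


def build_retrieval_context(answer: dict) -> Optional[List[str]]:
    if "golden_context" in answer:
        # DeepEval-style metrics expect a list of context strings.
        return [answer["retrieval_context"]]
    return None
```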
cognee/infrastructure/llm/prompts/generate_graph_prompt_simple.txt (1)
1-28: Well-structured simple graph extraction prompt

This prompt provides clear and concise guidelines for knowledge graph construction with specific rules for node labeling, relationships, and data formats. The snake_case convention for edges and human-readable node IDs will promote consistency.
A few suggestions for potential improvements:
- Consider adding examples of correctly formatted graph outputs
- Include guidance on handling ambiguous entity references
cognee/infrastructure/llm/prompts/generate_graph_prompt_guided.txt (1)
1-78: Comprehensive guided approach to knowledge graph extraction

This detailed prompt provides excellent structure with clear sections covering all aspects of knowledge graph creation. The additional information about property formatting, relationship direction, and inferred facts will likely produce higher quality graphs compared to the simple prompt.
The examples provided for each guideline are particularly helpful, especially the visual representation of edge directionality in lines 61-62.
cognee/tasks/graph/extract_graph_from_data.py (4)
2-2: Updated imports to support optional typing

The addition of `Optional` to the imports is appropriate for the new parameter.

57-57: Added flexibility with optional graph prompt path parameter

This new parameter allows for customizing the graph extraction prompt, which integrates well with the newly added prompt files.

61-64: Updated extract_content_graph call to use custom prompt

The function call now correctly passes the `graph_prompt_path` parameter to `extract_content_graph`, allowing for customization of the extraction process.
53-58: Details

✅ Verification successful

Verify the parameter is propagated throughout the dependency chain

While the parameter is correctly added and passed to `extract_content_graph`, let's verify that all dependent functions handle this parameter correctly.

🏁 Script executed:

```shell
#!/bin/bash
# Check if extract_content_graph function accepts the graph_prompt_path parameter
rg -A 5 "def extract_content_graph" --include "*.py"
```

Length of output: 496

🏁 Script executed:

```shell
#!/bin/bash
# Verify that extract_content_graph function accepts the graph_prompt_path parameter
# by searching for its definition in python files using the correct glob flag.
rg -A 10 "def extract_content_graph" --glob '*.py'
```

Length of output: 1242

Parameter propagation verified in dependency chain

The `graph_prompt_path` parameter is correctly passed from `extract_graph_from_data` to the `extract_content_graph` function. In the file `cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py`, the parameter is defined as an optional argument and is used to establish the prompt path (defaulting to `"generate_graph_prompt.txt"` when not provided).

No further changes are needed.

🧰 Tools
🪛 Ruff (0.8.2)

56-56: Do not perform function call `OntologyResolver` in argument defaults; instead, perform the call within the function, or read the default from a module-level singleton variable

(B008)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (3)
3-3: Clean import addition for Optional type.

The addition of `Optional` from the typing module properly supports the new parameter in the function signature.

36-42: LGTM! Well-structured function signature update.

The addition of the optional `graph_prompt_path` parameter with proper typing and default value maintains backward compatibility while enabling new functionality.

61-61: Parameter correctly propagated to build_corpus.

The new parameter is properly passed to the downstream method, ensuring the graph prompt path is available throughout the corpus building process.
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (2)
1-1: Appropriate import addition for Optional type.

The import of `Optional` from typing is correctly added to support the new parameter's type annotation.

7-9: Function signature updated appropriately.

The `extract_content_graph` function signature is properly updated to include the optional `graph_prompt_path` parameter with correct typing.

cognee/api/v1/cognify/cognify.py (2)

114-114: Parameter addition is clean and consistent.

The addition of the optional `graph_prompt_path` parameter to the `get_default_tasks` function maintains the same pattern used for other optional parameters.

135-135: Correctly passes the parameter to the Task constructor.

The parameter is properly propagated to the `extract_graph_from_data` task, enabling custom graph prompts throughout the pipeline.

cognee/eval_framework/corpus_builder/corpus_builder_executor.py (4)
42-50: Parameter addition is consistent with design

The `graph_prompt_path` parameter has been correctly added with an appropriate `Optional[str]` type and default value of `None`, maintaining backward compatibility.

54-56: Parameter correctly forwarded to run_cognee

The `graph_prompt_path` parameter is properly passed from `build_corpus` to `run_cognee`, ensuring consistent parameter propagation.

59-61: Method signature updated correctly

The `run_cognee` method signature has been properly updated to include the new parameter with the same type and default value.

67-69: Parameter correctly forwarded to task_getter

The `graph_prompt_path` parameter is properly passed to the `task_getter` function, completing the parameter propagation chain.
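The propagation chain these comments verify (build_corpus → run_cognee → task_getter) can be illustrated with a reduced sketch. Class and method names follow the review's description; the bodies are stand-ins, not the real implementation.

```python
# Illustrative propagation chain for graph_prompt_path. Each layer adds
# the parameter as Optional[str] = None and forwards it unchanged, so
# the feature stays backward compatible when the argument is omitted.
from typing import Optional


class CorpusBuilderExecutor:
    def __init__(self, task_getter):
        self.task_getter = task_getter

    async def build_corpus(
        self,
        chunk_size: int = 1024,
        chunker=None,
        graph_prompt_path: Optional[str] = None,
    ):
        return await self.run_cognee(chunk_size, chunker, graph_prompt_path)

    async def run_cognee(
        self,
        chunk_size,
        chunker,
        graph_prompt_path: Optional[str] = None,
    ):
        # The parameter reaches the task getter unchanged.
        return await self.task_getter(
            chunk_size=chunk_size,
            chunker=chunker,
            graph_prompt_path=graph_prompt_path,
        )
```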
Actionable comments posted: 0
🧹 Nitpick comments (2)
cognee/tasks/graph/extract_graph_from_data.py (1)
2-2: Unused Import: Optional from typing

The imported `Optional` is not used anywhere in the file. To clean up the code and improve maintainability, consider removing `Optional` if it is not needed.

🧰 Tools
🪛 Ruff (0.8.2)

2-2: `typing.Optional` imported but unused

Remove unused import: `typing.Optional`

(F401)
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1)
1-1: Remove unused import `Optional`.

The `Optional` import from `typing` is not being used in this file and should be removed.

```diff
-from typing import Type, Optional
+from typing import Type
```

🧰 Tools
🪛 Ruff (0.8.2)

1-1: `typing.Optional` imported but unused

Remove unused import: `typing.Optional`

(F401)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cognee/infrastructure/llm/config.py (2 hunks)
- cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1 hunks)
- cognee/tasks/graph/extract_graph_from_data.py (1 hunks)
🧰 Additional context used
🧬 Code Definitions (1)
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (1)
cognee/infrastructure/llm/config.py (1)
get_llm_config(92-93)
🪛 Ruff (0.8.2)
cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py
1-1: typing.Optional imported but unused
Remove unused import: typing.Optional
(F401)
cognee/tasks/graph/extract_graph_from_data.py
2-2: typing.Optional imported but unused
Remove unused import: typing.Optional
(F401)
⏰ Context from checks skipped due to timeout of 90000ms (33)
- GitHub Check: Test on macos-15
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: Test on macos-13
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: chromadb test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: windows-latest
- GitHub Check: docker-compose-test
- GitHub Check: Build Cognee Backend Docker App Image
- GitHub Check: run_simple_example_test
🔇 Additional comments (4)
cognee/infrastructure/llm/config.py (2)
18-18: The addition of `graph_prompt_path` enhances configuration flexibility.

Adding this new attribute to the `LLMConfig` class allows for customizable graph generation prompts, aligning well with the PR's objective of adding graph creation prompts.

87-87: Good implementation: `graph_prompt_path` properly included in the serialized config.

The `to_dict()` method has been correctly updated to include the new attribute, ensuring it's accessible when the configuration is converted to a dictionary format.

cognee/modules/data/extraction/knowledge_graph/extract_content_graph.py (2)

5-5: LGTM! Import added for the configuration retrieval.

This import allows the function to access the LLM configuration, which is needed to get the customized graph prompt path.

10-13: Well-implemented dynamic prompt path integration.

This change improves flexibility by retrieving the prompt path from the configuration instead of hardcoding it. The refactoring allows for different graph generation prompts to be used based on configuration, which supports the PR's objective of adding graph creation prompts to the project.
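The config-driven resolution described here can be sketched as follows. `LLMConfig`, `get_llm_config`, and `resolve_prompt_path` are simplified stand-ins so the example is self-contained; only the precedence order (explicit argument, then config value, then the hardcoded default) reflects what the review describes.

```python
# Sketch of prompt-path resolution: an explicit override wins, then the
# configured graph_prompt_path, then the default prompt file name.
from typing import Optional


class LLMConfig:
    # Simplified stand-in for the real config class; the new attribute
    # defaults to None so existing deployments keep the default prompt.
    graph_prompt_path: Optional[str] = None


def get_llm_config() -> LLMConfig:
    return LLMConfig()


def resolve_prompt_path(override: Optional[str] = None) -> str:
    config = get_llm_config()
    return override or config.graph_prompt_path or "generate_graph_prompt.txt"
```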
Actionable comments posted: 0
🧹 Nitpick comments (1)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (1)
3-3: Remove unused import `Optional`.

The `Optional` type from the `typing` module is imported but not used anywhere in this file. To maintain clean imports, please remove it.

```diff
-from typing import List, Optional
+from typing import List
```

🧰 Tools
🪛 Ruff (0.8.2)

3-3: `typing.Optional` imported but unused

Remove unused import: `typing.Optional`

(F401)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- cognee/eval_framework/corpus_builder/run_corpus_builder.py (2 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
cognee/eval_framework/corpus_builder/run_corpus_builder.py
3-3: typing.Optional imported but unused
Remove unused import: typing.Optional
(F401)
⏰ Context from checks skipped due to timeout of 90000ms (33)
- GitHub Check: run_simple_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: Test on macos-15
- GitHub Check: Test on macos-13
- GitHub Check: run_simple_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_dynamic_steps_example_test / test
- GitHub Check: Test on macos-15
- GitHub Check: run_notebook_test / test
- GitHub Check: run_eval_framework_test / test
- GitHub Check: run_multimedia_example_test / test
- GitHub Check: run_notebook_test / test
- GitHub Check: run_networkx_metrics_test / test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: test
- GitHub Check: Test on macos-13
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: test
- GitHub Check: chromadb test
- GitHub Check: Test on ubuntu-22.04
- GitHub Check: windows-latest
- GitHub Check: test
- GitHub Check: test
- GitHub Check: test
- GitHub Check: docker-compose-test
- GitHub Check: run_simple_example_test
- GitHub Check: Build Cognee Backend Docker App Image
🔇 Additional comments (1)
cognee/eval_framework/corpus_builder/run_corpus_builder.py (1)
36-41: LGTM! Good function signature formatting.

The improved formatting with parameters on separate lines enhances readability, especially for functions with multiple parameters. This change follows good Python style guidelines.
Description
DCO Affirmation
I affirm that all code in every commit of this pull request conforms to the terms of the Topoteretes Developer Certificate of Origin.