Merged
11 changes: 6 additions & 5 deletions docs/templates.md
@@ -3,10 +3,11 @@
!!! tip "cognee uses tasks grouped into pipelines to populate graph and vector stores"


-Cognee uses tasks grouped into pipelines to populate graph and vector stores. These tasks are designed to analyze and enrich your data, improving the answers generated by Large Language Models (LLMs).
+Cognee organizes tasks into pipelines that populate graph and vector stores. These tasks analyze and enrich data, enhancing the quality of answers produced by Large Language Models (LLMs).

+This section provides a template to help you structure your data and build pipelines. \
+These tasks serve as a starting point for using Cognee to create reliable LLM pipelines.

-In this section, you'll find a template that you can use to structure your data and build pipelines.
-These tasks are designed to help you get started with cognee and build reliable LLM pipelines
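The task-and-pipeline pattern described above can be sketched as follows. This is a minimal illustration of the idea, not cognee's actual API: the task names (`chunk_text`, `classify_chunks`, `run_pipeline`) and the fixed-size chunking are assumptions made for the example.

```python
import asyncio

# Illustrative sketch: small async "tasks" chained into a pipeline whose
# output could populate downstream graph/vector stores. Task names and
# logic are hypothetical, not cognee's real implementation.

async def chunk_text(document: str) -> list[str]:
    # Naive fixed-size chunking, purely for illustration.
    size = 100
    return [document[i:i + size] for i in range(0, len(document), size)]

async def classify_chunks(chunks: list[str]) -> list[dict]:
    # A real enrichment task would call an LLM; here we just tag each
    # chunk with a category derived from its length.
    return [
        {"text": c, "category": "short" if len(c) < 50 else "long"}
        for c in chunks
    ]

async def run_pipeline(document: str) -> list[dict]:
    # Tasks run in sequence; each task's output feeds the next.
    chunks = await chunk_text(document)
    return await classify_chunks(chunks)

result = asyncio.run(run_pipeline("Cognee pipelines enrich data for LLMs."))
print(result)
```

The value of the pattern is that each task stays small and testable, and a pipeline is just a composition of tasks.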



@@ -15,7 +16,7 @@ These tasks are designed to help you get started with cognee and build reliable LLM pipelines

## Task 1: Category Extraction

-Data enrichment is the process of enhancing raw data with additional information to make it more valuable. This template is a sample task that extract categories from a document and populates a graph with the extracted categories.
+Data enrichment is the process of enhancing raw data with additional information to make it more valuable. This template is a sample task that extracts categories from a document and populates a graph with the extracted categories.

Let's go over the steps to use this template [full code provided here](https://github.com/topoteretes/cognee/blob/main/cognee/tasks/chunk_naive_llm_classifier/chunk_naive_llm_classifier.py):
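Before walking through the linked implementation, here is a deliberately simplified stand-in for the category-extraction step. Instead of an LLM classifier it uses a keyword lookup, and the "graph" is a plain adjacency dict; the `CATEGORY_KEYWORDS` table and function names are hypothetical, chosen only to show the shape of the task.

```python
# Simplified stand-in for category extraction: keyword lookup in place of
# an LLM call, and a dict in place of a real graph store. Names here are
# illustrative, not from the linked cognee source.
CATEGORY_KEYWORDS = {
    "databases": {"graph", "vector", "store"},
    "nlp": {"llm", "language", "model"},
}

def extract_categories(chunk: str) -> list[str]:
    # Assign every category whose keyword set overlaps the chunk's words.
    words = set(chunk.lower().split())
    return [cat for cat, kws in CATEGORY_KEYWORDS.items() if words & kws]

def populate_graph(chunks: list[str]) -> dict[str, list[str]]:
    # Map each category node to the chunks attached to it.
    graph: dict[str, list[str]] = {}
    for chunk in chunks:
        for category in extract_categories(chunk):
            graph.setdefault(category, []).append(chunk)
    return graph

graph = populate_graph([
    "graph and vector store basics",
    "large language model prompting",
])
print(graph)
```

The real task follows the same contract: take chunks in, emit category nodes and edges out.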

@@ -239,4 +240,4 @@ for dataset in datasets:
        if dataset_name in existing_datasets:
            awaitables.append(run_cognify_pipeline(dataset))
    return await asyncio.gather(*awaitables)
```
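The fan-out pattern in the snippet above — one pipeline coroutine per known dataset, awaited together — can be run end to end with stubs. `run_cognify_pipeline` below is a placeholder coroutine, not cognee's implementation, and the dataset names are invented for the example.

```python
import asyncio

# Sketch of the gather-per-dataset pattern from the diff above.
# run_cognify_pipeline is a stub standing in for the real pipeline.

existing_datasets = {"docs", "code"}

async def run_cognify_pipeline(dataset: str) -> str:
    await asyncio.sleep(0)  # stand-in for real pipeline work
    return f"processed {dataset}"

async def main(datasets: list[str]) -> list[str]:
    awaitables = []
    for dataset in datasets:
        # Skip datasets that were never registered, mirroring the
        # existing_datasets check in the snippet above.
        if dataset in existing_datasets:
            awaitables.append(run_cognify_pipeline(dataset))
    return await asyncio.gather(*awaitables)

results = asyncio.run(main(["docs", "unknown", "code"]))
print(results)
```

`asyncio.gather` preserves input order, so results line up with the datasets that passed the membership check.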