Commit 501d036

Itz-Antaripa authored and FLyLeaf-coder committed
AWS Bedrock Integration and spell checks (mem0ai#3124)
1 parent 2852daa commit 501d036

File tree: 11 files changed (+162, -29 lines)


docs/changelog.mdx

Lines changed: 2 additions & 2 deletions
@@ -496,7 +496,7 @@ mode: "wide"
 
 <Update label="2025-04-28" description="v2.1.20">
 **Improvements:**
-- **Client:** Fixed `organizationId` and `projectId` being asssigned to default in `ping` method
+- **Client:** Fixed `organizationId` and `projectId` being assigned to default in `ping` method
 </Update>
 
 <Update label="2025-04-22" description="v2.1.19">
@@ -555,7 +555,7 @@ mode: "wide"
 
 <Update label="2025-03-29" description="v2.1.13">
 **Improvements:**
-- **Introuced `ping` method to check if API key is valid and populate org/project id**
+- **Introduced `ping` method to check if API key is valid and populate org/project id**
 </Update>
 
 <Update label="2025-03-29" description="AI SDK v1.0.0">

docs/docs.json

Lines changed: 1 addition & 0 deletions
@@ -261,6 +261,7 @@
 "integrations/livekit",
 "integrations/pipecat",
 "integrations/elevenlabs",
+"integrations/aws-bedrock",
 "integrations/flowise",
 "integrations/langchain-tools",
 "integrations/agentops",

docs/examples/ai_companion.mdx

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ class Companion:
 check_prompt = f"""
 Analyze the given input and determine whether the user is primarily:
 1) Talking about themselves or asking for personal advice. They may use words like "I" for this.
-2) Inquiring about the AI companions's capabilities or characteristics They may use words like "you" for this.
+2) Inquiring about the AI companion's capabilities or characteristics They may use words like "you" for this.
 
 Respond with a single word:
 - 'user' if the input is focused on the user
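The hunk above belongs to a routing prompt that asks the model for a single-word answer. Purely as a hedged illustration (the OpenAI client, model name, and helper function below are assumptions, not part of the example's actual class), the classification step could be wired up like this:

```python
# Hypothetical routing sketch: send the check_prompt from the hunk above plus the
# user's input to a chat model and return the single word the prompt asks for.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify_focus(check_prompt: str, user_input: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": check_prompt},  # prompt from the diff above
            {"role": "user", "content": user_input},
        ],
    )
    # e.g. 'user' when the input is focused on the user, per the prompt's instructions
    return completion.choices[0].message.content.strip().lower()
```

Keeping the classifier's reply to one lowercase token makes the downstream branch (personal memory vs. questions about the companion) a simple string comparison.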

docs/examples/llama-index-mem0.mdx

Lines changed: 3 additions & 3 deletions
@@ -80,7 +80,7 @@ agent = FunctionCallingAgent.from_tools(
 ```
 
 Start the chat.
-<Note> The agent will use the Mem0 to store the relavant memories from the chat. </Note>
+<Note> The agent will use the Mem0 to store the relevant memories from the chat. </Note>
 
 Input
 ```python
@@ -139,7 +139,7 @@ Added user message to memory: I am feeling hungry, order me something and send m
 === LLM Response ===
 Please let me know your name and the dish you'd like to order, and I'll take care of it for you!
 ```
-<Note> The agent is not able to remember the past prefernces that user shared in previous chats. </Note>
+<Note> The agent is not able to remember the past preferences that user shared in previous chats. </Note>
 
 ### Using the agent WITH memory
 Input
@@ -171,4 +171,4 @@ Emailing... David
 === LLM Response ===
 I've ordered a pizza for you, and the bill has been sent to your email. Enjoy your meal! If there's anything else you need, feel free to let me know.
 ```
-<Note> The agent is able to remember the past prefernces that user shared and use them to perform actions. </Note>
+<Note> The agent is able to remember the past preferences that user shared and use them to perform actions. </Note>

docs/examples/multimodal-demo.mdx

Lines changed: 13 additions & 13 deletions
@@ -6,28 +6,28 @@ title: Multimodal Demo with Mem0
 
 Enhance your AI interactions with **Mem0**'s multimodal capabilities. Mem0 now supports image understanding, allowing for richer context and more natural interactions across supported AI platforms.
 
-> 🎉 Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)
+> Experience the power of multimodal AI! Test out Mem0's image understanding capabilities at [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai)
 
-## 🚀 Features
+## Features
 
-- **🖼️ Image Understanding**: Share and discuss images with AI assistants while maintaining context.
-- **🔍 Smart Visual Context**: Automatically capture and reference visual elements in conversations.
-- **🔗 Cross-Modal Memory**: Link visual and textual information seamlessly in your memory layer.
-- **📌 Cross-Session Recall**: Reference previously discussed visual content across different conversations.
-- **Seamless Integration**: Works naturally with existing chat interfaces for a smooth experience.
+- **Image Understanding**: Share and discuss images with AI assistants while maintaining context.
+- **Smart Visual Context**: Automatically capture and reference visual elements in conversations.
+- **Cross-Modal Memory**: Link visual and textual information seamlessly in your memory layer.
+- **Cross-Session Recall**: Reference previously discussed visual content across different conversations.
+- **Seamless Integration**: Works naturally with existing chat interfaces for a smooth experience.
 
-## 📖 How It Works
+## How It Works
 
-1. **📂 Upload Visual Content**: Simply drag and drop or paste images into your conversations.
-2. **💬 Natural Interaction**: Discuss the visual content naturally with AI assistants.
-3. **📚 Memory Integration**: Visual context is automatically stored and linked with your conversation history.
-4. **🔄 Persistent Recall**: Retrieve and reference past visual content effortlessly.
+1. **Upload Visual Content**: Simply drag and drop or paste images into your conversations.
+2. **Natural Interaction**: Discuss the visual content naturally with AI assistants.
+3. **Memory Integration**: Visual context is automatically stored and linked with your conversation history.
+4. **Persistent Recall**: Retrieve and reference past visual content effortlessly.
 
 ## Demo Video
 
 <iframe width="700" height="400" src="https://www.youtube.com/embed/2Md5AEFVpmg?si=rXXupn6CiDUPJsi3" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
-## 🔥 Try It Out
+## Try It Out
 
 Visit [multimodal-demo.mem0.ai](https://multimodal-demo.mem0.ai) to experience Mem0's multimodal capabilities firsthand. Upload images and see how Mem0 understands and remembers visual context across your conversations.
 

docs/faqs.mdx

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ iconType: "solid"
 
 When an AI agent or LLM needs to access memories, it employs the `search` method. Mem0 conducts a comprehensive search across these data stores, retrieving relevant information from each.
 
-The retrieved memories can be seamlessly integrated into the LLM's prompt as required, enhancing the personalization and relevance of responses.
+The retrieved memories can be seamlessly integrated into the system prompt as required, enhancing the personalization and relevance of responses.
 </Accordion>
 
 <Accordion title="What are the key features of Mem0?">
@@ -23,7 +23,7 @@ iconType: "solid"
 - **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
 - **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
 - **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
-- **Save Costs**: Saves costs by adding relevent memories instead of complete transcripts to context window
+- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to context window
 </Accordion>
 
 <Accordion title="How Mem0 is different from traditional RAG?">

docs/features.mdx

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ iconType: "solid"
 - **Developer-Friendly API**: Offers a straightforward API for seamless integration into various applications.
 - **Platform Consistency**: Ensures consistent behavior and data across different platforms and devices.
 - **Managed Service**: Provides a hosted solution for easy deployment and maintenance.
-- **Save Costs**: Saves costs by adding relevent memories instead of complete transcripts to context window
+- **Save Costs**: Saves costs by adding relevant memories instead of complete transcripts to context window
 
 
 

docs/integrations/agno.mdx

Lines changed: 4 additions & 4 deletions
@@ -7,10 +7,10 @@ Integrate [**Mem0**](https://github.com/mem0ai/mem0) with [Agno](https://github.
 
 ## Overview
 
-1. 🧠 Store and retrieve memories from Mem0 within Agno agents
-2. 🖼️ Support for multimodal interactions (text and images)
-3. 🔍 Semantic search for relevant past conversations
-4. 🌐 Personalized responses based on user history
+1. Store and retrieve memories from Mem0 within Agno agents
+2. Support for multimodal interactions (text and images)
+3. Semantic search for relevant past conversations
+4. Personalized responses based on user history
 
 ## Prerequisites
 
docs/integrations/aws-bedrock.mdx

Lines changed: 132 additions & 0 deletions
@@ -0,0 +1,132 @@
+---
+title: AWS Bedrock
+---
+
+<Snippet file="security-compliance.mdx" />
+
+This integration demonstrates how to use **Mem0** with **AWS Bedrock** and **Amazon OpenSearch Service (AOSS)** to enable persistent, semantic memory in intelligent agents.
+
+## Overview
+
+In this guide, you'll:
+
+1. Configure AWS credentials to enable Bedrock and OpenSearch access
+2. Set up the Mem0 SDK to use Bedrock for embeddings and LLM
+3. Store and retrieve memories using OpenSearch as a vector store
+4. Build memory-aware applications with scalable cloud infrastructure
+
+## Prerequisites
+
+- AWS account with access to:
+  - Bedrock foundation models (e.g., Titan, Claude)
+  - OpenSearch Service with a configured domain
+- Python 3.8+
+- Valid AWS credentials (via environment or IAM role)
+
+## Setup and Installation
+
+Install required packages:
+
+```bash
+pip install mem0ai boto3 opensearch-py
+```
+
+Set environment variables:
+
+Be sure to configure your AWS credentials using environment variables, IAM roles, or the AWS CLI.
+
+```python
+import os
+
+os.environ['AWS_REGION'] = 'us-west-2'
+os.environ['AWS_ACCESS_KEY_ID'] = 'AKIA...'
+os.environ['AWS_SECRET_ACCESS_KEY'] = 'AS...'
+```
+
+## Initialize Mem0 Integration
+
+Import necessary modules and configure Mem0:
+
+```python
+import boto3
+from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth
+from mem0.memory.main import Memory
+
+region = 'us-west-2'
+service = 'aoss'
+credentials = boto3.Session().get_credentials()
+auth = AWSV4SignerAuth(credentials, region, service)
+
+config = {
+    "embedder": {
+        "provider": "aws_bedrock",
+        "config": {
+            "model": "amazon.titan-embed-text-v2:0"
+        }
+    },
+    "llm": {
+        "provider": "aws_bedrock",
+        "config": {
+            "model": "anthropic.claude-3-5-haiku-20241022-v1:0",
+            "temperature": 0.1,
+            "max_tokens": 2000
+        }
+    },
+    "vector_store": {
+        "provider": "opensearch",
+        "config": {
+            "collection_name": "mem0",
+            "host": "your-opensearch-domain.us-west-2.es.amazonaws.com",
+            "port": 443,
+            "http_auth": auth,
+            "embedding_model_dims": 1024,
+            "connection_class": RequestsHttpConnection,
+            "pool_maxsize": 20,
+            "use_ssl": True,
+            "verify_certs": True
+        }
+    }
+}
+
+# Initialize memory system
+m = Memory.from_config(config)
+```
+
+## Memory Operations
+
+Use Mem0 with your Bedrock-powered LLM and OpenSearch storage backend:
+
+```python
+# Store conversational context
+messages = [
+    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
+    {"role": "assistant", "content": "How about a thriller?"},
+    {"role": "user", "content": "I prefer sci-fi."},
+    {"role": "assistant", "content": "Noted! I'll suggest sci-fi movies next time."}
+]
+
+m.add(messages, user_id="alice", metadata={"category": "movie_recommendations"})
+
+# Search for memory
+relevant = m.search("What kind of movies does Alice like?", user_id="alice")
+
+# Retrieve all user memories
+all_memories = m.get_all(user_id="alice")
+```
+
+## Key Features
+
+1. **Serverless Memory Embeddings**: Use Titan or other Bedrock models for fast, cloud-native embeddings
+2. **Scalable Vector Search**: Store and retrieve vectorized memories via OpenSearch
+3. **Seamless AWS Auth**: Uses AWS IAM or environment variables to securely authenticate
+4. **User-specific Memory Spaces**: Memories are isolated per user ID
+5. **Persistent Memory Context**: Maintain and recall history across sessions
+
+## Help
+
+- [AWS Bedrock Documentation](https://docs.aws.amazon.com/bedrock/)
+- [Amazon OpenSearch Service Docs](https://docs.aws.amazon.com/opensearch-service/)
+- [Mem0 Platform](https://app.mem0.ai)
+
+<Snippet file="get-help.mdx" />
+
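The new page stops at storing and searching memories. As a hedged illustration of the next step (not part of the commit), the sketch below feeds the retrieved memories back into a Bedrock chat call through the boto3 `bedrock-runtime` client's Converse API; the prompt wording, variable names, and the reuse of `m` from the config block above are assumptions.

```python
# Hypothetical follow-on to the page's "Memory Operations" block: ground a reply
# in the memories Mem0 just retrieved. Assumes `m = Memory.from_config(config)`
# from the snippet above and valid AWS credentials in the environment.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

relevant = m.search("What kind of movies does Alice like?", user_id="alice")
# Depending on the mem0 version, search() returns a list or a {"results": [...]} dict.
hits = relevant["results"] if isinstance(relevant, dict) else relevant
memory_context = "\n".join(hit["memory"] for hit in hits)

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-5-haiku-20241022-v1:0",  # same model as the LLM config above
    system=[{"text": f"Relevant memories about this user:\n{memory_context}"}],
    messages=[{"role": "user", "content": [{"text": "Suggest a movie for tonight."}]}],
    inferenceConfig={"temperature": 0.1, "maxTokens": 500},
)
print(response["output"]["message"]["content"][0]["text"])
```

Placing the memories in the `system` block keeps the user turn clean and mirrors the "integrated into the system prompt" wording adopted in the faqs.mdx change above.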

docs/integrations/langchain.mdx

Lines changed: 2 additions & 2 deletions
@@ -64,11 +64,11 @@ Create functions to handle context retrieval, response generation, and addition
 def retrieve_context(query: str, user_id: str) -> List[Dict]:
     """Retrieve relevant context from Mem0"""
     memories = mem0.search(query, user_id=user_id)
-    seralized_memories = ' '.join([mem["memory"] for mem in memories])
+    serialized_memories = ' '.join([mem["memory"] for mem in memories])
     context = [
         {
             "role": "system",
-            "content": f"Relevant information: {seralized_memories}"
+            "content": f"Relevant information: {serialized_memories}"
         },
         {
             "role": "user",