🎯 Learning Journey: Course 1 (Fundamentals) → Course 2 (Advanced MCP, Hooks, Session Management) → Course 3 (Multi-Agent Systems) → Course 4 (Production Deployment)
A comprehensive hands-on learning path for AI agent development using the Strands Agents framework. Build intelligent, multi-agent systems from basic concepts to production-ready implementations with advanced capabilities. Free companion video series for every course are available at Analytics Vidhya.
This repository contains four progressive courses that take you from fundamentals to advanced production-ready implementations:
Foundation course covering basic agent creation, model providers, AWS integration, MCP basics, agent-to-agent communication, and observability fundamentals.
Video Series available here for free enrollment.
Advanced course focusing on production-ready implementations, advanced tool integration, persistent memory systems, hooks, session management, and enterprise features.
Video Series available here for free enrollment.
Develop intelligent multi-agent systems that coordinate, communicate, and solve complex problems using swarm, graph-based and agents as tools patterns with Strands Agents.
Video Series available here for free enrollment.
Production deployment course covering best practices for running agents in production environments using Amazon Bedrock AgentCore Runtime for serverless scaling and management.
Total Learning Time: ~5-6 hours across all courses
Location: course-1/ directory
Learn the complete journey of AI agent development, from basic usage to advanced topics like agent-to-agent (A2A) communication and observability.
- Strands Agents Framework - Build intelligent AI agents
- Model Context Protocol (MCP) - Enable tool integration
- Agent-to-Agent Communication - Create multi-agent systems
- Observability & Evaluation - Monitor and improve agent performance
| 🧪 Lab | 📚 What You'll Learn | ⏱️ Time | 📊 Level |
|---|---|---|---|
| Lab 1: Strands Agent Basics | Agent initialization, system prompts, HTTP tools | 15 min | |
| Lab 2: Model Providers | Anthropic & Amazon Bedrock integration | 18 min | |
| Lab 3: AWS Service Integration | AWS service tool usage (S3, DynamoDB) | 15 min | |
| Lab 4: MCP & Tools | Model Context Protocol, tool creation | 14 min | |
| Lab 5: A2A Communication | Multi-agent systems & communication | 11 min | |
| Lab 6: Observability | LangFuse, RAGAS, performance monitoring | 21 min | |
Files: basic-use.py, http-tool-use.py, system-prompt-use.py
Learn the fundamentals of creating and using Strands agents:
- Basic agent initialization and usage
- System prompt customization
- HTTP tool integration
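The three ideas above can be sketched without any SDK. The class below is a pedagogical stand-in, not the Strands API: the real `Agent` class delegates to an LLM, while `MiniAgent` fakes the model so the flow (system prompt, conversation history, tool dispatch) is visible and runnable anywhere.

```python
# Pedagogical sketch only: the real Strands Agent lets the LLM decide when
# to call a tool; here a substring check stands in for that decision.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MiniAgent:
    system_prompt: str
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    history: list[str] = field(default_factory=list)

    def __call__(self, user_input: str) -> str:
        self.history.append(user_input)
        # Dispatch to a registered tool if the input names one.
        for name, fn in self.tools.items():
            if name in user_input:
                return fn(user_input)
        return f"[{self.system_prompt}] echo: {user_input}"

def http_get(query: str) -> str:
    """Stand-in for an HTTP tool; a real agent would issue the request."""
    return "fetched: " + query

agent = MiniAgent("You are a helpful assistant", tools={"http_get": http_get})
print(agent("please http_get example.com"))  # dispatches to the tool
```

The lab's `basic-use.py` shows the same shape with the actual Strands `Agent` class and an LLM in the loop.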
Files: anthropic-model-provider.py, anthropic-pet-breed-agent.py, bedrock-default-config.py, bedrock-detailed-config.py
Explore different model providers and configuration options:
- Anthropic Claude model integration
- Amazon Bedrock model configuration
Note: Some portions of this lab require a pre-existing AWS account for the 'generate_image' tool.
Files: aws-tool-use.py
Learn to integrate AWS services with your Strands agents:
- Using the `use_aws` tool
- Examples with Amazon S3 and Amazon DynamoDB
Note: The code in this lab requires a pre-existing AWS account to properly utilize the 'use_aws' tool. An example Amazon DynamoDB Table is used to generate results when querying a table.
Files: mcp-and-tools.ipynb, mcp_calulator.py
Deep dive into the Model Context Protocol:
- MCP server creation
- Tool definition and usage
- Calculator and Weather agents examples
- Interactive Jupyter notebook tutorial
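As a taste of the tool-definition pattern the notebook walks through, here is a simplified stand-in for a decorator in the spirit of Strands' `@tool`: it records each function's name, docstring, and parameters in a registry that an agent or MCP server could expose to a model. The registry shape is illustrative, not the SDK's.

```python
# Simplified tool registry: the real decorator builds a full tool spec
# for the model from the function signature and docstring.
import inspect

TOOL_REGISTRY: dict[str, dict] = {}

def tool(fn):
    """Register a function with a minimal tool spec (doc, params, callable)."""
    TOOL_REGISTRY[fn.__name__] = {
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(inspect.signature(fn).parameters),
        "fn": fn,
    }
    return fn

@tool
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

@tool
def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

# An agent (or MCP server) could now advertise the registry to a model:
print({name: meta["description"] for name, meta in TOOL_REGISTRY.items()})
print(TOOL_REGISTRY["add"]["fn"](2, 3))  # 5
```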
Files: a2a-communication.ipynb, run_a2a_system.py, employee_data.py, employee-agent.py, hr-agent.py
Build multi-agent systems with inter-agent communication:
- A2A communication patterns
- Employee/HR agent system example
- MCP server for data sharing
- REST API integration
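The employee/HR example boils down to one agent delegating a structured request to another and acting on the reply. This sketch replaces the lab's HTTP/REST transport with a direct method call (the data and message fields are made up for illustration), so only the A2A delegation pattern remains.

```python
# A2A in miniature: HRAgent owns no data; it asks EmployeeAgent.
EMPLOYEES = {"E1": {"name": "Ada", "dept": "Engineering"}}  # sample data

class EmployeeAgent:
    """Owns the employee data and answers queries about it."""
    def handle(self, message: dict) -> dict:
        emp = EMPLOYEES.get(message["employee_id"])
        if emp is None:
            return {"status": "error", "reason": "not found"}
        return {"status": "ok", "employee": emp}

class HRAgent:
    """Delegates data lookups to the employee agent."""
    def __init__(self, employee_agent: EmployeeAgent):
        self.employee_agent = employee_agent

    def lookup_department(self, employee_id: str) -> str:
        reply = self.employee_agent.handle({"employee_id": employee_id})
        if reply["status"] != "ok":
            return "unknown"
        return reply["employee"]["dept"]

hr = HRAgent(EmployeeAgent())
print(hr.lookup_department("E1"))  # Engineering
```

In the lab, each agent runs as its own server and the `handle` call becomes a REST request.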
Files: observability-with-langfuse-and-evaluation-with-ragas.ipynb, restaurant-data/
Monitor and evaluate agent performance:
- Restaurant recommendation agent example
- LangFuse integration for observability
- RAGAS evaluation framework
- Performance metrics and tracing
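The core idea behind tracing can be shown with a plain decorator: wrap each agent call so it records a span (name, duration, output size). LangFuse does this for real with SDK instrumentation and a hosted UI; this stand-in just collects spans in a list.

```python
# Minimal tracing sketch: each call appends one span dict to TRACE.
import functools
import time

TRACE: list[dict] = []

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "span": fn.__name__,
            "seconds": time.perf_counter() - start,
            "output_chars": len(str(result)),
        })
        return result
    return wrapper

@traced
def recommend_restaurant(cuisine: str) -> str:
    # Stand-in for the lab's restaurant recommendation agent.
    return f"Try the {cuisine} place downtown."

recommend_restaurant("Thai")
print(TRACE[0]["span"], TRACE[0]["output_chars"])
```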
Location: course-2/ directory
A comprehensive advanced course for building production-ready AI agents using the Strands Agents SDK. This repository contains 6 progressive labs that teach advanced capabilities including tool integration, memory persistence, Model Context Protocol (MCP), and comprehensive observability.
- Strands Agents SDK - Advanced agent architecture and lifecycle management
- Model Context Protocol (MCP) - Standardized tool and service integration
- Multi-Provider Configuration - Amazon Bedrock, Anthropic, OpenAI, and Ollama
- Advanced Processing - Hooks, session management, and conversation strategies
- Memory Systems - Long-term persistent memory with FAISS, OpenSearch, and Mem0
- Enterprise Features - Observability, metrics analysis, and performance optimization
| 🧪 Lab | 📚 What You'll Learn | ⏱️ Time | 📊 Level |
|---|---|---|---|
| Lab 1: Overview of Strands Agents | Fundamental agentic AI concepts, agent lifecycle | 13 min | |
| Lab 2: Model Providers | Multi-provider configuration, metrics analysis | 12 min | |
| Lab 3: Advanced Response Processing | Hooks, lifecycle management, async patterns | 14 min | |
| Lab 4: Tools & MCP Integration | Custom tools, MCP servers, self-extending agents | 19 min | |
| Lab 5: Session Management | Conversation strategies, state persistence | 11 min | |
| Lab 6: Memory Persistent Agents | Long-term memory, FAISS, OpenSearch, Mem0 | 15 min | |
Files: first_agent.py
Learn fundamental agentic AI concepts and build your first Strands agent:
- Basic agent creation with default configuration (no API keys required)
- Core agent components and execution flow
- Agent result examination (message, metrics, state, stop reasons)
- Dynamic model configuration and system prompt modification
- Conversation history management and message clearing
Files: anthropic_model.py, bedrock_model.py, ollama_model.py, openai_model.py
Configure agents across multiple LLM providers for flexibility and cost optimization:
- Model architecture overview and provider-specific parameters
- Bedrock model setup with structured output capabilities
- Anthropic model configuration with thinking mode
- Ollama local deployment and OpenAI integration
- Metrics analysis and performance monitoring
Files: async_example.py, hook_example_1.py, hook_example_2.py
Implement custom logic to intercept and modify agent behavior at lifecycle points:
- Event-driven hook system and lifecycle management
- Before/after event handling and agent modifications
- Async iterators, callback handlers, and retry logic
- Tool hook examples and precision parameter setup
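The before/after hook pattern can be sketched as a small event registry: callbacks registered for lifecycle events get a chance to observe or transform the payload before and after the model runs. The event names here are invented for the sketch; the Strands hook system defines its own typed events.

```python
# Sketch of an event-driven hook registry. A hook may return a new payload
# or None (meaning "no change").
from collections import defaultdict
from typing import Callable

class HookRegistry:
    def __init__(self):
        self._hooks: dict[str, list[Callable]] = defaultdict(list)

    def add(self, event: str, hook: Callable) -> None:
        self._hooks[event].append(hook)

    def fire(self, event: str, payload: dict) -> dict:
        for hook in self._hooks[event]:
            payload = hook(payload) or payload
        return payload

hooks = HookRegistry()
hooks.add("before_invocation", lambda p: {**p, "prompt": p["prompt"].strip()})
hooks.add("after_invocation", lambda p: {**p, "logged": True})

payload = hooks.fire("before_invocation", {"prompt": "  hello  "})
# ... the model call would happen here ...
payload = hooks.fire("after_invocation", {**payload, "response": "hi"})
print(payload)
```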
Files: mcp_integration.py, self_extending_example.py, tools/
Extend agent capabilities with custom tools and external service integration:
- Built-in tools from strands-agents-tools library
- Custom tool creation using @tool decorator
- MCP server configuration for AWS Documentation and Pricing
- Self-extending agents and meta tooling capabilities
- Proper error handling and security implementation
Files: session_example.py, verify_session.py
Manage conversation state and context effectively across interactions:
- Context window challenges and management strategies
- Three conversation manager approaches (Null, SlidingWindow, Summarizing)
- Session state persistence and user isolation
- File-based and Amazon S3 session storage options
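The sliding-window strategy is the simplest of the three to picture: keep only the most recent N messages in context so the conversation never outgrows the model's window. Strands ships this behavior in its conversation managers; the class below shows just the trimming logic as a stand-in.

```python
# Sketch of sliding-window context management: oldest messages are
# dropped once the window size is exceeded.
class SlidingWindowManager:
    def __init__(self, window_size: int):
        self.window_size = window_size
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        if len(self.messages) > self.window_size:
            self.messages = self.messages[-self.window_size:]

manager = SlidingWindowManager(window_size=3)
for i in range(5):
    manager.add("user", f"message {i}")
print([m["content"] for m in manager.messages])  # last three messages only
```

The Null and Summarizing strategies differ only in what happens to the overflow: Null keeps everything, while Summarizing compresses older messages instead of discarding them.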
Files: memory_example.py
Build agents with long-term memory capabilities across conversations:
- Memory backends integration (FAISS, OpenSearch, Mem0)
- Web search integration with DuckDuckGo
- Memory storage, retrieval, and relevance scoring
- Amazon Bedrock Knowledge Bases integration
- Retention policies and privacy controls
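The store/retrieve/score loop at the heart of the lab can be shown without any backend. Real backends (FAISS, OpenSearch, Mem0) score relevance with vector similarity over embeddings; this stand-in uses plain word overlap so the flow is visible with no services or keys.

```python
# Toy long-term memory: relevance is scored by word overlap instead of
# vector similarity, purely to make the retrieval flow runnable.
class SimpleMemory:
    def __init__(self):
        self.entries: list[str] = []

    def store(self, text: str) -> None:
        self.entries.append(text)

    def retrieve(self, query: str, top_k: int = 2) -> list[str]:
        q = set(query.lower().split())
        scored = [(len(q & set(e.lower().split())), e) for e in self.entries]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [e for score, e in scored[:top_k] if score > 0]

memory = SimpleMemory()
memory.store("user prefers vegetarian food")
memory.store("user lives in Seattle")
memory.store("project deadline is Friday")
print(memory.retrieve("what food does the user like"))
```

An agent would prepend the retrieved entries to its context before answering, which is exactly what `memory_example.py` does with a real backend.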
Location: Strands Samples
Develop intelligent multi-agent systems that coordinate, communicate, and solve complex problems using swarm, graph-based and agents as tools patterns with Strands Agents.
| 🧪 Lab | 📚 What You'll Learn | ⏱️ Time | 📊 Level |
|---|---|---|---|
| Lab 1: Multi-Agent Systems with Swarm Intelligence | Use a Jupyter notebook to deep dive into the Swarm multi-agent pattern | 30 min | |
| Lab 2: Multi-Agent Systems with Agent Graph | Use a Jupyter notebook to deep dive into the Graph multi-agent pattern | 25 min | |
| Lab 3: Multi-Agent System with Agents as Tools | Use a Jupyter notebook to deep dive into the Agents as Tools multi-agent pattern | 20 min | |
Location: course-4/ directory
Learn to deploy production-ready AI agents using Amazon Bedrock AgentCore Runtime. This course focuses on serverless deployment, scaling, and management of agents in production environments.
- Production Best Practices - Understand differences between development and production agent deployment
- Amazon Bedrock AgentCore - Comprehensive overview of AgentCore services and components
- Serverless Deployment - Deploy agents with auto-scaling and session management
- Production Operations - Monitor, troubleshoot, and maintain production agent systems
| 🧪 Lab | 📚 What You'll Learn | ⏱️ Time | 📊 Level |
|---|---|---|---|
| Lab 1: Operating Agents in Production | Production best practices, development vs production differences | 9 min | |
| Lab 2: Introduction to Amazon Bedrock AgentCore | Amazon Bedrock AgentCore fundamentals, service component overview | 12 min | |
| Lab 3: Building agents with Amazon Bedrock AgentCore | Hands-on deployment with AgentCore Runtime | 20 min | |
Understand the best practices for running agents in a production setting and how that differs from local development.
Understand the fundamentals of Amazon Bedrock AgentCore and its components.
Files: my_agent.py, invoke_agent.py, requirements.txt
Hands-on deployment of a production-ready calculator agent:
- Agent creation with Strands Agents framework
- AgentCore Runtime deployment and configuration
- Testing deployed agents with session management
- Production invocation patterns and best practices
Note: This lab requires an AWS account with appropriate permissions and model access enabled in Amazon Bedrock console.
| 🔧 Technology | 🎯 Purpose | 📖 Documentation |
|---|---|---|
| Strands Agents | AI agent framework | Docs |
| Anthropic Claude | Alternative LLM provider | Docs |
| Amazon Bedrock | AWS managed LLM service | Docs |
| OpenAI | Alternative LLM provider | Docs |
| Ollama | Local model deployment | Docs |
| Model Context Protocol | Tool integration standard | Docs |
| LangFuse | Observability & tracing | Docs |
| RAGAS | Agent evaluation | Docs |
| Mem0 | Memory persistence | Docs |
| FAISS | Vector similarity search | Docs |
| OpenSearch | Search and analytics | Docs |
- Python 3.10+
- Virtual environment (recommended)
- API keys for at least one of:
- Anthropic Claude
- Amazon Bedrock
- For Lab 6: LangFuse account and API key
- For Labs 3, 5: AWS account with appropriate CLI configuration
- Completion of Course 1 (Labs 1-6) or equivalent knowledge
- Python 3.10+
- Virtual environment (recommended)
- Anthropic Claude API key (primary requirement) - Get from Anthropic Console
- Additional API keys for specific labs:
- Amazon Bedrock (for AWS integration labs)
- OpenAI (optional alternative)
- Mem0 (for Lab 6 memory persistence)
- Completion of Courses 1-2 (Labs 1-6) or equivalent knowledge
- Python 3.10+
- Virtual environment (recommended)
- AWS account with Anthropic Claude 3.7 enabled on Amazon Bedrock
- AWS IAM role with permissions to use Amazon Bedrock
- Completion of Courses 1-3 or equivalent knowledge
- AWS Account with appropriate permissions
- Python 3.10+
- AWS CLI configured with `aws configure`
- AWS Permissions: BedrockAgentCoreFullAccess policy
- Model Access: Anthropic Claude 3.5 Haiku enabled in Amazon Bedrock console
```bash
git clone https://github.com/aws-samples/sample-getting-started-with-strands-agents-course.git
cd sample-getting-started-with-strands-agents-course
```

```bash
# Create virtual environment
python -m venv .venv

# Activate (Linux/Mac)
source .venv/bin/activate

# Activate (Windows)
.venv\Scripts\activate
```

```bash
# For Course 1
pip install -r requirements.txt

# For Course 2
cd course-2
pip install -r requirements.txt

# For Course 4
cd course-4
pip install -r requirements.txt
```

Create a `.env` file in the root directory:
```bash
# Anthropic (recommended)
ANTHROPIC_API_KEY=your_anthropic_api_key

# AWS Bedrock (optional)
AWS_ACCESS_KEY_ID=your_aws_access_key
AWS_SECRET_ACCESS_KEY=your_aws_secret_key
AWS_DEFAULT_REGION=us-east-1

# LangFuse (for Lab 6)
LANGFUSE_PUBLIC_KEY=your_langfuse_public_key
LANGFUSE_SECRET_KEY=your_langfuse_secret_key
LANGFUSE_HOST=https://cloud.langfuse.com
```

Copy `.env.example` to `.env` in the `course-2/` directory:
```bash
# Required - Get from https://console.anthropic.com/
ANTHROPIC_API_KEY=sk-ant-your_key_here

# Optional - for specific labs only
AWS_ACCESS_KEY_ID=your_aws_key         # For Lab 4 MCP integration
AWS_SECRET_ACCESS_KEY=your_aws_secret  # For Lab 4 MCP integration
AWS_SESSION_TOKEN=your_aws_token       # For Lab 4 MCP integration
OPENAI_API_KEY=your_openai_key         # For Lab 2 model alternatives
MEM0_API_KEY=your_mem0_key             # For Lab 6 memory persistence
```

Labs 1-3: Python Scripts
```bash
cd Lab1
python basic-use.py
```

Lab 4: Interactive Notebook
```bash
cd Lab4
jupyter notebook mcp-and-tools.ipynb
```

Lab 5: Multi-Agent System
```bash
cd Lab5
jupyter notebook a2a-communication.ipynb
```

Lab 6: Observability
```bash
cd Lab6
jupyter notebook observability-with-langfuse-and-evaluation-with-ragas.ipynb
```

Lab 1: Agent Fundamentals (No API key required)
```bash
cd course-2/Lab1
python first_agent.py
```

Lab 2: Model Providers
```bash
cd course-2/Lab2
python anthropic_model.py
python bedrock_model.py
```

Lab 3: Hooks
```bash
cd course-2/Lab3
python hook_example_1.py
python async_example.py
```

Lab 4: MCP Integration
```bash
cd course-2/Lab4
python mcp_integration.py
```

Lab 5: Session Management
```bash
cd course-2/Lab5
python session_example.py
```

Lab 6: Memory Agents
```bash
cd course-2/Lab6
python memory_example.py
```

Lab 3: Production Deployment
```bash
cd course-4
python my_agent.py
```

Deploy to AgentCore Runtime
```bash
cd course-4
agentcore configure -e my_agent.py
agentcore launch
agentcore invoke '{"prompt": "What is 50 plus 30?"}'
```

| Issue | Solution |
|---|---|
| API Key Issues | Ensure all required API keys are set in your .env file or environment |
| Port Conflicts | Labs use ports 8000-8002, ensure they're available |
| Import Errors | Run pip install -r requirements.txt to install all dependencies |
| MCP Server Issues | Allow time for MCP servers to start before connecting clients |
| AWS Permissions | Verify your AWS credentials have necessary permissions for S3/DynamoDB |
| Issue | Solution |
|---|---|
| API Key Issues | Ensure ANTHROPIC_API_KEY is set correctly (should start with sk-ant-) |
| Import Errors | Run pip install -r requirements.txt in course-2 directory |
| AWS Credentials | Only needed for Lab 4 MCP integration - configure AWS CLI or environment |
| MCP Servers | Allow time for MCP servers to initialize before agent connections in Lab 4 |
| Memory Backends | Mem0 API key only required for Lab 6 memory persistence |
| Issue | Solution |
|---|---|
| AWS Permissions | Ensure BedrockAgentCoreFullAccess policy is attached to your user/role |
| Model Access | Enable Anthropic Claude 3.7 Sonnet in Amazon Bedrock console |
| Issue | Solution |
|---|---|
| AWS Permissions | Ensure BedrockAgentCoreFullAccess policy is attached to your user/role |
| Model Access | Enable Anthropic Claude 3.5 Haiku in Amazon Bedrock console |
| AgentCore CLI | Run pip install bedrock-agentcore-starter-toolkit if agentcore command not found |
| Deployment Failures | Check CloudWatch logs at /aws/bedrock-agentcore/runtimes/{agent-id}-DEFAULT |
| Session Issues | Ensure session IDs are 33+ characters for proper session management |
- Strands Agents Documentation
- Model Context Protocol Specification
- Anthropic Claude API
- Amazon Bedrock User Guide
- What is Amazon Bedrock AgentCore?
- AgentCore Runtime How It Works
- AgentCore Memory Guide
- AgentCore Gateway Documentation
- Programmatic Agent Invocation
- Building with Amazon Bedrock Workshop
- LangChain Embeddings with Bedrock
- Strands Agents Samples Repository
- Introducing Strands Agents
- Open Protocols for Agent Interoperability - Part 3
- Strands Agents SDK Technical Deep Dive
See CONTRIBUTING for more information.
This library is licensed under the MIT-0 License. See the LICENSE file.