A beginner-friendly, source-available AI coding agent for learning how agents work by building and running one yourself.
Quick Start • Features • Architecture • Testing • Docs • Contributing
Translations: 中文 • 日本語 • 한국어 • Español • Français
OpenAgent is a beginner-first AI coding agent project for people who are curious how modern agents work. You can run it locally, read every line of it, and learn by changing real code instead of studying abstract diagrams.
You type a message like "Create a REST API with authentication", and the agent:
- Reads your codebase to understand the context
- Plans an approach (optionally in read-only plan mode)
- Writes code, runs commands, creates files using tools
- Verifies its own work before finishing
- Reports back with results — all streamed in real time
```
You: "Add user authentication with JWT tokens"

Agent: [thinking] Let me explore the codebase first...
       [read_file] src/app.py — found the Flask app
       [read_file] requirements.txt — no auth libraries yet
       [bash] pip install PyJWT bcrypt
       [write_file] src/auth.py — JWT token generation
       [edit_file] src/app.py — added login/register routes
       [bash] python -m pytest tests/ — all 12 tests pass

Done! I've added JWT authentication with login and
register endpoints. Here's what I created: ...
```
(Available before March 20, 2026)
- User UI: https://openagent.walden.chat
- Developer UI: https://openagent-dev.walden.chat
This monorepo contains the OpenAgent runtime stack and the repo-governance files needed to publish and maintain it as a source-available project.
- Runtime projects: `agent-api/`, `agent-cli/`, `agent-ui/`, `agent-user-ui/`
- Repo operations: `.github/`, `docs/`, `README.md`, `CONTRIBUTING.md`, `LICENSE`, `SECURITY.md`, `CODE_OF_CONDUCT.md`
Most AI agent projects are either too abstract for beginners or too closed to learn from properly. OpenAgent is:
- Readable — the core loop is ~30 lines. No frameworks, no magic.
- Educational — built for beginners who want to learn agent architecture by running it, tracing it, and changing it.
- Complete — web UI, terminal CLI, streaming, tools, memory, teams, plan mode.
- Documented — includes contributor guidance, security policy, translations, and component-level technical references.
- LLM-independent — the core loop targets a shared `LLMClient` interface instead of a single model vendor.
- Extensible — add a new tool in 20 lines. Add or swap provider adapters without rewriting the loop.
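To illustrate the "new tool in 20 lines" idea, here is a sketch of a dict-based tool registry. The registry structure and the names `TOOL_REGISTRY`, `register_tool`, and `word_count` are illustrative assumptions, not OpenAgent's actual API; the real implementations live under `agent-api/src/agent_service/agent/tools/`.

```python
# Hypothetical sketch of a minimal tool registry; not the project's
# actual registration API.
TOOL_REGISTRY = {}

def register_tool(name, description, handler):
    """Make a tool callable by name from the agent loop."""
    TOOL_REGISTRY[name] = {"description": description, "handler": handler}

def word_count(args):
    """Example tool: count words in a text argument."""
    return str(len(args.get("text", "").split()))

register_tool("word_count", "Count words in a piece of text", word_count)

# The loop would dispatch a model-requested call roughly like this:
result = TOOL_REGISTRY["word_count"]["handler"]({"text": "hello agent world"})
print(result)  # prints "3"
```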
- Python 3.11+ (3.14 recommended)
- Credentials for your chosen LLM provider or compatible endpoint
OpenAgent is also published on PyPI:
- `openagent-core` — backend library
- `openagent-app` — terminal CLI
If you only want the packaged CLI instead of a monorepo checkout:
```shell
pip install openagent-app
openagent
```

```shell
# Clone your fork or local copy
git clone <your-fork-or-local-copy>
cd openagent

# Backend
cd agent-api
python -m venv .venv && source .venv/bin/activate
pip install -e .
cat > .env <<'EOF'
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=your-key-here
EOF
uvicorn agent_service.main:app --reload
```

```shell
# Developer Frontend (new terminal)
cd /path/to/openagent/agent-ui
python3 -m http.server 3500
# Open http://localhost:3500
```

```shell
# Same backend as above, then in a new terminal:
cd /path/to/openagent/agent-user-ui
python3 -m http.server 3501
# Open http://localhost:3501
```

The User UI is a lighter, user-facing interface with a Forest Canopy light theme, activity indicators instead of raw tool blocks, and simplified approval dialogs. Both UIs connect to the same backend.
For deployed environments, both web UIs default to the current page origin as their API and WebSocket base. In practice, this means a reverse-proxied setup like https://your-ui.example.com can talk to the backend on the same host without setting localStorage.API_BASE_URL. For local development on localhost or 127.0.0.1, they still default to http://localhost:8000.
```shell
cd openagent
python -m venv .venv && source .venv/bin/activate
pip install -e agent-api -e agent-cli
openagent
echo "Explain how binary search works" | openagent --no-approval
```

| Feature | Description |
|---|---|
| Agentic loop | While-loop that streams LLM responses, executes tools, and repeats until done |
| 15+ built-in tools | Bash, file read/write/edit, think, compact, skills, tasks, background commands |
| Streaming | Real-time token-by-token output via WebSocket |
| Tool approval | Optional human-in-the-loop confirmation before dangerous operations |
| Plan mode | Read-only exploration phase — agent designs a plan before making changes |
| Agent-initiated planning | Agent autonomously enters plan mode for complex tasks |
| Sub-agents | Spawn focused child agents (explore, code, plan, research) for subtasks |
| Agent teams | Multiple named agents working in parallel with async message passing |
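As a sketch of the tool-approval feature above, a minimal human-in-the-loop gate might look like the following. The `SAFE_TOOLS` allowlist and the functions `needs_approval` and `run_tool` are illustrative names, not the project's actual API; the real flow is configurable (see the `--no-approval` CLI flag).

```python
# Illustrative approval gate: confirm before potentially destructive tools.
SAFE_TOOLS = {"read_file", "think"}  # hypothetical "always safe" set

def needs_approval(tool_name: str) -> bool:
    """Only tools outside the safe set require confirmation."""
    return tool_name not in SAFE_TOOLS

def run_tool(tool_name, handler, args, approve=input):
    """Ask the human before running a gated tool; otherwise run directly."""
    if needs_approval(tool_name):
        answer = approve(f"Allow {tool_name}? [y/N] ")
        if answer.strip().lower() != "y":
            return "Tool call rejected by user."
    return handler(args)
```

Injecting `approve` as a callable keeps the gate testable without a terminal attached.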
| Feature | Description |
|---|---|
| 3-layer compaction | Micro-compact, auto-compact with transcripts, manual compact tool |
| Persistent memory | Agent remembers your preferences across sessions |
| Self-verification | Uses think tool to check its own work before finishing |
| Wrap-up nudging | Hints to finish when approaching turn limits |
| Truncation recovery | Auto-continues when response hits token limit |
| Feature | Description |
|---|---|
| Developer UI | Dark-themed chat interface with markdown, syntax highlighting, file browser, dev panel |
| User UI | Light-themed (Forest Canopy) user-facing interface with activity indicators, simplified dialogs |
| Terminal CLI | Rich REPL with history, autocomplete, vi mode, session persistence |
| Dev panel | Raw WebSocket frame inspector in the browser |
| LLM tracing | See exact prompts and responses sent to the model |
| Presets | Swappable system prompt personas (coding, office productivity, etc.) |
| Skills | On-demand expert knowledge (API design, Docker, PDF generation, etc.) |
```
┌──────────────┐     ┌──────────────────┐     ┌──────────────┐
│   agent-ui   │     │  agent-user-ui   │     │  agent-cli   │
│ (Developer)  │     │      (User)      │     │  (Terminal)  │
│  port 3500   │     │    port 3501     │     │              │
└──────┬───────┘     └────────┬─────────┘     └──────┬───────┘
       │ WebSocket            │ WebSocket            │ Direct call
       └──────────┬───────────┘                      │
                  └─────────────┬────────────────────┘
                                ▼
                       ┌─────────────────┐
                       │    agent-api    │
                       │    (FastAPI)    │
                       ├─────────────────┤
                       │   Agent Loop    │ ◄── while not done: stream → tools → repeat
                       ├─────────────────┤
                       │  Tool Registry  │ ◄── bash, files, think, plan_mode, compact, ...
                       ├─────────────────┤
                       │   LLM Client    │ ◄── provider-independent adapter boundary
                       └────────┬────────┘
                                ▼
                        ┌──────────────┐
                        │ LLM Provider │ (any supported or compatible backend)
                        └──────────────┘
```
More backend architecture detail lives in agent-api/README.md and agent-api/CLAUDE.md.
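The "while not done: stream → tools → repeat" loop in the diagram can be condensed into a few lines. The message and tool-call shapes below are simplified assumptions for illustration; the real loop in `agent/loop.py` additionally handles streaming output, approvals, compaction, and turn budgets.

```python
# Simplified sketch of the agentic loop; data shapes are assumptions.
def agent_loop(llm, tools, messages, max_turns=50):
    for _ in range(max_turns):
        reply = llm.stream(messages)               # assistant turn
        messages.append({"role": "assistant", "content": reply["content"]})
        if not reply.get("tool_calls"):            # no tools requested: done
            return reply["content"]
        for call in reply["tool_calls"]:           # run each requested tool
            result = tools[call["name"]](call["args"])
            messages.append({"role": "tool",
                             "name": call["name"],
                             "content": result})   # feed result back to model
    return "Reached max turns."
```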
```
openagent/
├── agent-api/                 # FastAPI backend + agent logic
│   ├── src/agent_service/
│   │   ├── main.py            # App entrypoint
│   │   ├── agent/loop.py      # Core agentic loop (~1200 lines)
│   │   ├── agent/llm.py       # Provider-agnostic LLM abstraction
│   │   ├── agent/tools/       # All tool implementations
│   │   └── api/websocket.py   # WebSocket streaming handler
│   ├── skills/                # SKILL.md expert knowledge files
│   ├── prompts/               # PROMPT.md system prompt presets
│   └── tests/                 # Backend test suite
├── agent-cli/                 # Terminal CLI interface
│   ├── src/agent_cli/
│   │   ├── app.py             # REPL orchestrator
│   │   ├── renderer.py        # Rich terminal output
│   │   └── commands.py        # Slash commands (/plan, /model, etc.)
│   └── tests/                 # CLI test suite
├── agent-ui/                  # Developer web frontend (no build step)
│   ├── index.html
│   ├── css/styles.css
│   └── js/                    # ES modules (app, renderer, websocket, etc.)
├── agent-user-ui/             # User-facing web frontend (no build step)
│   ├── index.html
│   ├── css/styles.css         # Forest Canopy light theme
│   └── js/                    # ES modules (app, renderer, websocket, etc.)
├── docs/                      # Translated root READMEs
├── .github/                   # CI, issue templates, PR template
├── HOW_IT_WORKS.md            # Architecture guide for the runtime stack
├── CONTRIBUTING.md            # Contribution guidelines
├── CODE_OF_CONDUCT.md         # Community expectations
├── SECURITY.md                # Vulnerability disclosure policy
├── LICENSE                    # Business Source License 1.1
├── .env.example               # Environment variable reference
└── REMOTE-CONTROL.md          # Notes for remote-control usage
```
See docs/REPOSITORY.md for a path-by-path map of the monorepo and maintainer notes about the preserved pre-monorepo histories.
```shell
# Backend
cd agent-api && .venv/bin/python -m pytest tests/ -v

# CLI
cd agent-cli && .venv/bin/python -m pytest tests/ -v

# Developer UI
cd agent-ui && npm test

# User UI
cd agent-user-ui && npm test

# Lint + type check
cd agent-api && .venv/bin/ruff check src/ tests/
cd agent-cli && .venv/bin/ruff check src/ tests/
```

Set environment variables in `agent-api/.env`:
| Variable | Default | Description |
|---|---|---|
| `LLM_PROVIDER` | `anthropic` | LLM backend to use (`anthropic` or `openai`) |
| `ANTHROPIC_API_KEY` | (required) | Your Anthropic API key |
| `ANTHROPIC_BASE_URL` | unset | Optional API endpoint override |
| `OPENAI_API_KEY` | (required for OpenAI) | Your OpenAI API key |
| `OPENAI_BASE_URL` | unset | Optional OpenAI-compatible endpoint |
| `MODEL` | `claude-sonnet-4-5-20250929` | Default model |
| `WORKSPACE_DIR` | `workspace` | Where agent files are created |
| `ENABLE_MEMORY` | `true` | Cross-session memory |
| `MAX_TURNS` | `50` | Max agent loop iterations |
| `MAX_TOKEN_BUDGET` | `200000` | Token spending limit per session |
| `OPENAGENT_TIMEOUT` | `1800` | CLI agent loop hard timeout (seconds) |
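Purely as an illustration, reading these settings with their documented defaults might look like the sketch below. `load_settings` is a hypothetical helper; the actual settings loader in `agent-api` may be structured differently.

```python
import os

def load_settings(env=os.environ):
    """Illustrative settings reader using the defaults documented above."""
    return {
        "provider": env.get("LLM_PROVIDER", "anthropic"),
        "model": env.get("MODEL", "claude-sonnet-4-5-20250929"),
        "workspace_dir": env.get("WORKSPACE_DIR", "workspace"),
        "enable_memory": env.get("ENABLE_MEMORY", "true").lower() == "true",
        "max_turns": int(env.get("MAX_TURNS", "50")),
        "max_token_budget": int(env.get("MAX_TOKEN_BUDGET", "200000")),
    }
```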
- OpenAgent currently has no application-level authentication or user isolation.
- Conversation history is shared at the deployment level. Any client that can reach the API can list, read, and delete conversations.
- Workspace files are created under `WORKSPACE_DIR` and are ephemeral by design. This is intentional: the workspace is framed as a temporary execution sandbox for each session, not durable user storage.
- After a WebSocket session disconnects, the backend schedules workspace cleanup after `WORKSPACE_CLEANUP_DELAY` seconds.
- Conversation history is stored separately in the SQLite database (`agent.db` by default) and is not deleted by workspace cleanup.
```shell
# OpenAI
LLM_PROVIDER=openai OPENAI_API_KEY=your-key MODEL=gpt-4.1

# Anthropic-compatible endpoint
ANTHROPIC_BASE_URL=https://api.deepseek.com/anthropic MODEL=deepseek-chat

# Any other compatible backend
# Implement or extend the adapter layer in agent-api/src/agent_service/agent/llm.py
```

| Document | Audience | Description |
|---|---|---|
| README.md | Everyone | Product overview, setup, testing, and configuration |
| HOW_IT_WORKS.md | Contributors | Architecture walkthrough of the runtime stack |
| docs/REPOSITORY.md | Contributors | Monorepo layout and maintainer notes |
| CLAUDE.md | AI agents / developers | Comprehensive technical reference |
| CONTRIBUTING.md | Contributors | Branch naming, commit format, PR checklist |
| CODE_OF_CONDUCT.md | Community | Expected behavior and enforcement process |
| SECURITY.md | Security researchers | Private vulnerability disclosure guidance |
| REMOTE-CONTROL.md | Operators | Remote-control setup and operational notes |
| .env.example | Operators | All environment variables with descriptions |
| docs/README_zh.md | Chinese readers | Chinese translation of the root README |
| docs/README_ja.md | Japanese readers | Japanese translation of the root README |
| docs/README_ko.md | Korean readers | Korean translation of the root README |
| docs/README_es.md | Spanish readers | Spanish translation of the root README |
| docs/README_fr.md | French readers | French translation of the root README |
Contributions are welcome! See CONTRIBUTING.md for full guidelines. Some good starting points:
- Add a new tool — copy `agent-api/src/agent_service/agent/tools/compact_tool.py`, modify, register in `loop.py`
- Add a new skill — create `agent-api/skills/your-skill/SKILL.md`
- Add a new preset — create `agent-api/prompts/your-preset/PROMPT.md`
- Add a new LLM backend — implement the `LLMClient` protocol in `agent/llm.py`
- Improve the Developer UI — edit files in `agent-ui/` directly (no build step)
- Improve the User UI — edit files in `agent-user-ui/` directly (no build step)
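For the "add a new LLM backend" path, a protocol-style interface might look like the sketch below. The method name and message shape are assumptions for illustration; consult `agent/llm.py` for the real `LLMClient` definition before implementing an adapter.

```python
from typing import Iterable, Protocol

class LLMClient(Protocol):
    """Hypothetical structural interface for a provider adapter."""
    def stream(self, messages: list[dict]) -> Iterable[str]:
        """Yield response chunks for the given conversation."""
        ...

class EchoClient:
    """Toy backend that replays the last user message; satisfies the
    protocol structurally, useful for exercising the loop offline."""
    def stream(self, messages: list[dict]) -> Iterable[str]:
        yield messages[-1]["content"]
```

Because `typing.Protocol` uses structural subtyping, `EchoClient` needs no explicit inheritance to count as an `LLMClient`.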
Please run the test suites before submitting (CI runs these automatically on PRs):
```shell
cd agent-api && .venv/bin/python -m pytest tests/ -v
cd agent-cli && .venv/bin/python -m pytest tests/ -v
cd agent-ui && npm test
cd agent-user-ui && npm test
```

You can also run all checks at once with pre-commit:

```shell
pre-commit run --all-files
```

Business Source License 1.1 (BSL 1.1)
See LICENSE for the Additional Use Grant, Change Date, and Change License.
OpenAgent is source-available under BSL 1.1, not OSI open source.
For commercial licensing inquiries, contact Walden AI Lab through the repository owner contact channel on the repository hosting platform.
For security issues, use SECURITY.md. For community expectations, use CODE_OF_CONDUCT.md.
Built as a learn-by-doing reference implementation for beginners, using production-style agent patterns with a provider-independent LLM adapter layer.



