A fully autonomous, agent-swarm-based IT company system powered by multiple LLM providers (Ollama, OpenAI, Claude, OpenRouter, DeepSeek, Qwen). The system simulates a real IT company with specialized AI agents collaborating to deliver IT services end-to-end, with file generation, MCP tools, and Human-in-the-Loop (HITL) escalation.
```powershell
# Set environment and start
$env:OLLAMA_URL="http://localhost:11434"  # Windows
python main.py --mode web
# Access dashboard at http://localhost:5000
# Configure LLM and agents in Settings
```

| Document | Description |
|---|---|
| Getting Started | Quick start and usage examples |
| Features | Key features and capabilities |
| Architecture | System design and agent architecture |
| Configuration | Complete configuration reference |
| Agent Tools | MCP, web search, and other agent tools (new) |
| Dashboard & API | Web dashboard and API documentation |
| Document | Description |
|---|---|
| Installation Guide | Detailed setup instructions |
| Agents Guide | Agent roles and capabilities |
| Task Management | Task control, progress, and pause/resume |
| HITL Guide | Human-in-the-Loop escalation system |
| API Reference | REST API reference |
| Troubleshooting | Common issues and solutions |
AutoIT Swarm simulates a complete IT company with 12 specialized AI agents that collaborate to deliver IT services:
- Leadership: CEO, Project Manager
- Development: Senior/Junior Developers, Solutions Architect
- Quality: QA Testing, Security Expert
- Operations: DevOps Engineer, Database Administrator
- Support: Support Specialist, Billing Agent
- Marketing: Marketing Agent (NEW)
Plus dynamic sub-agents that can be created on-the-fly for specialized tasks.
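The roster above can be sketched as a simple role-to-department mapping. This is purely illustrative: the names `AGENT_DEPARTMENTS` and `agents_in` are hypothetical and do not come from the actual codebase.

```python
# Illustrative mapping of the 12 core agents to their departments.
# Agent names follow the roster above; the real codebase may spell them differently.
AGENT_DEPARTMENTS = {
    "CEO": "Leadership",
    "ProjectManager": "Leadership",
    "SeniorDeveloper": "Development",
    "JuniorDeveloper": "Development",
    "SolutionsArchitect": "Development",
    "QATesting": "Quality",
    "SecurityExpert": "Quality",
    "DevOpsEngineer": "Operations",
    "DatabaseAdministrator": "Operations",
    "SupportSpecialist": "Support",
    "BillingAgent": "Support",
    "MarketingAgent": "Marketing",
}

def agents_in(department: str) -> list[str]:
    """Return the agents belonging to one department."""
    return [agent for agent, dept in AGENT_DEPARTMENTS.items() if dept == department]
```

Dynamic sub-agents would extend this registry at runtime rather than being listed statically.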
- Ollama - Local LLM (free, private)
- OpenAI - GPT-4, GPT-3.5 Turbo
- Claude - Anthropic Claude 3 family
- OpenRouter - Multi-provider gateway
- DeepSeek - DeepSeek AI models
- Qwen - Alibaba Qwen models
- Web Search - Research via Tavily/Serper
- Web Fetch - Get content from URLs
- Document Search - Internal knowledge base
- Code Execution - Safe Python execution
- File Read/Write - Access allowed directories
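Agents reach these tools through the `POST /api/mcp/execute` endpoint (see the API examples in this README), which takes a `{"tool": ..., "arguments": ...}` body. A minimal sketch of building that payload; the tool identifiers in `KNOWN_TOOLS` are assumptions based on the list above, not confirmed names from the codebase:

```python
import json

# Hypothetical tool identifiers, derived from the tool list above.
KNOWN_TOOLS = {"web_search", "web_fetch", "document_search",
               "code_execution", "file_read", "file_write"}

def build_mcp_payload(tool: str, arguments: dict) -> str:
    """Serialize an MCP tool-call body; rejects tool names not in the registry."""
    if tool not in KNOWN_TOOLS:
        raise ValueError(f"unknown MCP tool: {tool}")
    return json.dumps({"tool": tool, "arguments": arguments})
```

For example, `build_mcp_payload("web_search", {"query": "Python best practices"})` produces the same JSON body as the curl example later in this document.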
- Code: HTML, CSS, JavaScript, Python
- Data: JSON, CSV, Excel (XLSX)
- Documents: Markdown, PDF, Word (DOCX)
- Images: PNG, JPEG
- Auto-saved to `./output/{project_id}/`
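The save-location convention above can be expressed as a small helper. `output_path` is a hypothetical name for illustration; only the `./output/{project_id}/` layout comes from this section:

```python
from pathlib import Path

def output_path(project_id: str, filename: str, root: str = "./output") -> Path:
    """Build the path where a deliverable is auto-saved: ./output/{project_id}/{filename}."""
    return Path(root) / project_id / filename
```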
- Projects Page - Card/list view, deliverables download
- Settings Page - 6 tabs (LLM, HITL, Output, Webhooks, Agents, System)
- Agent Editor - Configure capabilities per agent
- Real-time Updates - Socket.IO integration
- Pause/Resume controls
- Progress percentage tracking (0-100%)
- Task timeout (auto-fail old tasks)
- Infinite loop prevention
- Retry logic with max retries
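The retry-with-max-retries behavior listed above can be sketched as follows. This is illustrative only: the real task board also applies timeouts and loop-prevention checks not modeled here, and `run_with_retries` is a hypothetical name.

```python
def run_with_retries(task, max_retries: int = 3):
    """Call `task` until it succeeds or the retry budget is exhausted."""
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return task()
        except Exception as exc:  # in practice, catch specific task errors
            last_error = exc
    # Exhausting the budget corresponds to the task being marked failed.
    raise RuntimeError(f"task failed after {max_retries} retries") from last_error
```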
- 8 escalation triggers
- Web-based review panel
- Decision tracking and audit
- Email/Webhook notifications
```shell
# Clone the repository
git clone <repository-url>
cd KimiAS

# Install dependencies
pip install -r requirements.txt

# Configure your LLM provider
# Option 1: Local Ollama (no API key needed)
$env:OLLAMA_URL="http://localhost:11434"

# Option 2: Cloud providers (set API key in Settings)
# OpenAI, Claude, OpenRouter, DeepSeek, Qwen
```

See the Installation Guide for detailed instructions.
```shell
# Web dashboard mode (recommended)
python main.py --mode web
# Access dashboard at http://localhost:5000
# Configure LLM provider in Settings → LLM Providers

# Single request mode
python main.py --request "Build me a REST API for user authentication"

# Check system status
python main.py --status
```

Access the dashboard at http://localhost:5000 for:
| Page | Description |
|---|---|
| Dashboard | Overview with stats, health, performance |
| Tasks | Task board with pause/resume controls |
| Agents | Agent status and activity |
| Projects | Projects with card/list view, deliverables |
| HITL | Human review panel for escalations |
| Settings | LLM, Output, Webhooks, Agents configuration |
- 4 Stats Cards: Total, In Progress, Completed, Pending HITL
- Task Status Chart: Visual breakdown by status
- Recent Tasks: Last 5 tasks with assignee
- Agent Activity: Active agents with task counts
- System Health: LLM, Task Board, Message Queue, Memory
- Active Projects: Project progress bars
- Performance: Completion rate, escalation rate, avg confidence
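The performance figures are simple ratios over the task history. A sketch with hypothetical function and field names; the dashboard's exact formulas are not shown in this README, so the choice of denominators here is an assumption:

```python
def performance(total: int, completed: int, escalated: int,
                confidences: list[float]) -> dict:
    """Compute the three dashboard performance figures (illustrative formulas)."""
    if total == 0:
        return {"completion_rate": 0.0, "escalation_rate": 0.0, "avg_confidence": 0.0}
    avg_conf = sum(confidences) / len(confidences) if confidences else 0.0
    return {
        "completion_rate": completed / total,   # fraction of all tasks completed
        "escalation_rate": escalated / total,   # fraction escalated to HITL
        "avg_confidence": avg_conf,             # mean agent confidence score
    }
```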
```shell
# Submit a request
curl -X POST http://localhost:5000/api/submit-request \
  -H "Content-Type: application/json" \
  -d '{"request": "Build a user authentication system"}'
```

```shell
# Create a task via incoming webhook
curl -X POST http://localhost:5000/api/webhook/incoming \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{"description": "Build a login page", "agent": "SeniorDeveloper"}'
```

```shell
# Execute an MCP tool
curl -X POST http://localhost:5000/api/mcp/execute \
  -H "Content-Type: application/json" \
  -d '{"tool": "web_search", "arguments": {"query": "Python best practices"}}'
```

```shell
# Update an agent's capabilities
curl -X PUT http://localhost:5000/api/agents/config/SeniorDeveloper/capabilities \
  -H "Content-Type: application/json" \
  -d '{"internet_access": true, "mcp": true, "document_access": false}'
```

Configure via the Settings page in the dashboard or `config.yaml`:
```yaml
# LLM Provider
llm:
  provider: "ollama"   # or openai, claude, openrouter, deepseek, qwen
  url: "http://localhost:11434"
  model: "lfm2.5-thinking:latest"
  temperature: 0.7
  max_tokens: 4096

# Web Search (for agent internet access)
web_search:
  enabled: true
  provider: "tavily"
  api_key: "your-api-key"

# Output Destinations
output:
  file:
    enabled: true
    directory: "./output"
    format: "json"

# Webhooks
webhooks:
  outgoing:
    enabled: true
    url: "https://your-system.com/webhook"
  incoming:
    enabled: true
    auth_token: "your-secret-token"

# MCP Servers
mcp:
  enabled: true
  servers:
    filesystem:
      url: ""
      api_key: ""
```

See the Configuration Guide for all options.
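Incoming webhooks are authenticated against the configured `auth_token` via an `Authorization: Bearer <token>` header. One way to sketch that check; the server's actual implementation is not shown in this README, and `is_authorized` is a hypothetical name:

```python
import hmac

def is_authorized(authorization_header, auth_token: str) -> bool:
    """Validate an `Authorization: Bearer <token>` header against the configured auth_token."""
    if not authorization_header or not authorization_header.startswith("Bearer "):
        return False
    presented = authorization_header[len("Bearer "):]
    # Constant-time comparison avoids leaking token contents via timing.
    return hmac.compare_digest(presented, auth_token)
```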
```shell
# Submit via dashboard:
# "Create a responsive login page with HTML and CSS"

# Files generated:
# ./output/{project_id}/login_page.html
# ./output/{project_id}/styles.css

# Download from: Projects → Your Project → Deliverables
```

- Settings → Agents → Edit SeniorDeveloper
- Enable "Internet Access"
- Submit: "Research best Python web frameworks 2026"
- The agent searches the web and generates a report
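Requests submitted via the dashboard in the examples above can also be sent programmatically to the `POST /api/submit-request` endpoint using only the standard library. A sketch; `build_submit_request` is a hypothetical helper name, and nothing is assumed about the response schema:

```python
import json
import urllib.request

def build_submit_request(text: str, base_url: str = "http://localhost:5000") -> urllib.request.Request:
    """Build the POST request for /api/submit-request; send it with urllib.request.urlopen."""
    return urllib.request.Request(
        f"{base_url}/api/submit-request",
        data=json.dumps({"request": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

With the server from the quick start running, `urllib.request.urlopen(build_submit_request("Build a login page"))` submits the request.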
```shell
# Configure webhook in Settings
# External system creates a task:
curl -X POST http://localhost:5000/api/webhook/incoming \
  -H "Authorization: Bearer token" \
  -d '{"description": "Fix login bug", "agent": "SeniorDeveloper"}'
# Receive completion events at your endpoint
```

- Documentation Index
- Getting Started
- Agent Tools Guide - MCP and tools
- Configuration Guide
- Dashboard Guide
- Ollama Documentation
- OpenAI Documentation
- Tavily Web Search
Need Help? Check the Troubleshooting Guide or open an issue.
Version: 2.0 | Last Updated: March 2026