
Octopus Logo

Octopus

A Simple, Beautiful, and Elegant LLM API Aggregation & Load Balancing Service for Individuals

English | 简体中文

✨ Features

  • 🔀 Multi-Channel Aggregation - Connect multiple LLM provider channels with unified management
  • ⚖️ Load Balancing - Automatic request distribution for stable and efficient service
  • 🔄 Protocol Conversion - Seamless conversion between OpenAI Chat / OpenAI Responses / Anthropic API formats
  • 💰 Price Sync - Automatic model pricing updates
  • 🔃 Model Sync - Automatic synchronization of available model lists with channels
  • 📊 Analytics - Comprehensive request statistics, token consumption, and cost tracking
  • 🎨 Elegant UI - Clean and beautiful web management panel

🚀 Quick Start

🐳 Docker

Run directly:

docker run -d --name octopus -v /path/to/data:/app/data -p 8080:8080 bestrui/octopus

Or use docker compose:

wget https://raw.githubusercontent.com/bestruirui/octopus/refs/heads/dev/docker-compose.yml
docker compose up -d

📦 Download from Release

Download the binary for your platform from Releases, then run:

./octopus start

🛠️ Build from Source

Requirements:

  • Go 1.24.4
  • Node.js 18+
  • npm or pnpm

# Clone the repository
git clone https://github.com/bestruirui/octopus.git
cd octopus

# 1. Build frontend
cd web

# Using npm
npm install
npm run build

# Or using pnpm
pnpm install
pnpm run build

cd ..

# 2. Move frontend assets to static directory
mv web/out static/

# 3. Start the backend service
go run . start

💡 Tip: The frontend build artifacts are embedded into the Go binary, so you must build the frontend before starting the backend.

🔐 Default Credentials

After first launch, visit http://localhost:8080 and log in to the management panel with:

  • Username: admin
  • Password: admin

⚠️ Security Notice: Please change the default password immediately after first login.

🌐 Environment Variables

Customize configuration via environment variables:

| Variable | Description | Default |
| --- | --- | --- |
| `OCTOPUS_SERVER_PORT` | Server port | `8080` |
| `OCTOPUS_SERVER_HOST` | Listen address | `0.0.0.0` |
| `OCTOPUS_DATABASE_PATH` | Database path | `data/data.db` |
| `OCTOPUS_LOGGING_LEVEL` | Log level | `info` |

📸 Screenshots

🖥️ Desktop

Screenshots of the Dashboard, Channel Management, Group Management, Price Management, Logs, and Settings pages.

📱 Mobile

Screenshots of the mobile Home, Channel, Group, Price, Logs, and Settings pages.

📖 Documentation

📡 Channel Management

Channels are the basic configuration units for connecting to LLM providers.

Base URL Guide:

The program automatically appends API paths based on channel type. You only need to provide the base URL:

| Channel Type | Auto-appended Path | Base URL | Full Request URL Example |
| --- | --- | --- | --- |
| OpenAI Chat | `/chat/completions` | `https://api.openai.com/v1` | `https://api.openai.com/v1/chat/completions` |
| OpenAI Responses | `/responses` | `https://api.openai.com/v1` | `https://api.openai.com/v1/responses` |
| Anthropic | `/messages` | `https://api.anthropic.com/v1` | `https://api.anthropic.com/v1/messages` |

💡 Tip: No need to include specific API endpoint paths in the Base URL - the program handles this automatically.
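The path-appending rule above can be sketched in Go as a simple lookup (the channel-type keys here are hypothetical names chosen for illustration, not Octopus's internal identifiers):

```go
package main

import "fmt"

// suffixes maps each channel type to the API path appended to the
// configured base URL (keys are illustrative names).
var suffixes = map[string]string{
	"openai-chat":      "/chat/completions",
	"openai-responses": "/responses",
	"anthropic":        "/messages",
}

// requestURL joins a channel's base URL with the path for its type.
func requestURL(baseURL, channelType string) string {
	return baseURL + suffixes[channelType]
}

func main() {
	fmt.Println(requestURL("https://api.openai.com/v1", "openai-chat"))
	fmt.Println(requestURL("https://api.anthropic.com/v1", "anthropic"))
}
```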


📁 Group Management

Groups aggregate multiple channels into a unified external model name.

Core Concepts:

  • Group name is the model name exposed by the program
  • When calling the API, set the model parameter to the group name

Load Balancing Modes:

| Mode | Description |
| --- | --- |
| 🔄 Round Robin | Cycles through channels sequentially for each request |
| 🎲 Random | Randomly selects an available channel for each request |
| 🛡️ Failover | Prioritizes high-priority channels, switches to lower priority only on failure |
| ⚖️ Weighted | Distributes requests based on configured channel weights |

💡 Example: Create a group named gpt-4o, add multiple providers' GPT-4o channels to it, then access all channels via a unified model: gpt-4o.


💰 Price Management

Manage model pricing information in the system.

Data Sources:

  • The system periodically syncs model pricing data from models.dev
  • When a newly created channel includes models that models.dev does not cover, the system automatically creates pricing entries for them on this page. The page therefore lists models whose prices could not be fetched upstream, so you can set their prices manually
  • You can also manually create entries for models that do exist in models.dev to apply custom pricing

Price Priority:

| Priority | Source | Description |
| --- | --- | --- |
| 🥇 High | This page | Prices set by the user in the price management page |
| 🥈 Low | models.dev | Auto-synced default prices |

💡 Tip: To override a model's default price, simply set a custom price for it in the price management page.
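The priority table reduces to a simple "user price wins" lookup, sketched here in Go (an illustration of the rule, not Octopus's actual code):

```go
package main

import "fmt"

// effectivePrice returns the user-set price when one exists,
// otherwise the price synced from models.dev; nil means "not set".
func effectivePrice(userPrice, syncedPrice *float64) float64 {
	if userPrice != nil {
		return *userPrice
	}
	if syncedPrice != nil {
		return *syncedPrice
	}
	return 0
}

func main() {
	synced := 2.5
	custom := 1.0
	fmt.Println(effectivePrice(nil, &synced))     // falls back to models.dev
	fmt.Println(effectivePrice(&custom, &synced)) // user override wins
}
```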


⚙️ Settings

Global system configuration.

Statistics Save Interval (minutes):

Since the program handles numerous statistics, writing to the database on every request would impact read/write performance. The program uses this strategy:

  • Statistics are first stored in memory
  • Periodically batch-written to the database at the configured interval

⚠️ Important: When stopping the program, use a graceful shutdown method (such as Ctrl+C or sending a SIGTERM signal) so that in-memory statistics are correctly written to the database. Do NOT use kill -9 or other forced termination, as this may lose statistics data.
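The buffer-then-flush strategy can be sketched as follows (a simplified illustration of the idea, with the database write replaced by a returned total):

```go
package main

import (
	"fmt"
	"sync"
)

// statsBuffer accumulates request counters in memory; flush drains
// them for a batch write, mimicking the periodic save described above.
type statsBuffer struct {
	mu     sync.Mutex
	tokens int64
}

// record adds a request's token count to the in-memory buffer.
func (s *statsBuffer) record(n int64) {
	s.mu.Lock()
	s.tokens += n
	s.mu.Unlock()
}

// flush returns the accumulated total and resets the buffer. It runs
// at the configured interval, and once more on graceful shutdown so
// no in-memory statistics are lost.
func (s *statsBuffer) flush() int64 {
	s.mu.Lock()
	defer s.mu.Unlock()
	n := s.tokens
	s.tokens = 0
	return n
}

func main() {
	var buf statsBuffer
	buf.record(120)
	buf.record(80)
	fmt.Println("flushed:", buf.flush()) // periodic or shutdown flush
}
```

A forced kill skips the final `flush`, which is exactly how the counts still held in memory are lost.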


🤝 Acknowledgments

  • 🙏 looplj/axonhub - The LLM API adaptation module in this project is directly derived from this repository
  • 📊 sst/models.dev - AI model database providing model pricing data
