LocalClaw — Local-First AI Assistant


Your AI assistant, running entirely on your machine.
LCARS-inspired interface. Local models. Zero cloud dependency.

MIT License

LocalClaw is a local-first personal AI assistant CLI built on OpenClaw. It gives you the full power of the OpenClaw agent runtime — gateway, tools, sessions, skills — but defaults to local model providers like Ollama, LM Studio, and vLLM. No cloud keys required to get started.

The gateway dashboard features an LCARS-inspired interface (Library Computer Access/Retrieval System) — the iconic Star Trek computer display design — with a matching terminal UI color scheme.

Coexistence: LocalClaw installs as a separate localclaw binary with its own state directory (~/.localclaw/) and config file (~/.localclaw/openclaw.local.json). It runs side by side with a standard openclaw installation without any interference. Different state directory, different config, different gateway port, same machine.

Why LocalClaw?

  • Zero cloud dependency — point it at Ollama, LM Studio, or vLLM and go.
  • Isolated state — ~/.localclaw/ keeps sessions, locks, and agent data fully separate from any existing OpenClaw installation.
  • First-run onboarding — detects local model servers and walks you through picking a default model.
  • Full OpenClaw feature set — gateway, TUI, agent, browser control, skills, sessions, tools — all via localclaw <command>.
  • Separate gateway port — defaults to port 18790 so it doesn't conflict with an OpenClaw gateway on 18789.

What's New

LocalClaw has gained a suite of intelligent features that transform it from a basic local chat interface into a proactive, self-managing AI assistant.

Startup Health Check

The gateway now validates your entire model stack on every boot:

  • Confirms your model server (Ollama, LM Studio, vLLM) is reachable
  • Verifies your configured model is actually available
  • Checks that your model's context window meets minimum requirements
  • Logs clear warnings if anything is misconfigured — no more silent failures
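The availability and context-window checks can be sketched against the /v1/models endpoint that all three supported servers expose. The function below is illustrative only (names, warning strings, and the 8K minimum are assumptions, not LocalClaw's actual implementation):

```typescript
// Illustrative sketch: validate a parsed /v1/models response against the
// configured model and a minimum context window. Names are hypothetical.
interface ModelsResponse {
  data: { id: string }[];
}

function validateModelStack(
  models: ModelsResponse,
  configuredModel: string,
  contextWindow: number,
  minContext = 8192,
): string[] {
  const warnings: string[] = [];
  if (!models.data.some((m) => m.id === configuredModel)) {
    warnings.push(`model "${configuredModel}" not found on server`);
  }
  if (contextWindow < minContext) {
    warnings.push(`context window ${contextWindow} below minimum ${minContext}`);
  }
  return warnings;
}
```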

Smart Model Routing

Simple queries (greetings, yes/no questions, quick lookups) are automatically routed to a faster, smaller model while complex requests (code generation, analysis, multi-step reasoning) go to your primary model. This saves time and compute without sacrificing quality. Configured via agents.defaults.routing in your config.
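As a rough illustration of the routing idea (the real heuristics live behind agents.defaults.routing and may differ — this toy classifier is not LocalClaw's code):

```typescript
// Toy classifier sketching the simple-vs-complex split described above.
function pickModelTier(query: string): "small" | "primary" {
  const q = query.trim();
  const isGreeting = /^(hi|hello|hey|thanks|yes|no|ok)[.!?]?$/i.test(q);
  const isQuickLookup = q.length < 80 && q.endsWith("?");
  const looksComplex =
    q.length > 400 || /\b(refactor|implement|analyze|debug)\b/i.test(q);
  if (looksComplex) return "primary";
  return isGreeting || isQuickLookup ? "small" : "primary";
}
```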

Session Auto-Save

Every agent turn is automatically logged to memory/sessions/ as a timestamped markdown file. Session logs include user messages, assistant responses, model info, and token counts. No more lost conversations.
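A turn entry might be serialized like the following sketch — the filename scheme and markdown layout here are illustrative assumptions; check an actual file under memory/sessions/ for the real format:

```typescript
// Hypothetical sketch of serializing one agent turn to a timestamped
// markdown log entry. Not LocalClaw's actual format.
interface Turn {
  user: string;
  assistant: string;
  model: string;
  tokens: number;
  at: Date;
}

function sessionLogEntry(t: Turn): { filename: string; markdown: string } {
  // Derive a filesystem-safe timestamp from the ISO time
  const stamp = t.at.toISOString().replace(/[:.]/g, "-");
  return {
    filename: `memory/sessions/${stamp}.md`,
    markdown: [
      `## ${t.at.toISOString()}`,
      `**Model:** ${t.model} · **Tokens:** ${t.tokens}`,
      ``,
      `**User:** ${t.user}`,
      ``,
      `**Assistant:** ${t.assistant}`,
    ].join("\n"),
  };
}
```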

Dashboard Session Browser

Browse and search past session transcripts directly in your browser:

  • /sessions — full session browser UI with dark LCARS-inspired theme
  • /api/sessions — REST API for listing, searching, and retrieving session logs
  • Full-text search across all sessions

Proactive Intelligence

On gateway startup, the proactive-briefing hook reads your recent session logs (last 24h) and writes a context summary to memory/briefing-context.md. This gives the agent awareness of recent conversations for context-aware morning briefings and follow-up reminders based on what you discussed yesterday.
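The 24-hour selection amounts to a modification-time filter over the session log files; a minimal sketch (the hook's real implementation may differ):

```typescript
// Illustrative only: pick session log paths modified within the window,
// newest first.
function recentSessions(
  files: { path: string; mtimeMs: number }[],
  nowMs: number,
  windowMs = 24 * 60 * 60 * 1000,
): string[] {
  return files
    .filter((f) => nowMs - f.mtimeMs <= windowMs)
    .sort((a, b) => b.mtimeMs - a.mtimeMs)
    .map((f) => f.path);
}
```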

Workflow Automation Engine

Define event-driven, multi-step pipelines as simple YAML files in workspace/workflows/:

name: startup-log
trigger:
  event: gateway:startup
steps:
  - action: write-file
    path: memory/startup-log.md
    content: "Gateway started"
    append: true
  - action: notify
    message: "System ready"

Supports three step types (agent-turn, notify, write-file) and two trigger modes (event for hook-driven, schedule for cron-based).
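A schedule-triggered pipeline might look like the sketch below. The cron field name, its syntax, and the agent-turn step's message field are assumptions modeled on the event-triggered example above — treat this as an illustration, not a verified schema:

```yaml
name: morning-briefing
trigger:
  schedule: "0 8 * * *"   # assumed cron syntax: every day at 08:00
steps:
  - action: agent-turn
    message: "Summarize yesterday's sessions into a short briefing"
  - action: notify
    message: "Morning briefing ready"
```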

Deep OS Integration

  • Clipboard — full read/write clipboard access (pbpaste/pbcopy on macOS, xclip/wl-paste on Linux)
  • Focus Mode — suppress heartbeat delivery during deep work sessions, with auto-expiry and buffered alerts
  • Workspace File Watcher — monitors workspace files for changes and fires workspace:file-changed hook events with debouncing

Learning and Personalization

The user-learning hook observes your interactions and builds a preference profile over time:

  • Active hours — when you typically interact
  • Message style — average length, question frequency
  • Tool preferences — which tools/actions you request most
  • Topic frequency — common themes in your conversations

Stored at memory/user-preferences.json and available for other hooks to personalize behavior.
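The profile's shape might look like the following — the field names below are hypothetical, sketched from the bullet list above rather than taken from the actual file:

```json
{
  "activeHours": { "start": 9, "end": 18 },
  "messageStyle": { "avgLength": 142, "questionRatio": 0.4 },
  "toolPreferences": { "bash": 23, "browser": 7 },
  "topicFrequency": { "typescript": 12, "ollama": 5 }
}
```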

Enhanced Multimodal Pipeline

  • Document Indexer — auto-indexes text files from workspace/documents/ for agent context
  • Diagram Pipeline — detects Mermaid code blocks in agent output and renders them to SVG (via mmdc) or saves .mmd source files
  • Voice Pipeline — STT via whisper-cpp, TTS via macOS say, with automatic capability detection
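Detecting Mermaid diagrams in agent output amounts to scanning fenced mermaid code blocks; a minimal sketch (not the pipeline's actual code, which then hands the blocks to mmdc or writes .mmd files):

```typescript
// Minimal sketch: extract the bodies of fenced mermaid code blocks
// from markdown text.
function extractMermaidBlocks(markdown: string): string[] {
  const fence = "`".repeat(3); // triple backtick
  const re = new RegExp(`${fence}mermaid\\n([\\s\\S]*?)${fence}`, "g");
  const blocks: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = re.exec(markdown)) !== null) blocks.push(m[1].trim());
  return blocks;
}
```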

Smart Context Management for Small Models

Local models typically have much smaller context windows (8K-32K tokens) compared to cloud models (128K-200K+). LocalClaw includes a multi-layered context management system designed to deliver a great agentic experience even within these constraints. All of this is automatic — no configuration needed.

What LocalClaw does differently

1. Aggressive context pruning (always-on)

Unlike cloud-optimized setups that only prune when cache TTL expires, LocalClaw prunes every turn:

  • Tool results are soft-trimmed at just 20% context usage (keeping only head/tail summaries)
  • Full tool results are cleared at 40% usage with a placeholder
  • Each tool result is capped at 2K characters (vs 8K for cloud models)
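The head/tail soft-trim can be sketched as follows — only the 2,000-character cap comes from the text above; the elision marker and the even head/tail split are illustrative choices:

```typescript
// Illustrative head/tail soft-trim: keep the start and end of an
// oversized tool result and elide the middle.
function softTrim(result: string, maxChars = 2000): string {
  if (result.length <= maxChars) return result;
  // Reserve ~20 chars for the elision marker, split the rest evenly
  const keep = Math.floor((maxChars - 20) / 2);
  const head = result.slice(0, keep);
  const tail = result.slice(-keep);
  return `${head}\n…[trimmed]…\n${tail}`;
}
```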

2. Proactive memory persistence

The agent is instructed to write progress, decisions, and state to memory/ files in your workspace after every meaningful step — not just before compaction. This means context that would be lost during summarization is safely on disk.

  • memory/state.md — current task state, modified files, decisions
  • memory/progress.md — completed steps and findings
  • memory/plan.md — task decomposition for multi-step work
  • memory/notes.md — learned preferences and project conventions

3. Tighter compaction with early memory flush

When the context window fills up, LocalClaw summarizes old history more aggressively:

  • History is capped at 30% of the context window (vs 50% for cloud)
  • Memory flush triggers every compaction cycle (not just near the threshold)
  • Reserve tokens floor is set to 2K (vs 20K), appropriate for small windows
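Concretely, for a typical 8K-token local window the ratios above work out as follows (a worked sketch of the arithmetic, not LocalClaw source):

```typescript
// Worked example of the compaction budget using the default ratios
// described above (30% history cap, 2K reserve floor).
function compactionBudget(contextWindow: number) {
  const historyBudget = Math.floor(contextWindow * 0.3); // 30% history cap
  const reserveFloor = 2000; // 2K reserve tokens floor
  return { historyBudget, reserveFloor };
}
```

For an 8,192-token window this caps history at roughly 2,457 tokens, leaving the rest for the system prompt, fresh tool results, and the reserve.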

4. Compact system prompts

Bootstrap files (AGENTS.md, SOUL.md, etc.) are capped at 8K characters total, leaving more room for actual conversation and tool results.

5. Task decomposition

The agent automatically breaks complex tasks into discrete steps, persisting plans and intermediate results to disk so it can recover from context compaction without losing track of multi-step work.

Tuning (optional)

The defaults work well out of the box, but you can override any setting in ~/.localclaw/openclaw.local.json:

{
  agents: {
    defaults: {
      // Context pruning
      contextPruning: {
        mode: "always",       // "always" | "cache-ttl" | "off"
        softTrimRatio: 0.2,   // Start trimming at 20% of context
        hardClearRatio: 0.4,  // Clear old results at 40%
        softTrim: { maxChars: 2000 },
      },
      // Compaction
      compaction: {
        maxHistoryShare: 0.3,       // Cap history at 30% of window
        reserveTokensFloor: 2000,
        memoryFlush: {
          compactionInterval: 1,    // Flush memories every compaction
          softThresholdTokens: 2000,
        },
      },
      // System prompt budget
      bootstrapMaxChars: 8000,
    },
  },
}

Supported local model providers

Provider     Default endpoint
Ollama       http://127.0.0.1:11434/v1
LM Studio    http://127.0.0.1:1234/v1
vLLM         http://127.0.0.1:8000/v1

You can also point LocalClaw at any OpenAI-compatible API endpoint via the config or onboarding wizard.
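A config for a custom endpoint might be shaped like the sketch below. The provider keys here are hypothetical placeholders — consult the OpenClaw configuration reference for the actual schema:

```json5
{
  agent: {
    // "custom/my-model" is an illustrative placeholder
    model: "custom/my-model",
  },
  // hypothetical provider block — actual key names may differ
  providers: {
    custom: {
      baseUrl: "http://127.0.0.1:9000/v1",
    },
  },
}
```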

Prerequisites

  1. Node 22+ — check with node -v
  2. pnpm — install with npm install -g pnpm if you don't have it
  3. A local model server — Ollama is the easiest to get started with:
# Install Ollama (macOS)
brew install ollama

# Start the Ollama server
ollama serve

# Pull a model (in a separate terminal)
ollama pull qwen3:8b

Any model works. Good starting points: qwen3:8b, llama3.1, gemma3:12b, glm-4.7-flash. Larger models give better results but need more RAM.

Install

git clone https://github.com/sunkencity999/localclaw.git
cd localclaw
pnpm install
pnpm build

Optionally install globally so localclaw is available everywhere:

npm install -g .

Quick start

Follow these steps in order. If you installed globally, replace pnpm localclaw with localclaw.

Step 1 — First-run setup

pnpm localclaw

On first run, LocalClaw detects your running model server, lists available models, and walks you through picking a default. This creates your config at ~/.localclaw/openclaw.local.json.

Important: Make sure your model server (e.g. Ollama) is running before this step so LocalClaw can discover your models automatically.

Step 2 — Start the gateway

The gateway is the background service that manages agent sessions, tools, and events:

pnpm localclaw gateway

Leave this running in its own terminal (or add --verbose to see detailed logs).

Step 3 — Chat

Open a new terminal and launch the TUI:

pnpm localclaw tui

Type a message and hit Enter. You're talking to a local AI agent with full tool access.

Other useful commands

# One-shot agent query (no TUI)
pnpm localclaw agent --message "Summarize this project"

# Check gateway and model status
pnpm localclaw status

# Diagnose issues
pnpm localclaw doctor

Configuration

LocalClaw stores its config at ~/.localclaw/openclaw.local.json. Minimal example:

{
  agent: {
    model: "ollama/llama3.1",
  },
  gateway: {
    mode: "local",
    port: 18790,
  },
}

Use localclaw configure to interactively edit settings, or localclaw config set <key> <value> for quick changes.

Full configuration reference (all keys + examples): OpenClaw Configuration

How it works

            Your local models
      (Ollama / LM Studio / vLLM)
                   │
                   ▼
┌───────────────────────────────┐
│        LocalClaw Gateway      │
│         (control plane)       │
│      ws://127.0.0.1:18790     │
└──────────────┬────────────────┘
               │
               ├─ Agent runtime (RPC)
               ├─ CLI (localclaw …)
               ├─ TUI
               ├─ Browser control
               └─ Skills + tools

Coexistence with OpenClaw

LocalClaw is designed to run alongside a standard OpenClaw installation:

                  OpenClaw                     LocalClaw
Binary            openclaw                     localclaw
Config file       ~/.openclaw/openclaw.json    ~/.localclaw/openclaw.local.json
Profile           (default)                    local
Gateway port      18789                        18790
State directory   ~/.openclaw/                 ~/.localclaw/

Both can be installed globally and run simultaneously. They use completely separate state directories, configs, sessions, and gateway instances — no shared locks or files.

All OpenClaw features included

LocalClaw inherits the full OpenClaw platform. Every command and feature works — just use localclaw instead of openclaw:

  • Gateway — WebSocket control plane for sessions, tools, and events
  • Multi-channel inbox — WhatsApp, Telegram, Slack, Discord, Signal, iMessage, and more
  • Browser control — dedicated Chrome/Chromium with CDP control
  • Skills — bundled, managed, and workspace skills
  • Agent sessions — multi-session with agent-to-agent coordination
  • Tools — bash, browser, canvas, cron, nodes, and more

For the full feature reference, see the OpenClaw docs.

Built on OpenClaw

LocalClaw is a fork of OpenClaw, the personal AI assistant built by Peter Steinberger and the community.

Community

See CONTRIBUTING.md for guidelines and how to submit PRs.

Thanks to the OpenClaw clawtributors:

steipete cpojer plum-dawg bohdanpodvirnyi iHildy jaydenfyi joshp123 joaohlisboa mneves75 MatthieuBizien MaudeBot Glucksberg rahthakor vrknetha radek-paclt vignesh07 Tobias Bischoff sebslight czekaj mukhtharcm maxsumrall xadenryan VACInc Mariano Belinky rodrigouroz tyler6204 juanpablodlc conroywhitney hsrvc magimetal zerone0x meaningfool patelhiren NicholasSpisak jonisjongithub abhisekbasu1 jamesgroat claude JustYannicc Hyaxia dantelex SocialNerd42069 daveonkels google-labs-jules[bot] lc0rp mousberg adam91holt hougangdev gumadeiras shakkernerd mteam88 hirefrank joeynyc orlyjamie dbhurley Eng. Juan Combetto TSavo aerolalit julianengel bradleypriest benithors rohannagpal timolins f-trycua benostein elliotsecops christianklotz nachx639 pvoo sreekaransrinath gupsammy cristip73 stefangalescu nachoiacovino Vasanth Rao Naik Sabavat petter-b thewilloftheshadow leszekszpunar scald andranik-sahakyan davidguttman sleontenko denysvitali sircrumpet peschee nonggialiang rafaelreis-r dominicnunez lploc94 ratulsarna sfo2001 lutr0 kiranjd danielz1z AdeboyeDN Alg0rix Takhoffman papago2355 clawdinator[bot] emanuelst evanotero KristijanJovanovski jlowin rdev rhuanssauro joshrad-dev obviyus osolmaz adityashaw2 CashWilliams sheeek ryancontent jasonsschin artuskg onutc pauloportella HirokiKobayashi-R ThanhNguyxn kimitaka yuting0624 neooriginal manuelhettich minghinmatthewlam baccula manikv12 myfunc travisirby buddyh connorshea kyleok mcinteerj dependabot[bot] amitbiswal007 John-Rood timkrase uos-status gerardward2007 roshanasingh4 tosh-hamburg azade-c badlogic dlauer JonUleis shivamraut101 bjesuiter cheeeee robbyczgw-cla YuriNachos Josh Phillips pookNast Whoaa512 chriseidhof ngutman ysqander Yurii Chukhlib aj47 kennyklee superman32432432 grp06 Hisleren shatner antons austinm911 blacksmith-sh[bot] damoahdominic dan-dr GHesericsu HeimdallStrategy imfing jalehman jarvis-medmatic kkarimi mahmoudashraf93 pkrmf RandyVentures robhparker Ryan Lisse dougvk erikpr1994 fal3 Ghost jonasjancarik Keith 
the Silly Goose L36 Server Marc mitschabaude-bot mkbehr neist sibbl abhijeet117 chrisrodz Friederike Seiler gabriel-trigo iamadig itsjling Jonathan D. Rhyne (DJ-D) Joshua Mitchell Kit koala73 manmal ogulcancelik pasogott petradonka rubyrunsstuff siddhantjain spiceoogway suminhthanh svkozak wes-davis zats 24601 ameno- bonald bravostation Chris Taylor dguido Django Navarro evalexpr henrino3 humanwritten larlyssa Lukavyi mitsuhiko odysseus0 oswalpalash pcty-nextgen-service-account pi0 rmorse Roopak Nijhara Syhids Ubuntu xiaose Aaron Konyer aaronveklabs andreabadesso Andrii cash-echo-bot Clawd ClawdFx danballance EnzeD erik-agens Evizero fcatuhe itsjaydesu ivancasco ivanrvpereira Jarvis jayhickey jeffersonwarrior jeffersonwarrior jverdi longmaba MarvinCui mjrussell odnxe optimikelabs p6l-richard philipp-spiess Pocket Clawd robaxelsen Sash Catanzarite Suksham-sharma T5-AndyML tewatia thejhinvirtuoso travisp VAC william arzt zknicker 0oAstro abhaymundhara aduk059 aldoeliacim alejandro maza Alex-Alaniz alexanderatallah alexstyl andrewting19 anpoirier araa47 arthyn Asleep123 Ayush Ojha Ayush10 bguidolim bolismauro championswimmer chenyuan99 Chloe-VP Clawdbot Maintainers conhecendoia dasilva333 David-Marsh-Photo Developer Dimitrios Ploutarchos Drake Thomsen dylanneve1 Felix Krause foeken frankekn fredheir ganghyun kim grrowl gtsifrikas HassanFleyah HazAT hclsys hrdwdmrbl hugobarauna iamEvanYT Jamie Openshaw Jane Jarvis Deploy Jefferson Nunn jogi47 kentaro Kevin Lin kira-ariaki kitze Kiwitwitter levifig Lloyd loganaden longjos loukotal louzhixian martinpucik Matt mini mertcicekci0 Miles mrdbstn MSch Mustafa Tag Eldeen mylukin nathanbosse ndraiman nexty5870 Noctivoro ozgur-polat ppamment prathamdby ptn1411 reeltimeapps RLTCmpe Rony Kelner ryancnelson Samrat Jha senoldogann Seredeep sergical shiv19 shiyuanhai siraht snopoke techboss testingabc321 The Admiral thesash Vibe Kanban voidserf Vultr-Clawd Admin Wimmie wolfred wstock YangHuang2280 yazinsai yevhen YiWang24 ymat19 Zach 
Knickerbocker zackerthescar 0xJonHoldsCrypto aaronn Alphonse-arianee atalovesyou Azade carlulsoe ddyo Erik latitudeki5223 Manuel Maly Mourad Boustani odrobnik pcty-nextgen-ios-builder Quentin Randy Torres rhjoh Rolf Fredheim ronak-guliani William Stock
