Feature/agent network#2497
Open
kky wants to merge 8 commits into
…s regulation. Add NetworkPolicyProvider interface. If no NetworkPolicyProvider is registered, default core behavior is unchanged — every hook is `?.()`-chained on the registry, which returns `undefined` when nothing is registered.

Core (main changes)
- src/index.ts: calls a hook so a registered provider, if any, can do one-time setup (e.g. build a proxy image).
- src/container-runner.ts: adds a hook for NetworkPolicyProvider to apply Docker args (add `--network`, set `HTTPS_PROXY`, etc.) after the OneCLI gateway is wired.

Schema
- agent_groups.internet_access_policy: JSON column storing how the agent may access the internet.

A separate PR will add an `/agent-network` skill with a Squid-based implementation of NetworkPolicyProvider, exposing a UI for per-agent internet access. The skill will also offer a UI for inter-agent plumbing (unidirectional, bidirectional, none) — that side uses the existing `agent_destinations` row mechanism and needs no new core changes.
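The optional-chained registry described above can be sketched as follows. This is a hypothetical illustration of the extension point; the actual hook names and signatures in src/index.ts and src/container-runner.ts may differ.

```typescript
// Hypothetical sketch of the NetworkPolicyProvider extension point.
interface NetworkPolicyProvider {
  // One-time setup at startup, e.g. build the proxy image.
  ensure(): void;
  // Adjust the docker-run argument list for one agent container
  // (add `--network`, set HTTPS_PROXY, ...).
  applyDockerArgs(agentSlug: string, args: string[]): string[];
}

// Registry holds at most one provider; nothing is registered by default.
let provider: NetworkPolicyProvider | undefined;

function registerNetworkPolicyProvider(p: NetworkPolicyProvider): void {
  provider = p;
}

// Every call site optional-chains the hook, so with no provider the
// expression evaluates to `undefined` and core behavior is unchanged.
function buildDockerArgs(agentSlug: string, base: string[]): string[] {
  return provider?.applyDockerArgs(agentSlug, base) ?? base;
}
```

With no provider registered, `buildDockerArgs` returns the base args untouched, which is the "default core behavior is unchanged" guarantee.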
The `internet-access-policy-field` migration, shipped in the previous commit, added the column but didn't extend the TypeScript interface. This adds the field so callers can read it without an `as` cast. No behavior change.
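A minimal sketch of what the interface change enables, assuming a row shape like the one below (the real interface in core has more fields and may be named differently):

```typescript
// Hypothetical row shape; illustrates the added column typing only.
interface AgentGroupRow {
  id: number;
  name: string;
  // Added to mirror the internet-access-policy-field migration:
  // nullable JSON text describing how agents may access the internet.
  internet_access_policy: string | null;
}

// Callers can now read the column without an `as` cast:
function parsePolicy(row: AgentGroupRow): unknown {
  return row.internet_access_policy ? JSON.parse(row.internet_access_policy) : null;
}
```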
…clarations

A tiny lookup table from provider name → list of canonical hostnames that agent containers must always be able to reach. Built-in entries ship for `claude` (the Anthropic API); optional provider skills (Ollama, OpenCode, etc.) register their own from their provider files.

Used by NetworkPolicyProvider implementations to ensure agents can always reach their model API regardless of the operator's WAN policy. For the Squid-based provider that follows in a subsequent commit, this is what guarantees a `model-only` agent can still hit the model. Hosts use Squid `dstdomain`-style entries (leading dot covers subdomains).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
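The lookup table might look roughly like this. The shape, function names, and the `.anthropic.com` entry are illustrative, not the commit's actual code:

```typescript
// Provider name → canonical API hostnames, in Squid `dstdomain` syntax
// (a leading dot matches the domain and all subdomains).
const PROVIDER_API_HOSTS: Record<string, string[]> = {
  // Built-in entry for `claude` (the Anthropic API); hostname illustrative.
  claude: [".anthropic.com"],
};

// Optional provider skills (Ollama, OpenCode, ...) register their own
// entries from their provider files.
function registerProviderHosts(provider: string, hosts: string[]): void {
  PROVIDER_API_HOSTS[provider] = hosts;
}

// NetworkPolicyProvider implementations union these into every agent's
// allow-list, so a `model-only` agent can always reach its model API.
function hostsFor(provider: string): string[] {
  return PROVIDER_API_HOSTS[provider] ?? [];
}
```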
Implements the first concrete NetworkPolicyProvider. Brings up a Squid
proxy container as the mandatory egress for per-agent containers (which
sit on a --internal Docker network with no NAT), and generates per-agent
ACLs from `agent_groups.internet_access_policy`.
Three allow patterns supported:
- WAN/hostname (the bucket cases — `full`, `whitelisted`, `model-only`).
Hostnames go through Squid's `dstdomain` ACLs, forwarded to OneCLI
via cache_peer parent for credential injection. WAN allows are
placed AFTER the global `http_access deny !Safe_ports` so hostname
whitelists stay port-gated to 80/443.
- LAN IP literals (whitelist entries that parse as IPv4). Use `dst`
ACLs and bypass the OneCLI parent (`cache_peer_access onecli deny`
+ `never_direct deny` for the matching src+dst pair) so the request
routes direct from Squid to the LAN host instead of trying to tunnel
via OneCLI. LAN allows are placed BEFORE the Safe_ports deny so
non-standard ports (e.g., a controller on 8838) work.
- CDP forwarding (optional `cdpPort` field on the policy). Same
direct-routing pattern as LAN, but for `host.docker.internal:<port>`
— used by agents driving a host-side Chrome over the Chrome
DevTools Protocol. The Squid container will need a socat sidecar
to handle the raw-TCP/WebSocket path for CDP, which lands in a
follow-up commit.
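The ordering rules above can be sketched as a small generator that splits each agent's ACL lines into a before-Safe_ports section (LAN, direct-routed) and an after-Safe_ports section (WAN, via the OneCLI parent). All names (`aclLines`, `onecli`, the ACL naming scheme) are hypothetical; the skill's actual squid.conf template differs in detail:

```typescript
// Illustrative per-agent ACL generation for the layout described above.
interface AgentAllow {
  slug: string;     // agent identifier, used in ACL names
  ip: string;       // allocated /32 source address
  hosts: string[];  // dstdomain entries (the bucket cases)
  lanIps: string[]; // whitelist entries that parse as IPv4
}

function aclLines(a: AgentAllow): { beforeSafePorts: string[]; afterSafePorts: string[] } {
  const before: string[] = [`acl from_${a.slug} src ${a.ip}/32`];
  const after: string[] = [];
  if (a.lanIps.length > 0) {
    // LAN literals: `dst` ACL, bypass the OneCLI parent and route direct,
    // placed BEFORE `http_access deny !Safe_ports` so odd ports work.
    before.push(`acl lan_${a.slug} dst ${a.lanIps.join(" ")}`);
    before.push(`cache_peer_access onecli deny from_${a.slug} lan_${a.slug}`);
    before.push(`never_direct deny from_${a.slug} lan_${a.slug}`);
    before.push(`http_access allow from_${a.slug} lan_${a.slug}`);
  }
  if (a.hosts.length > 0) {
    // Hostname allows: `dstdomain`, forwarded via the OneCLI cache_peer,
    // placed AFTER the Safe_ports deny so they stay gated to 80/443.
    after.push(`acl wan_${a.slug} dstdomain ${a.hosts.join(" ")}`);
    after.push(`http_access allow from_${a.slug} wan_${a.slug}`);
  }
  return { beforeSafePorts: before, afterSafePorts: after };
}
```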
Agent identity is purely source IP — `acl from_<slug> src <ip>/32`.
Per-agent IPs are allocated under `data/squid/ips.json`. Squid forwards
the agent's Proxy-Authorization header unchanged (`login=PASSTHRU`) so
OneCLI sees the original per-agent token end-to-end. DNS for agent
containers points at a dnsmasq sidecar that NXDOMAINs everything,
logging every query — defense-in-depth on top of `--internal`'s lack of
NAT, plus an audit trail for any agent that tries to bypass HTTPS_PROXY.
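For orientation, the source-IP identity and PASSTHRU forwarding described above correspond to squid.conf directives roughly like the following. Port number, peer name, and agent slug are placeholders, not the skill's actual config:

```
# OneCLI as the parent peer; login=PASSTHRU forwards the agent's original
# Proxy-Authorization header unchanged, so OneCLI sees the per-agent token.
cache_peer host.docker.internal parent 3128 0 no-query login=PASSTHRU name=onecli

# Agent identity is purely source IP (one /32 per agent, from ips.json):
acl from_myagent src 172.30.0.10/32
```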
`ensure()` is idempotent: checks network → builds image if missing →
sweeps orphan IP allocations → writes config files → starts (or
`-k reconfigure`s) the container. Log rotation runs hourly, monthly
archive, six-month retention.
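The idempotent `ensure()` sequence can be sketched as below. The `Runtime` interface and its method names are hypothetical stand-ins for docker/file-system calls; the container and network names come from this PR's description:

```typescript
// Hypothetical runtime surface; the real code shells out to docker.
interface Runtime {
  networkExists(name: string): boolean;
  createNetwork(name: string): void;   // created with --internal (no NAT)
  imageExists(tag: string): boolean;
  buildImage(tag: string): void;
  sweepOrphanIps(path: string): void;  // drop allocations for deleted agents
  writeConfigs(): void;                // squid.conf, dnsmasq, socat-forwards.conf
  containerRunning(name: string): boolean;
  reconfigure(name: string): void;     // squid -k reconfigure inside the container
  startContainer(name: string): void;
}

// Each step is a no-op (or an in-place refresh) when its resource already
// exists, so repeated calls converge instead of erroring.
function ensure(rt: Runtime): void {
  if (!rt.networkExists("nanoclaw-egress")) rt.createNetwork("nanoclaw-egress");
  if (!rt.imageExists("nanoclaw-squid")) rt.buildImage("nanoclaw-squid");
  rt.sweepOrphanIps("data/squid/ips.json");
  rt.writeConfigs();
  if (rt.containerRunning("nanoclaw-squid")) rt.reconfigure("nanoclaw-squid");
  else rt.startContainer("nanoclaw-squid");
}
```

Running `ensure()` a second time ends in `squid -k reconfigure` rather than a fresh container start, which is what makes live policy edits cheap.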
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
The Squid container image referenced by the network policy provider.
Alpine + squid + dnsmasq + socat. Three concurrent processes managed
by the entrypoint:
- squid serves the per-agent HTTP/HTTPS proxy + cache_peer forwarding
to OneCLI on the host's bridge network.
- dnsmasq runs as a logging NXDOMAIN black-hole on the egress network
(172.30.0.2:53) — defense-in-depth on top of Docker's --internal
blocking the embedded resolver from forwarding, plus a query log for
auditing agent traffic that tries to bypass HTTPS_PROXY.
- socat instances (one per CDP-using agent) tunnel raw TCP from
172.30.0.2:<agent-cdp-port> to host.docker.internal:<port>.
Source-IP restricted (`range=<agent-ip>/32`) so agent A can't reach
agent B's CDP port. Spawned from `/etc/socat-forwards.conf` (mounted
by the host alongside squid.conf) — empty config = no forwarders.
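A sketch of the forwards file and the resulting socat invocation, with illustrative port and IP values (the real entrypoint may differ in option details):

```
# /etc/socat-forwards.conf — one forward per line:
#   <listen-port> <allowed-src>/32 <upstream-host> <upstream-port>
9223 172.30.0.11/32 host.docker.internal 9222

# For each line the entrypoint spawns roughly:
#   socat TCP-LISTEN:9223,fork,reuseaddr,range=172.30.0.11/32 \
#         TCP:host.docker.internal:9222
```

The `range=` option is what provides the source-IP gating: connections from any address outside the /32 are refused, so agent A cannot reach agent B's listen port.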
Why socat for CDP: agent-browser's HTTP discovery honors HTTP_PROXY
fine (goes through Squid normally), but the subsequent WebSocket
connection from its underlying Playwright/ws library doesn't — it
opens a direct TCP socket. With agents on --internal egress (no NAT)
and DNS pointing at the sinkhole, the WS handshake can't reach the
host. socat provides a per-agent raw-TCP shim with source-IP gating;
the agent container resolves host.docker.internal to Squid's egress
IP (via `--add-host`) so the WS handshake lands on socat, which
forwards to the host-side CDP endpoint.
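Putting the container-side pieces together, the docker-run arguments a Squid-based provider might inject for one agent look roughly like this. The function name is hypothetical and the port is an example value; the `--network`, `--dns`, `--add-host`, and `HTTPS_PROXY` pieces are taken from this description:

```typescript
// Illustrative docker-run arguments for one agent container.
function egressArgs(squidIp: string, proxyPort: number): string[] {
  return [
    "--network", "nanoclaw-egress",                     // --internal net, no NAT
    "--dns", squidIp,                                   // dnsmasq NXDOMAIN sinkhole
    "--add-host", `host.docker.internal:${squidIp}`,    // WS handshake lands on socat
    "-e", `HTTPS_PROXY=http://${squidIp}:${proxyPort}`, // all HTTP(S) via Squid
  ];
}
```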
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…work operator skill
Two independent bugs that surface in different setups but share a file.

1. host.docker.internal IPv4 preference

Squid 6 dropped `dns_v4_first` and follows the system resolver's address ordering. On Docker Desktop, host.docker.internal resolves to both an IPv6 ULA and an IPv4 — and the resolver returns the IPv6 first. OneCLI binds only to IPv4, so Squid's first cache_peer attempt silently fails on the IPv6 address and the peer is marked DEAD — every CONNECT then 500s.

entrypoint.sh now resolves to IPv4 via `getent ahostsv4`, copies the mounted config to a writable /tmp/squid.conf with the literal IPv4 substituted, and runs squid against the copy. reconfigureSquid() does the same rewrite before sending `squid -k reconfigure` so live policy edits survive the round-trip. No effect on Linux Docker, where host.docker.internal either doesn't exist or returns IPv4 only.

2. Per-agent CDP socat forwarding

The socat-forwards generator used `cdpPort` for both the listen port and the upstream port. But cdpPort is per-agent (each agent needs a unique container-side listen port so the `range=<src-ip>/32` filter actually isolates them), while the host-side Chrome lives on a single shared port. A second agent with cdpPort=9223 ended up forwarding 9223 → 9223, hitting nothing on the host.

Introduces `cdpHostPort?: number` on InternetAccessPolicy (default 9222 — Chrome's standard remote-debugging port). Forwards now emit `<cdpPort> <ip>/32 host.docker.internal <cdpHostPort>`, so multiple agents on distinct listen ports can share one host Chrome.

Tests cover both directions: multi-agent forwards have distinct listen ports + shared upstream, cdpHostPort override is honored, agents without cdpPort or without IP allocations are skipped, and parsePolicy round-trips cdpHostPort.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
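The fixed generator's behavior can be sketched as follows. The `AgentNet` shape and `socatForwards` name are hypothetical; the emitted line format and the 9222 default come from the commit message above:

```typescript
// Hypothetical per-agent inputs; the real InternetAccessPolicy has more fields.
interface AgentNet {
  slug: string;
  ip?: string;          // allocated egress IP; absent if never allocated
  cdpPort?: number;     // per-agent container-side listen port
  cdpHostPort?: number; // host-side Chrome port, defaulting to 9222
}

// Emit one forwards line per CDP-using agent with an IP allocation:
// <cdpPort> <ip>/32 host.docker.internal <cdpHostPort>
function socatForwards(agents: AgentNet[]): string[] {
  return agents
    .filter((a) => a.ip !== undefined && a.cdpPort !== undefined)
    .map((a) => `${a.cdpPort} ${a.ip}/32 host.docker.internal ${a.cdpHostPort ?? 9222}`);
}
```

Distinct listen ports with a shared upstream port is exactly what lets multiple agents drive one host Chrome while the `range=` filter still isolates them.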
fix(squid): IPv4 cache_peer resolution + per-agent CDP forwarding
Type of Change
- .claude/skills/<name>/, no source changes

Description
Depends on #2477 (NetworkPolicyProvider extension point).
Adds agent networking control — both inter-agent (LAN) and agent → internet (WAN).
- /add-agent-network, a feature skill that installs a Squid-based NetworkPolicyProvider
- /manage-agent-network, an operator skill for managing per-agent networking

Inter-agent networking functionality is unchanged, but it can now be managed with the /manage-agent-network skill.
Agent → internet networking functionality is new.
Effect of running /add-agent-network (after bouncing):
- nanoclaw-squid container that tunnels every CONNECT through OneCLI (OneCLI remains the sole MITM).
- nanoclaw-egress Docker network (--internal, no NAT) routed through the squid container. Agents have no route to the internet except via Squid.
- Per-agent CDP forwarding via socat (ws:// bypasses HTTPS_PROXY).
- WAN policy buckets: full / whitelisted / model-only. The agent's declared provider's API hosts are always allowed.
- /manage-agent-network operator skill for ongoing per-agent edits and network management (WAN policy + LAN edges in agent_destinations).

Per the feature-skill flow in CONTRIBUTING.md, I think the merge path is for you to extract skill/agent-network from these commits; please lmk if you'd like something done differently.