This file is the repository's stable agent constitution. Keep it small. Keep it durable. Keep it routing-focused.
- You are authorized to plan, design, implement, test, and document Netclaw.
- Default to smallest safe change that advances MVP.
- Prefer explicit tradeoffs over hidden complexity.
- Netclaw is an open-source, self-hosted autonomous operations agent built on Akka.Agents.
- MVP is single process, actor-driven, and persistence-backed.
- Session identity is the Slack thread: `{channelId}/{threadTs}`.
- Security posture is default deny.
Read first:
- `PROJECT_CONTEXT.md`
- `TOOLING.md`
- `IMPLEMENTATION_PLAN.md`
- `docs/prd/README.md`
- `.opencode/skills/netclaw-*/SKILL.md`
- `.claude/skills/ralph-*.md`
- relevant `openspec/specs/*/spec.md`
Use these modes based on requested outcome.
Planning mode: use when producing PRDs, specifications, risk analysis, IA, mockups, or execution plans.
Expected outputs:
- updates in `docs/prd/`, `docs/spec/`, `docs/ui/`
- OpenSpec changes and spec deltas in `openspec/`
- explicit traceability to PRD IDs
Build mode: use when implementing production code, tests, and runtime wiring.
Expected outputs:
- code changes with validation steps
- matching spec updates when behavior changes
- no undocumented behavior drift
You MUST use OpenSpec skills for all planning and spec work. Do not manually create or edit OpenSpec artifacts (specs, changes, proposals, delta specs, design docs, task files). Use the skills listed below.
- `/opsx-new` — create a new OpenSpec change
- `/opsx-continue` — create the next artifact in the change workflow
- `/opsx-ff` — fast-forward: generate all remaining artifacts at once
- `/opsx-apply` — implement tasks from an OpenSpec change
- `/opsx-sync` — sync delta specs from a change to main specs
- `/opsx-verify` — verify implementation matches change artifacts
- `/opsx-archive` — archive a completed change
- `/opsx-explore` — think through ideas before creating a change
- `/opsx-onboard` — guided walkthrough of the full OpenSpec workflow
- `/opsx-bulk-archive` — archive multiple completed changes
Hard rule: If you need to create or modify files under openspec/, use the
appropriate skill above. The only exception is updating task checkboxes in
openspec/changes/*/tasks.md during RALPH iterations.
Before coding a capability, discover in this order:
- matching PRD in `docs/prd/`
- matching engineering spec in `docs/spec/`
- matching OpenSpec capability in `openspec/specs/`
- active change plan in `openspec/changes/<name>/`
If planning and implementation artifacts conflict, fix planning artifacts first. If discovery artifacts conflict with each other, update them before implementing.
When adding or changing properties on any `*Config` type in `Netclaw.Configuration`,
update `src/Netclaw.Configuration/Schemas/netclaw-config.v1.schema.json` in the same PR.
The schema uses `"additionalProperties": false` throughout — any new property that is
missing from the schema will be rejected by `ConfigSchemaDoctorCheck` at runtime.
Migration-friendly schema changes: `netclaw doctor --fix` uses `SchemaFixResolver` to
auto-fix common schema validation errors. To ensure smooth upgrades for existing configs:
- When adding a new required property, include a `"default"` value in the schema so the fix resolver can insert it automatically.
- When adding an enum property, always use `"type": "string"` with named values (not integers) so the resolver can coerce stale numeric values.
- When removing a property, no special action is needed — the resolver detects and removes properties disallowed by `additionalProperties: false`.
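As a sketch, a migration-friendly schema addition might look like the following (the `retryLimit` and `logLevel` property names are illustrative, not part of the real schema):

```json
{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "retryLimit": {
      "type": "integer",
      "default": 3
    },
    "logLevel": {
      "type": "string",
      "enum": ["debug", "info", "warning", "error"],
      "default": "info"
    }
  },
  "required": ["retryLimit"]
}
```

Because `retryLimit` carries a `"default"`, the fix resolver can insert it into stale configs automatically, and `logLevel`'s string enum gives the resolver named values to coerce stale entries toward.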
- secure-by-default behavior for gateway and tools
- no hidden bypasses around ACL/policy checks
- no north-star/deferred features in MVP without explicit PRD update
- actor boundaries remain transport-agnostic (pub/sub over direct transport asks)
- persistence types remain framework-owned and serialization-safe
- No silent fallbacks. When something fails or is misconfigured, fail loudly — do not silently degrade to a default. Fallbacks hide bugs and misconfigurations, and on security-relevant paths they can silently escalate privileges. A fallback is only justified when partial failure is a normal runtime condition (e.g., a network retry). If you think you need one, ask first.
- no new Slopwatch violations: run `/dotnet-skills:slopwatch` after code changes
- use `TimeProvider` (not `DateTime.UtcNow`/`DateTimeOffset.UtcNow`) so time can be virtualized in tests. Inject `TimeProvider` via DI; default to `TimeProvider.System` in production. Standardize on `DateTimeOffset`, not `DateTime`. Usage: `_timeProvider.GetUtcNow()` returns `DateTimeOffset`; use `.ToUnixTimeMilliseconds()` for persistence timestamps.
- NEVER add implicit conversions to/from primitive types on value objects. Value objects exist to prevent accidental misuse — an implicit conversion back to the primitive defeats the purpose. Use `.Value` for explicit access and explicit casts where truly needed. If a value object can silently become a string, it provides no more safety than a raw string.
- Comments: skip noise, keep signal. Don't narrate what code does when identifiers already say it (`// increment counter`). Do write comments that help a human reviewer scanning in isolation: security gate explanations (what's blocked, for which audience, why), hidden constraints, subtle invariants, non-obvious fallback strategies, and cross-cutting concerns that aren't visible from the call site. A good comment answers "why would this surprise me?" — not "what does this line do?"
- Do not write tests for trivial code — string formatting, simple concatenation, constructor assignment, and other zero-logic paths are not worth testing.
- Tests should exercise meaningful behavior: state transitions, error handling, serialization round-trips, routing decisions, integration boundaries.
- If the test is just asserting that `$"{a}/{b}"` equals `"a/b"`, delete it.
- Prefer fewer tests that cover real behavior over many tests that pad coverage.
- NEVER use `Thread.Sleep` or `Task.Delay` in tests to wait for conditions. This is a design smell, not just a test smell — if you need a sleep to make a test pass, the production code lacks a proper synchronization signal. Fix the design:
  - Add request/response acks (e.g., `Ask<CommandAck>`) so callers know a state transition has occurred before proceeding.
  - Use Akka.TestKit's `AwaitAssertAsync` for polling assertions on async state.
  - `Task.Delay` in fake/mock services to simulate latency is acceptable only in the fake itself, never in test orchestration logic.
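A sketch of the no-sleep rule using Akka.TestKit's `AwaitAssertAsync`. The `CounterActor` and its string protocol are hypothetical; `AwaitAssertAsync` is the real TestKit polling helper:

```csharp
using System;
using System.Threading.Tasks;
using Akka.Actor;
using Xunit;

// Hypothetical actor under test.
public sealed class CounterActor : ReceiveActor
{
    private int _count;

    public CounterActor()
    {
        Receive<string>(msg =>
        {
            if (msg == "increment") _count++;
            else if (msg == "get") Sender.Tell(_count);
        });
    }
}

public class CounterSpec : Akka.TestKit.Xunit2.TestKit
{
    [Fact]
    public async Task Increment_is_observable_without_sleeping()
    {
        var counter = Sys.ActorOf(Props.Create(() => new CounterActor()));
        counter.Tell("increment");

        // Polls the assertion until it passes or the timeout elapses;
        // no Thread.Sleep, so the test finishes as soon as the actor does.
        await AwaitAssertAsync(async () =>
        {
            var count = await counter.Ask<int>("get", TimeSpan.FromSeconds(1));
            Assert.Equal(1, count);
        }, TimeSpan.FromSeconds(3));
    }
}
```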
After any code changes, run:
- `dotnet slopwatch analyze` — detect reward hacking (new violations fail CI)
- `./scripts/Add-FileHeaders.ps1 -Verify` — ensure all `.cs` files have copyright headers

Slopwatch detects: disabled/skipped tests, suppressed warnings, empty catch
blocks, hardcoded values, TODO-as-done comments. Baseline is in
`.slopwatch/baseline.json` — existing entries are accepted; new violations
must be fixed or explicitly baselined with justification.
Run the behavioral eval suite (`./evals/run-evals.sh`) when changing:
- Identity file templates (`SOUL.md`, `AGENTS.md`, `TOOLING.md` in the init wizard)
- System prompt assembly (`SystemPromptAssembler`, `FileSystemPromptProvider`)
- Skill content (any `SKILL.md` under `feeds/skills/.system/files/`)
- Skill matching logic (`SkillRegistry`, `SystemSkillSyncService` keyword handling)
- Memory pipeline (`SQLiteMemoryRecallCoordinator`, `MemoryProposalGate`, checkpoint triggers)
- Compaction logic (`ObservationPromptBuilder`, `ExtractiveSessionReducer`, compaction behavior)
- Tool definitions (new tools, changed tool schemas, grant categories)
- Model/provider changes (switching models, changing context window config)
- `SessionConfig` defaults
Update eval cases when:
- Adding a new system skill — add a skill auto-load case
- Adding a new tool — add a tool discovery/use case
- Changing identity grounding rules — update identity assertion patterns
- A production session exhibits a new failure pattern — add a regression case
Debugging eval failures: Eval failures are VERY RARELY the model's fault. Almost always the root cause is an instrumentation issue — how we're parsing or asserting on the model's output (regex mismatch, output format change, assertion too brittle). If instrumentation checks out, the next most likely cause is a genuine alignment problem (system prompt, skill content, or context assembly not giving the model the right information). Only after ruling out both should you consider model capability as the cause.
System skills in feeds/skills/.system/files/ are the agent's operational
guidance — they tell the running agent how to use features. When you change a
feature area, the corresponding skill must be updated in the same PR.
| Feature area changed | Skill to update |
|---|---|
| Identity files, SOUL/AGENTS/TOOLING paths, progressive disclosure | netclaw-identity |
| Memory provider routing, SQLite memory tools, general memory guidance | netclaw-memory |
| Config format, daemon health, logs, MCP wiring, diagnostics CLI, doctor | netclaw-operations |
| Skill file format, discovery, authoring workflow | skill-authoring |
| Tool definitions, CLI commands, grant categories, search_tools, scheduling tools | netclaw-operations |
| Search tool behavior, citation policy, web_search/web_fetch usage guidance | search-citation |
| Workspaces, project creation/discovery | netclaw-projects |
Workflow:
- Edit the skill at `feeds/skills/.system/files/{name}/SKILL.md`
- Bump `metadata.version` in the YAML frontmatter
- Do NOT run `generate-skill-manifest.sh` locally — CI generates the manifest and publishes to R2 on release tags or manual `workflow_dispatch`
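A version bump in the frontmatter might look like this (the `name` and `description` fields are illustrative; only `metadata.version` is named by the workflow above):

```yaml
---
name: netclaw-memory
description: How the running agent should use memory tools.
metadata:
  version: 1.3.0  # bumped from 1.2.0 because memory guidance changed
---
```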
Publishing:
- Skills publish automatically as part of the binary release workflow (on git tags)
- For hot-patches without a daemon release: `gh workflow run publish_skills.yml`
- Normal dev pushes do NOT publish to the live feed
If a new feature area needs agent guidance, create a new skill file and add a row to this table.
Done means all of the following are true:
- behavior aligns with PRD + spec
- acceptance criteria are testable and verified
- `dotnet slopwatch analyze` passes (no new violations)
- copyright headers present on all `.cs` files (`./scripts/Add-FileHeaders.ps1 -Verify`)
- operational impact is documented (runbooks or CLI help)
- OpenSpec artifacts are updated or archived appropriately
- system skills updated if a mapped feature area was changed (see table above)
- eval suite passes for changes to identity, skills, memory, or tools (see Eval Suite section)
IMPORTANT: Prefer retrieval-led reasoning over pretraining for any .NET work. Workflow: skim repo patterns -> consult dotnet-skills by name -> implement the smallest change -> note conflicts.
Routing (invoke by name):
- Akka.NET: akka-best-practices, akka-hosting-actor-patterns, akka-testing-patterns
- C# / code quality: csharp-coding-standards, csharp-concurrency-patterns, csharp-api-design, csharp-type-design-performance
- DI / config: microsoft-extensions-dependency-injection, microsoft-extensions-configuration
- Serialization: serialization
- Testing: akka-testing-patterns, snapshot-testing, testing-strategy
- Project structure: project-structure, package-management
Quality gates (use when applicable):
- dotnet-skills:slopwatch — after substantial new/refactor/LLM-authored code
- dotnet-skills:crap-analysis — after tests added/changed in complex code
Specialist agents:
- akka-net-specialist, dotnet-concurrency-specialist, dotnet-performance-analyst, dotnet-benchmark-designer
- If a workflow repeats twice, extract or refine a skill/workflow doc.
- Put volatile detail in repo-owned docs, not this constitution.
- Keep this file stable and high-signal.
- Compressing a skill into a rule is a retrieval operation, not a memory operation. When distilling a dotnet-skills skill (or any upstream authority) into an audit rule, review rubric, or one-line bullet, open the skill first. Preserve the distinctions the skill draws — not just its headline. Rules that collapse two orthogonal axes into one sentence produce false positives that burn trust in the audit. If a rule cannot be written without losing a distinction the source makes, drop the rule rather than elide the distinction.