Toprank + OpenClaw is now a closed-loop SEO operator.
It does not just run one-off audits. It can now:
- pull real SEO signals from Google Search Console,
- diagnose opportunities,
- prioritize the next action using learned history,
- persist proposals and safe operational steps,
- schedule follow-up checks,
- score whether changes worked,
- update learned priors,
- and keep going.
This directory adds that multi-site adaptive layer for OpenClaw without replacing the existing Toprank skills.
SEO becomes a continuous system instead of a manual project.
signals -> diagnosis -> action -> follow-up measurement -> scoring -> learned priors -> better next action
In practice, this means OpenClaw can continuously work a portfolio of sites by reading live data, generating the next best move, revisiting outcomes later, and getting smarter from the result.
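The cycle above can be sketched in miniature in Python. This is purely illustrative: the function names, thresholds, and data shapes here are hypothetical stand-ins, not the actual `openclaw/bin` implementations.

```python
# Illustrative sketch of one closed-loop cycle: diagnose -> act -> score -> learn.
# All names and thresholds are invented for this example.

def diagnose(signals):
    # Pick the page with the weakest click-through rate as the opportunity.
    return min(signals["pages"], key=lambda p: p["ctr"])

def next_action(priors):
    # Bias the action choice by learned win rates (priors); unseen types get 0.5.
    candidates = ["rewrite_title", "expand_content", "add_internal_links"]
    return max(candidates, key=lambda a: priors.get(a, 0.5))

def score(baseline_ctr, observed_ctr):
    # Classify the outcome once follow-up metrics arrive.
    if observed_ctr is None:
        return "inconclusive"
    delta = observed_ctr - baseline_ctr
    if delta > 0.02:
        return "win"
    if delta < -0.02:
        return "loss"
    return "neutral"

def run_cycle(signals, priors, observed_ctr=None):
    page = diagnose(signals)
    action = next_action(priors)
    outcome = score(page["ctr"], observed_ctr)
    # Update priors so the next cycle ranks actions differently.
    if outcome == "win":
        priors[action] = priors.get(action, 0.5) + 0.1
    elif outcome == "loss":
        priors[action] = priors.get(action, 0.5) - 0.1
    return page["url"], action, outcome

signals = {"pages": [{"url": "/a", "ctr": 0.05}, {"url": "/b", "ctr": 0.01}]}
priors = {"rewrite_title": 0.7}
print(run_cycle(signals, priors, observed_ctr=0.04))
```

The point is the shape, not the math: each pass reads signals, commits to one action, scores it later, and leaves the priors slightly different for the next pass.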
- `skills/` — OpenClaw wrapper skills
- `shared/` — adapter rules, artifact contract, policy, and trigger docs
- `artifacts/schemas/` — JSON schemas for runtime artifacts
- `bin/` — small helper scripts for multi-site workspace bootstrapping
- `install/` — installers/bootstrap helpers
- not a second copy of the SEO skill library
- not a replacement for the Claude plugin surface
- not an auto-publisher by default — publishing is an explicit opt-in. The base install is read-only / advisory. If you want OpenClaw to POST ready blog posts to a NotFair Next.js webhook on a cron, pass `--enable-publisher` to `install-openclaw-cron.sh` and export `NOTFAIR_PUBLISH_TOKEN` in the cron environment. The publisher only fires for content-calendar entries the user has explicitly flipped to `status: "ready_to_publish"`. Contract: `openclaw/install/notfair-publisher.md`.
The adaptive layer writes runtime state outside the repo by default:
~/.toprank/openclaw
Override with:
export TOPRANK_OPENCLAW_HOME=/custom/path
Run from the Toprank repo root:
`./openclaw/install/install.sh`

That script:
- creates `~/.toprank/openclaw/` if needed,
- bootstraps `portfolio.json` and `schedule.json`,
- copies all OpenClaw wrapper skills into `~/.openclaw/skills/`,
- links support paths so the wrappers can still resolve this repo's canonical `seo/skills`.
Why copy instead of symlink wrapper skills directly? OpenClaw skill discovery intentionally rejects symlinks that escape the configured skill root. The installer copies wrappers into the OpenClaw skill root and uses stable support links for repo-relative files.
Verify:
openclaw skills check | grep -i toprank
`python3 -m pytest -q openclaw/tests`

Use this as the setup prompt for a fresh machine or a new OpenClaw instance. Replace the placeholders before pasting.
Set up the Toprank OpenClaw SEO Operator on this machine.
Repo:
- If the Toprank repo already exists locally, use it; do not reclone.
- Otherwise clone https://github.com/nowork-studio/toprank and cd into the repo root.
Install:
1. Run: ./openclaw/install/install.sh
2. Verify Toprank skills are discoverable: openclaw skills check | grep -i toprank
3. Run tests: python3 -m pytest -q openclaw/tests
Sites:
- Register these sites if they are not already in ~/.toprank/openclaw/portfolio.json:
- <site_id_1> with GSC property <gsc_property_1>
- <site_id_2> with GSC property <gsc_property_2>
- If GSC properties are unknown, run:
python3 seo/seo-analysis/scripts/list_gsc_sites.py
Then update each site's ~/.toprank/openclaw/sites/<site_id>/site-profile.json with "gsc_property".
Background wiring:
- Install OpenClaw cron jobs with:
./openclaw/install/install-openclaw-cron.sh --to "<delivery_destination>" --channel "<channel>" --thread-id "<optional_thread_id>"
- If there is no chat delivery target, omit --to and install the jobs with --no-deliver instead.
- Do not pass --model unless you first verify the model is accepted by this OpenClaw instance's model allowlist.
Smoke test:
1. Run the scheduler once:
TOPRANK_OPENCLAW_HOME="$HOME/.toprank/openclaw" python3 openclaw/bin/run_scheduler.py
2. Run one weekly review:
TOPRANK_OPENCLAW_HOME="$HOME/.toprank/openclaw" python3 openclaw/bin/weekly_review.py "<site_id_1>"
3. Confirm the review wrote audit.json, action-plan.json, and verification.json under ~/.toprank/openclaw/sites/<site_id>/runs/.
4. Confirm openclaw cron list shows Toprank OpenClaw Scheduler and one Toprank Weekly Review job per active site.
Policy:
- This is an SEO operator loop, not an auto-publisher.
- Do not edit websites, CMS content, repos, or publish changes without explicit approval.
- Weekly review jobs may propose actions and write artifacts only.
Report back with:
- installed skill names,
- active sites and GSC properties,
- cron job ids/schedules,
- smoke-test artifact path,
- any blockers.
`./openclaw/install/bootstrap-site.sh https://example.com`

That creates:
~/.toprank/openclaw/sites/example.com/
├── site-profile.json
├── goals.json
├── latest-state.json
├── learned-patterns.json
├── queue/
├── proposals/
├── runs/
└── feedback/
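A minimal sketch of creating that layout (illustrative only — the real bootstrap is `bootstrap-site.sh`, and the empty-object file contents here are placeholders, not the repo's actual schemas):

```python
# Sketch: materialize the per-site workspace layout shown above.
import json
import tempfile
from pathlib import Path

def bootstrap_site(home, site_id):
    root = Path(home) / "sites" / site_id
    # Work folders for queued items, proposals, run artifacts, and feedback.
    for d in ("queue", "proposals", "runs", "feedback"):
        (root / d).mkdir(parents=True, exist_ok=True)
    # Seed state files only if they do not already exist.
    for f in ("site-profile.json", "goals.json", "latest-state.json",
              "learned-patterns.json"):
        path = root / f
        if not path.exists():
            path.write_text(json.dumps({}, indent=2))
    return root

home = tempfile.mkdtemp()  # demo location; the real home is ~/.toprank/openclaw
root = bootstrap_site(home, "example.com")
print(sorted(p.name for p in root.iterdir()))
```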
Before this layer, Toprank had strong point skills.
Now it has memory and recurrence:
- real signal ingestion via GSC analysis,
- persistent state per website,
- scheduled follow-ups instead of forgotten recommendations,
- feedback scoring instead of vague “seems better”,
- learned priors so future prioritization adapts.
That is the step from “SEO assistant” to “SEO operating loop”.
- `python3 openclaw/bin/onboard_site.py <url> ...` — updates `portfolio.json`, `site-profile.json`, and `goals.json`
- `python3 openclaw/bin/persist_run.py <site> --payload-file payload.json` — writes review artifacts into a timestamped run folder and refreshes `latest-state.json`
- `python3 openclaw/bin/portfolio_review.py` — ranks active sites and writes a portfolio review snapshot
- `python3 openclaw/bin/weekly_review.py <site>` — generates a real weekly review payload from GSC analysis, persists artifacts, and creates a scored follow-up baseline
- `python3 openclaw/bin/improve_page.py <site> --url <url> ...` — persists a page-improvement proposal and follow-up task
- `python3 openclaw/bin/investigate_drop.py <site> --summary "..." ...` — persists a drop investigation and recovery plan
- `python3 openclaw/bin/followups_due.py` — shows which scheduled follow-up items are due now
- `python3 openclaw/bin/run_scheduler.py` — processes due schedule items, materializes follow-up review artifacts, and surfaces manual-attention work
- `python3 openclaw/bin/record_followup_metrics.py <site> <item_id> ...` — records observed metrics on a queued follow-up
- `python3 openclaw/bin/hydrate_followup_gsc.py <site> <item_id> ...` — pulls real GSC metrics into a queued follow-up using the existing `seo-analysis` scripts (uses `site-profile.json` `gsc_property` when present, otherwise falls back to `canonical_url`)
- `python3 openclaw/bin/score_feedback.py --item-file <queue-item.json>` — scores a follow-up as win / neutral / loss / inconclusive
Use these example payloads and flows as templates:
- `openclaw/artifacts/examples/weekly-review-payload.json`
- `openclaw/artifacts/examples/gsc-analysis-sample.json`
- `openclaw/artifacts/examples/improve-page-payload.json`
- `openclaw/artifacts/examples/investigate-drop-payload.json`
- `openclaw/artifacts/examples/scored-feedback-item.json`
- `toprank-site-onboard` — register a site and initialize its work folder
- `toprank-portfolio-review` — rank all active sites by urgency/opportunity
- `toprank-weekly-review` — review one site and propose the next best action
- `toprank-improve-page` — improve one URL on a site
- `toprank-investigate-drop` — traffic-drop recovery workflow
The closed loop is now made of three concrete runtime pieces:
1. Weekly review from real data
   - `weekly_review.py` runs or reads GSC analysis
   - generates `audit.json`, `action-plan.json`, `verification.json`
   - seeds `baseline_metrics` for later scoring
2. Scheduled follow-up evaluation
   - `run_scheduler.py` revisits due `feedback_check` items
   - `hydrate_followup_gsc.py` can pull fresh observed metrics
   - `score_feedback.py` classifies `win` / `neutral` / `loss` / `inconclusive`
3. Learning from outcomes
   - `learned-patterns.json` stores site-level priors
   - weekly review uses those priors to bias future action ranking
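For intuition, prior-biased ranking could look like this sketch. The field names are invented for illustration; the actual `learned-patterns.json` format is defined by this repo's schemas.

```python
# Sketch: bias a base action ranking by historical win rates per action type.
# Item and stats shapes are hypothetical, not the repo's real contracts.

def rank_actions(candidates, learned_patterns):
    def adjusted(action):
        stats = learned_patterns.get(action["type"], {"wins": 0, "attempts": 0})
        # Laplace-smoothed win rate, so unseen action types score a neutral 0.5.
        win_rate = (stats["wins"] + 1) / (stats["attempts"] + 2)
        return action["base_score"] * win_rate
    return sorted(candidates, key=adjusted, reverse=True)

candidates = [
    {"type": "rewrite_title", "base_score": 0.8},
    {"type": "expand_content", "base_score": 0.9},
]
# Past outcomes promote rewrite_title above the higher base score.
patterns = {"rewrite_title": {"wins": 4, "attempts": 5}}
print([a["type"] for a in rank_actions(candidates, patterns)])
```

The design point is that priors modulate, rather than replace, the signal-driven base score: a site with no history ranks purely on fresh data.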
The OpenClaw layer now has a real weekly review runner:
`python3 openclaw/bin/weekly_review.py example.com`

Useful flags:

- `--gsc-property sc-domain:example.com`
- `--analysis-file openclaw/artifacts/examples/gsc-analysis-sample.json`
This runner:
- runs or reads GSC analysis,
- builds `audit.json`, `action-plan.json`, and `verification.json`,
- inherits baseline metrics into the follow-up queue item,
- uses `learned-patterns.json` to bias action ranking,
- turns raw signals into a concrete next action plus a measurable follow-up.
The MVP now includes a simple runner for automation:
python3 openclaw/bin/run_scheduler.py
# or
./openclaw/install/run-scheduler.sh

What it does today:

- processes due `feedback_check` items automatically,
- scores them as `win`, `neutral`, `loss`, or `inconclusive` when metric snapshots exist,
- writes follow-up run artifacts,
- marks processed schedule items,
- updates `learned-patterns.json` with outcome priors,
- surfaces unsupported due items as `ready_for_attention`.
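Conceptually, due-item processing might look like this sketch. The item fields below are invented for illustration; the real contract lives in `artifacts/schemas/`.

```python
# Sketch: select due feedback_check items and classify those with metric
# snapshots; items lacking metrics are surfaced for manual attention.
from datetime import datetime, timezone

def process_due(items, now):
    results = []
    for item in items:
        if item["type"] != "feedback_check":
            continue
        if datetime.fromisoformat(item["due_at"]) > now:
            continue  # not due yet
        if "baseline" in item and "observed" in item:
            delta = item["observed"] - item["baseline"]
            outcome = "win" if delta > 0 else "loss" if delta < 0 else "neutral"
        else:
            outcome = "ready_for_attention"  # no snapshot to score
        results.append((item["id"], outcome))
    return results

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
items = [
    {"id": "a1", "type": "feedback_check", "due_at": "2025-05-30T00:00:00+00:00",
     "baseline": 120, "observed": 150},
    {"id": "a2", "type": "feedback_check", "due_at": "2025-06-05T00:00:00+00:00"},
    {"id": "a3", "type": "feedback_check", "due_at": "2025-05-31T00:00:00+00:00"},
]
print(process_due(items, now))
```

Here `a2` is skipped because it is not yet due, `a1` is scored from its metric snapshot, and `a3` falls through to manual attention.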
Install it with OpenClaw cron:
# With chat delivery for useful weekly-review summaries:
./openclaw/install/install-openclaw-cron.sh \
--to "<delivery_destination>" \
--channel telegram \
--thread-id "<optional_thread_id>"
# Without chat delivery:
./openclaw/install/install-openclaw-cron.sh

The OpenClaw cron installer creates:
- `Toprank OpenClaw Scheduler` — hourly follow-up processor
- `Toprank Weekly Review — <site>` — one weekly review per active portfolio site

It intentionally does not set a model by default. If you want a model override, pass `--model <provider/model>` only after confirming the local OpenClaw model allowlist accepts it.
macOS launchd is still available as a lower-level alternative:
./openclaw/install/install-launchd.sh --write-only
# inspect the plist, then load it for real:
./openclaw/install/install-launchd.sh

Or use the system cron example at `openclaw/install/toprank-openclaw.cron.example`.
This is now the core of a real autonomous operator loop.
The OpenClaw surface is an adapter layer. The existing Toprank skill folders remain canonical. The OpenClaw wrappers add persistent state, portfolio awareness, artifact writing, and policy gates.
