Claude Code slash commands that take a content brief to a published, multi-platform post — with AI-generated images and video.
| | CCM (Gemini Flash + Kling v3 via fal.ai) | Runway Pro | InVideo AI |
|---|---|---|---|
| Per video (5s clip) | ~$0.42 (fal.ai) | $0.39 | $0.67–1.67 |
| Monthly floor | $0 | $35/mo | $25–100/mo |
| Full pipeline | Yes | Video only | Locked to InVideo |
CCM cost is per generated clip (Kling v3 standard via fal.ai at $0.084/sec × 5s). Runway Pro is pay-per-second billed monthly. InVideo cost is subscription-based; per-clip estimate assumes volume. Credit costs are approximate and subject to change.
```mermaid
flowchart LR
  subgraph machine["Your Machine"]
    CC["Claude Code\n/content:*"]
    CH["Chatterbox TTS\n:5002 (swappable)"]
    PZ["Postiz\n:5000"]
    RM["Remotion\n(local renderer)"]
    CF["content/ folder\ndrafts/ • scheduled/"]
  end
  subgraph external["External APIs"]
    FA["fal.ai\n(Kling video + images)"]
    GM["Gemini\n(image gen)"]
  end
  subgraph platforms["Platforms (via Postiz)"]
    YT["YouTube"]
    X["X / Twitter"]
    LI["LinkedIn"]
    MORE["+ more"]
  end
  CC --> CF
  CC --> FA
  CC --> GM
  CC --> CH
  CH --> CF
  FA --> CF
  GM --> CF
  CF --> RM
  RM --> CF
  CC --> PZ
  CF --> PZ
  PZ --> YT
  PZ --> X
  PZ --> LI
  PZ --> MORE
```
Data flow: Brief → script → media (images/video/audio) → Remotion render → Postiz schedule → publish
| Skill | What it does |
|---|---|
| `/content:status` | Checks env vars, pings Postiz and Chatterbox, and reports a status table |
| `/content:create` | Creates content from brief to scheduled post — manual guided or AI-single mode |
| `/content:channel` | Creates or edits a channel profile — voice, audience, platforms, best times |
| `/content:review` | Reviews pending drafts and schedules approved ones |
| `/content:analytics` | Pulls post analytics from Postiz and runs the AutoResearch optimization loop |
Copy the `.claude/` directory into any Claude Code project:

```bash
cp -r .claude/ /path/to/your-project/
```

Set env vars in `.env.local` (not committed):

```bash
POSTIZ_URL=https://your-postiz-domain.com
POSTIZ_API_KEY=your-api-key
FAL_API_KEY=your-fal-key
GEMINI_API_KEY=your-gemini-key
CHATTERBOX_BASE_URL=http://localhost:5002
KNOWLEDGE_BASE_PATH=/path/to/markdown-notes  # optional
```

Then verify everything is connected:

```
/content:status
```

Skills activate automatically once `.claude/skills/<name>/SKILL.md` files are present — no additional imports needed.
- Docker + Docker Compose
- Node.js 18+ (for Remotion)
- GPU with CUDA (for Chatterbox TTS — optional if using captions-only mode)
- Claude Code
Postiz is the scheduling backbone. Self-hosting it requires six containers — not just Postgres and Redis.
| Service | Image | Purpose |
|---|---|---|
| `postiz` | `ghcr.io/gitroomhq/postiz-app:latest` | Main app (frontend + backend + nginx) |
| `postiz-db` | `postgres:17-alpine` | Postiz application database |
| `postiz-redis` | `redis:7-alpine` | Queues and OAuth session state |
| `temporal` | `temporalio/auto-setup:1.28.1` | Workflow orchestration (required) |
| `temporal-db` | `postgres:16-alpine` | Temporal's own dedicated Postgres |
| `temporal-elasticsearch` | `elasticsearch:7.17.27` | Temporal visibility |
Temporal isolation: Temporal must have its own Postgres instance separate from Postiz's. Sharing causes schema conflicts.
```yaml
services:
  postiz:
    image: ghcr.io/gitroomhq/postiz-app:latest
    container_name: postiz
    restart: unless-stopped
    env_file: .env
    environment:
      MAIN_URL: "https://your-domain.com"
      FRONTEND_URL: "https://your-domain.com"
      NEXT_PUBLIC_BACKEND_URL: "https://your-domain.com/api"
      JWT_SECRET: "generate-with-openssl-rand-hex-32"
      DATABASE_URL: "postgresql://postiz:postiz@postiz-db:5432/postiz"
      REDIS_URL: "redis://postiz-redis:6379"
      BACKEND_INTERNAL_URL: "http://localhost:3000"
      TEMPORAL_ADDRESS: "temporal:7233"
      IS_GENERAL: "true"
      STORAGE_PROVIDER: "local"
      UPLOAD_DIRECTORY: "/uploads"
      NEXT_PUBLIC_UPLOAD_DIRECTORY: "/uploads"
    volumes:
      - postiz-uploads:/uploads
      - postiz-config:/config
    ports:
      - "5000:5000"
    depends_on:
      postiz-db:
        condition: service_healthy
      postiz-redis:
        condition: service_healthy
      temporal:
        condition: service_started
    networks:
      - postiz-net

  postiz-db:
    image: postgres:17-alpine
    container_name: postiz-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: postiz
      POSTGRES_PASSWORD: postiz
      POSTGRES_DB: postiz
    volumes:
      - postiz-db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postiz"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - postiz-net

  postiz-redis:
    image: redis:7-alpine
    container_name: postiz-redis
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - postiz-net

  temporal-db:
    image: postgres:16-alpine
    container_name: temporal-db
    restart: unless-stopped
    environment:
      POSTGRES_USER: temporal
      POSTGRES_PASSWORD: temporal
      POSTGRES_DB: temporal
    volumes:
      - temporal-db:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U temporal"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - postiz-net

  temporal-elasticsearch:
    image: elasticsearch:7.17.27
    container_name: temporal-elasticsearch
    restart: unless-stopped
    environment:
      - cluster.routing.allocation.disk.threshold_enabled=true
      - cluster.routing.allocation.disk.watermark.low=512mb
      - cluster.routing.allocation.disk.watermark.high=256mb
      - cluster.routing.allocation.disk.watermark.flood_stage=128mb
      - discovery.type=single-node
      - ES_JAVA_OPTS=-Xms256m -Xmx256m
      - xpack.security.enabled=false
    volumes:
      - temporal-es:/usr/share/elasticsearch/data
    networks:
      - postiz-net

  temporal:
    image: temporalio/auto-setup:1.28.1
    container_name: temporal
    restart: unless-stopped
    environment:
      DB: postgres12
      DB_PORT: 5432
      POSTGRES_USER: temporal
      POSTGRES_PWD: temporal
      POSTGRES_SEEDS: temporal-db
      DYNAMIC_CONFIG_FILE_PATH: config/dynamicconfig/development-sql.yaml
      ENABLE_ES: "true"
      ES_SEEDS: temporal-elasticsearch
      ES_VERSION: v7
    volumes:
      - ./dynamicconfig:/etc/temporal/config/dynamicconfig
    depends_on:
      temporal-db:
        condition: service_healthy
      temporal-elasticsearch:
        condition: service_started
    networks:
      - postiz-net

volumes:
  postiz-db:
  postiz-uploads:
  postiz-config:
  temporal-db:
  temporal-es:

networks:
  postiz-net:
```

Secrets live in a `.env` file, not hardcoded in the compose file — `env_file: .env` loads them.
```bash
# Core
JWT_SECRET=<generate with: openssl rand -hex 32>
DATABASE_URL=postgresql://postiz:postiz@postiz-db:5432/postiz
REDIS_URL=redis://postiz-redis:6379

# YouTube
YOUTUBE_CLIENT_ID=
YOUTUBE_CLIENT_SECRET=

# X (Twitter)
X_API_KEY=
X_API_SECRET=
X_CLIENT_ID=
X_CLIENT_SECRET=

# LinkedIn
LINKEDIN_CLIENT_ID=
LINKEDIN_CLIENT_SECRET=
```

Add credentials as you connect each platform. Restart with `docker-compose up -d --force-recreate postiz` after editing.
Required by Temporal — create `dynamicconfig/development-sql.yaml` alongside `docker-compose.yml`:

```yaml
limit.maxIDLength:
  - value: 255
    constraints: {}
system.forceSearchAttributesCacheRefreshOnRead:
  - value: true
    constraints: {}
```

Cold start timing: Elasticsearch takes ~30s to become ready, and the Postiz backend takes ~45s after Temporal connects. Total cold start: ~90s.
```bash
docker-compose up -d

# Wait ~90s, then check:
docker logs postiz 2>&1 | grep 'running on'
# Should see: Backend is running on: http://localhost:3000
```

Env var changes: use `docker-compose up -d --force-recreate postiz` (not `docker-compose restart`) when changing env vars — `restart` does not apply environment changes.
OAuth callbacks require a public HTTPS URL. Cloudflare Tunnel is the recommended approach for self-hosted setups.
```bash
# Install cloudflared
curl -fsSL https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -o /tmp/cloudflared
sudo install /tmp/cloudflared /usr/local/bin/cloudflared

# Authenticate with Cloudflare
cloudflared tunnel login

# Create the tunnel
cloudflared tunnel create postiz

# Configure ~/.cloudflared/config.yml
# tunnel: <tunnel-id>
# credentials-file: ~/.cloudflared/<tunnel-id>.json
# ingress:
#   - hostname: postiz.yourdomain.com
#     service: http://localhost:5000
#   - service: http_status:404

# Create the DNS record
cloudflared tunnel route dns postiz postiz.yourdomain.com

# Install as a system service
sudo cloudflared --config ~/.cloudflared/config.yml service install
sudo systemctl enable cloudflared
sudo systemctl start cloudflared
```

Once Postiz is running and reachable via a public URL, go to Settings → Integrations in the Postiz UI to connect each platform.
After connecting a platform, click the integration row to copy its UUID, then paste it into the Integration IDs table in your `content/<channel>/CHANNEL.md`.
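The exact layout of `CHANNEL.md` is managed by the skills, but the Integration IDs table is conceptually just a platform-to-UUID mapping. An illustrative sketch (UUIDs and layout are made up, not the authoritative schema):

```markdown
## Integration IDs

| Platform | Integration ID |
|---|---|
| youtube  | 3f2b1c9e-1111-2222-3333-444455556666 |
| x        | 7a8d4e21-1111-2222-3333-444455556666 |
| linkedin | c5e90f77-1111-2222-3333-444455556666 |
```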
- Google Cloud Console → enable the YouTube Data API v3
- Create OAuth 2.0 credentials (Web application)
- Authorized redirect URI: `https://your-domain.com/integrations/social/youtube`
- Add to the Postiz `.env`: `YOUTUBE_CLIENT_ID`, `YOUTUBE_CLIENT_SECRET`
- Add your Google account as a test user in the OAuth consent screen (no need to publish for personal use)
- In the Postiz UI: Add integration → YouTube → Connect with Google
- Copy the integration UUID and paste it into `CHANNEL.md`
- developer.x.com → create an app (pay-per-use plan: ~$0.01/post)
- Enable User authentication settings → OAuth 1.0a + OAuth 2.0, Read and Write
- Callback URI: `https://your-domain.com/integrations/social/x`
- Add to the Postiz `.env`: `X_API_KEY`, `X_API_SECRET`, `X_CLIENT_ID`, `X_CLIENT_SECRET`
- In the Postiz UI: Add integration → X / Twitter → Connect with X
- Copy the integration UUID and paste it into `CHANNEL.md`

Note: X post payloads require `settings.who_can_reply_post`. Valid values: `everyone`, `following`, `mentionedUsers`, `subscribers`, `verified`. The skills handle this automatically.
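The `who_can_reply_post` field is easy to forget when building payloads by hand. A minimal sketch of the validation the skills effectively perform — the surrounding payload shape here is illustrative, not the exact Postiz schema:

```javascript
// Valid values per the note above; anything else fails on the X side.
const VALID_REPLY_SETTINGS = [
  "everyone", "following", "mentionedUsers", "subscribers", "verified",
];

// Attach a validated who_can_reply_post to a post's settings object.
// `post` is a simplified stand-in for a real Postiz post payload.
function withReplySetting(post, whoCanReply = "everyone") {
  if (!VALID_REPLY_SETTINGS.includes(whoCanReply)) {
    throw new Error(`invalid who_can_reply_post: ${whoCanReply}`);
  }
  return {
    ...post,
    settings: { ...post.settings, who_can_reply_post: whoCanReply },
  };
}

const payload = withReplySetting({ content: "hello" }, "everyone");
console.log(payload.settings.who_can_reply_post); // "everyone"
```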
- developer.linkedin.com → create an app (select LinkedIn as the company page)
- Request products: Share on LinkedIn + Sign In with LinkedIn using OpenID Connect
- Authorized redirect URL: `https://your-domain.com/integrations/social/linkedin`
- Add to the Postiz `.env`: `LINKEDIN_CLIENT_ID`, `LINKEDIN_CLIENT_SECRET`

Known Postiz bug — re-apply after every `docker-compose pull`: Postiz requests org-level scopes (`rw_organization_admin`, `w_organization_social`, `r_organization_social`) that LinkedIn's Pages API does not grant to personal apps, which causes OAuth to fail. Patch the provider file inside the container:

```bash
# Remove org scopes from the personal LinkedIn provider
docker exec postiz sed -i \
  "/'rw_organization_admin',/d; /'w_organization_social',/d; /'r_organization_social',/d; /'r_basicprofile',/d" \
  /app/apps/backend/dist/libraries/nestjs-libraries/src/integrations/social/linkedin.provider.js \
  /app/apps/orchestrator/dist/libraries/nestjs-libraries/src/integrations/social/linkedin.provider.js

# Kill the old backend process and restart
docker exec postiz sh -c 'kill $(ss -tlnp | grep 3000 | grep -oP "pid=\K[0-9]+")'
sleep 5
docker exec postiz pm2 restart backend
```

This patch is lost on container rebuild — re-apply it after `docker-compose pull && docker-compose up -d`, and watch the Postiz changelog for an upstream fix.

- In the Postiz UI: Add integration → LinkedIn → Connect with LinkedIn
- Copy the integration UUID and paste it into `CHANNEL.md`
`POSTIZ_URL` should be the base URL only (e.g., `https://your-domain.com`). The scripts add the `/api/public/v1/` path prefix automatically, and the API key is sent as a plain `Authorization` header (no `Bearer` prefix).

```bash
# Test connectivity directly
curl -H "Authorization: $POSTIZ_API_KEY" "$POSTIZ_URL/api/public/v1/integrations"
```

Self-hosted vs. cloud: the `/api/` prefix is required for self-hosted Postiz. The official Postiz docs show `/public/v1/`, which applies only to Postiz Cloud.
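The base-URL-plus-prefix and plain-header conventions can be captured in one small helper. A sketch, mirroring the curl example above (the helper name and payload handling are illustrative, not part of the repo's scripts):

```javascript
// Build a self-hosted Postiz API request: base URL + /api/public/v1/ prefix,
// plain API key in the Authorization header (no "Bearer " prefix).
function postizRequest(baseUrl, apiKey, endpoint) {
  const url =
    baseUrl.replace(/\/+$/, "") +        // drop any trailing slash
    "/api/public/v1/" +
    endpoint.replace(/^\/+/, "");        // drop any leading slash
  return { url, headers: { Authorization: apiKey } };
}

const { url, headers } = postizRequest("https://your-domain.com/", "your-api-key", "integrations");
console.log(url); // https://your-domain.com/api/public/v1/integrations

// Usage with fetch (Node 18+):
// const res = await fetch(url, { headers });
// const integrations = await res.json();
```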
| Symptom | Likely cause | Fix |
|---|---|---|
| 401 from `/api/public/v1/integrations` | Wrong API key | Settings → Developers → Public API → copy the key |
| LinkedIn OAuth fails with an org-scope error | Self-hosted scope issue | Patch `linkedin.provider.js` (see the LinkedIn section above) |
| Postiz not ready after `docker-compose up` | Cold start takes ~90s | Wait and retry; check `docker logs postiz` |
| YouTube integration missing | Not connected yet | Add the integration in the Postiz UI |
| X post fails with 400 | Missing `who_can_reply_post` | Use the `/content:create` skill — it sets this automatically |
Chatterbox provides local AI voice synthesis. A GPU with CUDA is required.

- Install from https://github.com/resemble-ai/chatterbox (Python, CUDA GPU required)
- Default port: 5002
- Set `CHATTERBOX_BASE_URL=http://localhost:5002`
- Optional: set `CHATTERBOX_START_CMD` to a shell command that launches the server — `/content:create` will offer to auto-launch it at Step V6.5 if Chatterbox is not reachable; `/content:status` reports reachability only

Captions-only mode: if you don't have a compatible GPU, skip Chatterbox entirely. During the `/content:create` video flow, you'll be prompted to choose between narration and captions-only. Captions are rendered from the narration text via Remotion, without audio.
fal.ai provides Kling video generation and Flux-based image generation.

- Create an account at https://fal.ai
- Generate an API key in the dashboard
- Set `FAL_API_KEY` in `.env.local`
- Check your balance at https://fal.ai/dashboard before running generation — the scripts estimate cost before confirming but cannot read your balance directly

Pricing: Kling v3 standard runs at ~$0.084/second, so a 5-second clip costs ~$0.42. The `/content:create` skill shows a cost estimate and requires confirmation before generating.
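The cost math is worth sanity-checking before a batch run. A tiny estimator using the ~$0.084/sec figure quoted above — rates change, so treat the constant as an assumption and check fal.ai for current pricing:

```javascript
// Approximate Kling v3 standard cost via fal.ai, using the ~$0.084/sec
// figure quoted in this doc. An estimate only, not live pricing.
const KLING_RATE_PER_SECOND = 0.084;

function estimateKlingCost(clipSeconds, clipCount = 1) {
  const raw = KLING_RATE_PER_SECOND * clipSeconds * clipCount;
  return Math.round(raw * 100) / 100; // round to cents
}

console.log(estimateKlingCost(5));    // 0.42 — one 5-second clip
console.log(estimateKlingCost(5, 6)); // 2.52 — a six-clip storyboard
```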
Gemini provides image generation via Gemini Flash.

- Get an API key at https://aistudio.google.com/apikey
- Set `GEMINI_API_KEY` in `.env.local`

Gemini Flash has a generous free tier — it's the default image provider and costs nothing at most usage volumes.
Remotion renders the final MP4 video from the storyboard and generated clips.

```bash
# From the project root
npm install
```

Remotion dependencies are in `package.json`. No separate setup is needed — the scripts initialize a Remotion workspace in `content/_remotion/` automatically on first use.
`KNOWLEDGE_BASE_PATH` points to a folder of markdown files. When you enter a topic in `/content:create`, Claude searches this folder and presents matching sources for inclusion in the script. This grounds generated content in your actual thinking rather than generic LLM knowledge.

```bash
KNOWLEDGE_BASE_PATH=/home/yourname/vault/learnings
```

Use an absolute path — tilde (`~`) expansion is not supported.
Compatible with any markdown PKM:

- MindStone — point at `~/vault/learnings/` (processed notes, not `daily/` or `projects/`)
- Obsidian — point at your notes folder or a specific subfolder (e.g. `~/obsidian-vault/Resources/`)
- Logseq — point at `~/logseq-graph/pages/` (Logseq stores all pages as `.md` files)
- Plain markdown folder — any directory of `.md` files works
The search uses `scripts/lib/kb.js` — a grep-based search that ranks files by match count. No indexing, no additional services required.

Performance notes:

- Fast on folders up to ~1,000 files
- For very large vaults (10,000+ files), point at a subfolder rather than the vault root
- Binary files are skipped automatically; hidden files and directories (prefixed with `.`) are excluded
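For intuition, the ranking approach is roughly "count matches per file, sort descending". A self-contained sketch over an in-memory map — the real `scripts/lib/kb.js` walks the folder and applies the skip rules above; this only illustrates the ranking idea:

```javascript
// Rank "files" by case-insensitive match count for a query term.
// `files` is an in-memory { filename: contents } stand-in for the vault.
function rankByMatches(files, query) {
  // Escape regex metacharacters so the query is matched literally.
  const re = new RegExp(query.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"), "gi");
  return Object.entries(files)
    .map(([name, text]) => ({ name, matches: (text.match(re) || []).length }))
    .filter((f) => f.matches > 0)
    .sort((a, b) => b.matches - a.matches);
}

const ranked = rankByMatches(
  {
    "ts-notes.md": "TypeScript generics. TypeScript narrowing.",
    "go-notes.md": "Goroutines and channels.",
    "misc.md": "One mention of typescript.",
  },
  "typescript"
);
console.log(ranked.map((f) => f.name)); // ["ts-notes.md", "misc.md"]
```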
Verify setup: run `/content:status` — the Knowledge Base row should show the file count. Or test directly:

```bash
node --input-type=module <<'EOF'
import { suggest } from './scripts/lib/kb.js';
const results = suggest('typescript');
console.log(results.slice(0, 3));
EOF
```

Each channel has its own folder under `content/`. The scripts manage this structure automatically.
```
content/
  <channel-name>/
    CHANNEL.md              # channel profile, voice, platform settings, integration UUIDs
    channel-stats.json      # analytics cache (written by /content:analytics)
    scores.json             # AutoResearch eval scores
    text/
      <platform>/
        prompt.md           # AutoResearch learned prompt overlay (written by /content:analytics)
    drafts/
      <slug>/
        meta.json           # piece state machine (tracks status per platform)
        script-{platform}.md  # per-platform text scripts (e.g. script-linkedin.md)
        images/             # generated images (one per platform size)
        video/
          script.md         # video script (YAML scene blocks)
          clips/            # generated MP4 clips (Kling or stock footage)
          audio/            # narration WAVs (Chatterbox output)
          rendered/         # final rendered MP4s (Remotion output)
    scheduled/
      <slug>/               # piece folder moved here after scheduling succeeds
    published/
      <slug>/               # piece folder moved here after publish is confirmed
```

A piece stays in `drafts/` until all target platforms are successfully scheduled. Partial failures (e.g., one platform fails) leave the piece in `drafts/` with `meta.json` recording per-platform status, so `/content:review` can retry without starting over.
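The exact `meta.json` schema is owned by the scripts, but conceptually it records one status per target platform so a retry knows what to skip. An illustrative (not authoritative) shape, with hypothetical field names:

```json
{
  "slug": "typescript-narrowing",
  "status": "draft",
  "platforms": {
    "linkedin": { "status": "scheduled" },
    "x": { "status": "failed", "error": "400: missing who_can_reply_post" },
    "youtube": { "status": "pending" }
  }
}
```

On the next `/content:review` pass, only the `failed` and `pending` entries would be retried.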
Copy the `.claude/` directory from this repo into any Claude Code project root. Skills activate automatically once `.claude/skills/<name>/SKILL.md` files are present — no `@path` imports needed. Verify your setup with `/content:status`.

```bash
cp -r .claude/ /path/to/your-project/
```

| Variable | Purpose |
|---|---|
| `POSTIZ_URL` | Self-hosted Postiz base URL (no trailing slash) |
| `POSTIZ_API_KEY` | Postiz API key (Settings → Developers → Public API) |
| `FAL_API_KEY` | fal.ai API key (required for video and fal.ai image generation) |
| `GEMINI_API_KEY` | Gemini API key (required for Gemini image generation) |
| `CHATTERBOX_BASE_URL` | Chatterbox TTS base URL (default: `http://localhost:5002`) |
| `CHATTERBOX_START_CMD` | Optional shell command to auto-launch Chatterbox from `/content:create` (Step V6.5) |
| `KNOWLEDGE_BASE_PATH` | Path to a folder of markdown files for source suggestions (optional) |
| `PEXELS_API_KEY` | Pexels API key for stock footage search (optional — free tier at pexels.com/api) |
| `PIXABAY_API_KEY` | Pixabay API key for stock footage search (optional — free tier at pixabay.com/api) |
Set these in `.env.local` (not committed). See `.env.example` for a template.

API note: self-hosted Postiz routes the API through `/api/public/v1/` (not `/public/v1/`), and the `Authorization` header takes the plain API key — no `Bearer` prefix. The scripts handle both automatically.