This repository is a standalone Laravel application that runs Mixpost Lite and adds a Postgres+pgvector backed ingestion/retrieval system plus an OpenRouter-backed AI content generation pipeline.
If you’re looking for the upstream Mixpost package (not the standalone app), see https://github.com/inovector/mixpost.
- Mixpost Lite (self-hosted social media scheduling/management)
- AI content generation system (prompting, templates, retrieval, strict JSON validation/repair, snapshots + replay)
- Content ingestion pipeline (text/bookmarks/transcripts → knowledge items/chunks → embeddings)
- Retrieval + evaluation harness (ingestion quality reports + generation probes)
- Voice Profiles (derive a voice from selected source posts and apply it during generation)
- Social Watcher package (ingest/normalize social content via Apify; used as a post source for Voice Profiles)
- Billing package (local package providing billing endpoints/workflows)
- Backend: Laravel (PHP)
- DB: PostgreSQL (with the `pgvector` extension)
- Frontend assets: Vite
- AI provider: OpenRouter (chat/classification/embeddings/image defaults configured in `config/services.php`)
- Jobs: Laravel queue (required for ingestion + some AI tasks)
Prereqs:
- PHP + Composer
- Node.js + npm
- Docker (for Postgres+pgvector)
Install dependencies:

```shell
composer install
npm install
```

Configure env:

```shell
copy .env.example .env
php artisan key:generate
```

Start Postgres+pgvector (see docker-compose.yml):

```shell
docker compose up -d
php artisan db:verify-pgvector
```

Run migrations:

```shell
php artisan migrate
```

Run the queue worker (required for ingestion + some AI flows):

```shell
php artisan queue:work
```

Run the web app + frontend assets:

```shell
php artisan serve
npm run dev
```

All defaults live in .env.example. The most important additions for this app:
- You must use PostgreSQL for the retrieval stack.
- Ensure the `vector` extension is installed in your DB.
- Sanity check: `php artisan db:verify-pgvector`
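If the artisan check fails, you can inspect the extension directly with `psql`. A minimal sketch, assuming your Postgres image ships pgvector:

```sql
-- Install pgvector into the current database (no-op if already present)
CREATE EXTENSION IF NOT EXISTS vector;

-- Confirm it is installed and see which version is active
SELECT extname, extversion FROM pg_extension WHERE extname = 'vector';
```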
Vector knobs: `config/vector.php` controls similarity thresholds and per-intent retrieval caps.
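The shape of such a config is roughly as follows; this is a hypothetical sketch, and the actual key names in `config/vector.php` may differ:

```php
<?php

// Hypothetical sketch of config/vector.php -- key names are illustrative;
// see the file itself for the authoritative knobs.
return [
    // Minimum similarity score a chunk must reach to be retrieved
    'similarity_threshold' => env('VECTOR_SIMILARITY_THRESHOLD', 0.75),

    // Per-intent caps on how many chunks retrieval may return
    'retrieval_caps' => [
        'default' => 8,
    ],
];
```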
Configured in `config/services.php` under the `openrouter` key.
Common env vars:
- `OPENROUTER_API_KEY`
- `OPENROUTER_API_URL` (default: `https://openrouter.ai/api/v1`)
- `OPENROUTER_MODEL` (chat/generation)
- `OPENROUTER_CLASSIFIER_MODEL`
- `OPENROUTER_EMBED_MODEL`
- `OPENROUTER_DEFAULT_MODEL` (image generation default)
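A typical `.env` fragment might look like this; the key and model values are placeholders, and only the API URL default comes from `config/services.php`:

```env
OPENROUTER_API_KEY=sk-or-...
OPENROUTER_API_URL=https://openrouter.ai/api/v1
# Pick model slugs available on OpenRouter for each stage:
OPENROUTER_MODEL=...
OPENROUTER_CLASSIFIER_MODEL=...
OPENROUTER_EMBED_MODEL=...
OPENROUTER_DEFAULT_MODEL=...
```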
AI behavior knobs:
`config/ai.php` (model selection by stage, retrieval weights/heuristics, evaluation options)
The AI stack is designed around a strict, debuggable pipeline:
- classify intent / task
- retrieve relevant knowledge (pgvector similarity + heuristics)
- assemble context (business facts, swipes, voice profile, retrieved chunks)
- render/parse the selected template
- call the LLM (often in JSON-only mode)
- validate + repair output (no partial/truncated JSON)
- persist a snapshot (inputs, options, retrieved items, outputs, scores)
- optionally replay snapshots for debugging and regression checks
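A persisted snapshot from the last two steps might look roughly like this. Field names here are illustrative, not the actual schema; see `ContentGeneratorService` and the snapshot commands for the real shape:

```json
{
  "id": "snap-example",
  "intent": "...",
  "inputs": { "prompt": "...", "options": {} },
  "retrieved": [ { "chunk_id": "...", "score": 0.82 } ],
  "output": { "json": {}, "valid": true, "repaired": false },
  "scores": {}
}
```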
Primary implementation entrypoint:
app/Services/Ai/ContentGeneratorService.php
CLI tooling for debugging:
- `php artisan ai:replay-snapshot {snapshot_id}` (see `app/Console/Commands/ReplaySnapshot.php`)
- `php artisan ai:list-snapshots` (see `app/Console/Commands/ListSnapshots.php`)
- `php artisan ai:show-prompt ...` (see `app/Console/Commands/ShowPrompt.php`)
Detailed docs:
- docs/features/content-generator-service.md
- docs/features/ai_content_generation_chat_system.md
- docs/features/ai-controller-generate-chat-response.md
- docs/features/ai_content_generation_and_template_parsing.md
- docs/ai_content_generation_refactor_overview.md
The ingestion system turns internal content into searchable, embedded knowledge.
High-level flow:
- Create an ingestion source (text/file/bookmark/transcript)
- Normalize + dedupe
- Create a knowledge item
- Chunk content into knowledge chunks
- Classify chunks (so retrieval can filter/weight)
- Embed chunks into `knowledge_chunks.embedding_vec`
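Retrieval over those embeddings boils down to a pgvector nearest-neighbour query. A minimal sketch — `<=>` is pgvector's cosine-distance operator, and column names other than `embedding_vec` are assumptions:

```sql
-- Find the 8 chunks closest to a query embedding (cosine distance).
-- :query_embedding is a vector literal such as '[0.1, 0.2, ...]'.
SELECT id, content
FROM knowledge_chunks
ORDER BY embedding_vec <=> :query_embedding
LIMIT 8;
```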
Docs:
- docs/features/ingestion-pipeline.md
- docs/features/ingestion-eval-and-retrieval.md
The eval harness runs ingestion on a single input and produces a structured report (and can optionally run generation probes).
Command:
```shell
php artisan ai:ingestion:eval --org=<ORG_UUID> --user=<USER_UUID> --input=<path> --title="..." --format=both --cleanup --log-files --run-generation
```

See app/Console/Commands/AiIngestionEval.php for options.
This dispatches jobs to extract Business Facts and Swipe Structures from existing content:
```shell
php artisan ai:hydrate --type=all
```

See app/Console/Commands/HydrateAiContext.php.
Voice Profiles let you build a “voice” from example posts and apply it during generation.
- Data lives in `voice_profiles` and `voice_profile_posts`.
- Voice profiles often use Social Watcher normalized posts as training material.
Docs:
docs/features/voice_profiles.md
CLI helper to attach source posts:
```shell
php artisan voice:attach-posts --profile=<VOICE_PROFILE_ID> --posts=<comma_separated_normalized_content_ids> --rebuild
```

See app/Console/Commands/VoiceAttachPosts.php.
Social Watcher lives in packages/social-watcher and provides:
- ingestion from Apify
- normalization into a consistent “NormalizedContent” shape
- API routes (separate from the core `/api/v1` routes)
See packages/social-watcher/README.md.
Billing lives in packages/laravel-billing-new and provides backend billing endpoints/workflows.
- See packages/laravel-billing-new/README.md
- Quick start: packages/laravel-billing-new/QUICKSTART.md
Most API routes are under /api/v1.
- Authoritative map: routes/api.php
- AI endpoints: app/Http/Controllers/Api/V1/AiController.php
Social Watcher routes are typically exposed under their own prefix; see the package README for details.
- Run `php artisan db:verify-pgvector`
- Ensure your Postgres container has the extension installed (see the image used in docker-compose.yml).
- Ingestion and several AI workflows rely on queued jobs.
- Start a worker: `php artisan queue:work`
- Confirm `OPENROUTER_API_KEY` in `.env`
- Confirm the API base URL is `https://openrouter.ai/api/v1` (default in `config/services.php`)