Current Version: 1.14.0
Derived from the GAAC (GitHub-as-a-Context) project.
A Claude Code plugin that provides iterative development with independent AI review. Build with confidence through continuous feedback loops.
RLCR stands for Ralph-Loop with Codex Review, inspired by the official ralph-loop plugin and enhanced with independent Codex review. The name also reads as Reinforcement Learning with Code Review -- reflecting the iterative cycle where AI-generated code is continuously refined through external review feedback.
- Iteration over Perfection -- Rather than expecting perfect output in one shot, Humanize relies on continuous feedback loops that catch issues early and refine the work incrementally.
- One Build + One Review -- Claude implements, Codex independently reviews. No blind spots.
- Ralph Loop with Swarm Mode -- Iterative refinement continues until all acceptance criteria are met. Optionally parallelize with Agent Teams.
The loop has two phases: Implementation (Claude works, Codex reviews summaries) and Code Review (Codex checks code quality with severity markers). Issues feed back into implementation until resolved.
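The two-phase cycle described above can be sketched as a plain shell loop. This is an illustrative sketch only: the `issues` variable stands in for Codex's review verdict (stubbed here to report issues on the first two passes), and the real loop is orchestrated by the plugin, not by a script like this.

```shell
# Sketch of the RLCR loop shape: implement, review, feed issues back,
# and stop once a review pass comes back clean.
pass=0
while :; do
  pass=$((pass + 1))
  # Stub for Codex's independent review: pretend passes 1-2 find issues.
  if [ "$pass" -lt 3 ]; then
    issues="issue found on pass $pass"
  else
    issues=""
  fi
  [ -z "$issues" ] && break        # acceptance criteria met: exit the loop
  echo "feeding back: $issues"     # issues drive the next implementation pass
done
echo "review clean after $pass passes"
```

The key property is the exit condition: the loop terminates only when a full review pass reports nothing, not after a fixed number of iterations.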
# Add humania marketplace
/plugin marketplace add humania-org/humanize
# To use the development branch for experimental features
/plugin marketplace add humania-org/humanize#dev
# Then install the humanize plugin
/plugin install humanize@humania

Requires the codex CLI for review. See the full Installation Guide for prerequisites and alternative setup options.
- Generate a plan from your draft:
  /humanize:gen-plan --input draft.md --output docs/plan.md
- Run the loop:
  /humanize:start-rlcr-loop docs/plan.md
- Monitor progress:
  source <path/to/humanize>/scripts/humanize.sh
  humanize monitor rlcr
- Usage Guide -- Commands, options, environment variables
- Install for Claude Code -- Full installation instructions
- Install for Codex -- Codex skill runtime setup
- Install for Kimi -- Kimi CLI skill setup
MIT
