OpenAI-compatible API proxy for the Cursor CLI. Lets any OpenAI client use your Cursor subscription.
Requirements:

- Node.js 20+
- Active Cursor subscription (Pro / Business)
1. Install the Cursor CLI and log in:

   ```sh
   # macOS / Linux
   curl https://cursor.com/install -fsS | bash

   # Windows PowerShell
   irm 'https://cursor.com/install?win32=true' | iex
   ```

   ```sh
   agent login          # opens browser, sign in with your Cursor account
   agent --list-models  # verify it works
   ```

   Headless? Skip `agent login`, generate a key at cursor.com/settings and `export CURSOR_API_KEY=<key>`.
2. Install and start the proxy:

   ```sh
   npm install -g cursor-agent-api-proxy
   cursor-agent-api         # starts in background on http://localhost:4646
   cursor-agent-api status  # check if running
   ```

3. Verify:

   ```sh
   curl http://localhost:4646/health
   ```

Other commands:

```sh
cursor-agent-api stop        # stop
cursor-agent-api restart     # restart
cursor-agent-api start 8080  # start on a custom port
cursor-agent-api run         # run in foreground (for debugging)
```

Logs: `~/.cursor-agent-api/server.log`
If you haven't set up OpenClaw yet, run the onboarding wizard:

```sh
openclaw onboard
```

When the wizard asks you to configure Model/Auth:

- Provider type → choose Custom Provider (OpenAI-compatible)
- Base URL → `http://localhost:4646/v1`
- API Key → type `not-needed` (if you ran `agent login`)
- Default model → `auto` (or any model from `agent --list-models`)
Already have OpenClaw running? Edit the config file directly:

```json5
{
  env: {
    // "not-needed" = already logged in via agent login
    // or set your Cursor API key here to forward it per-request
    OPENAI_API_KEY: "not-needed",
    OPENAI_BASE_URL: "http://localhost:4646/v1",
  },
  agents: {
    defaults: {
      model: { primary: "openai/auto" },
    },
  },
}
```

Model IDs match `agent --list-models` output directly:
```
auto                 # auto-select
gpt-5.2              # GPT-5.2
gpt-5.3-codex        # GPT-5.3 Codex
opus-4.6-thinking    # Claude Opus 4.6 (thinking)
sonnet-4.5-thinking  # Claude Sonnet 4.5 (thinking)
gemini-3-pro         # Gemini 3 Pro
```

Full list: `curl http://localhost:4646/v1/models` or `agent --list-models`.
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Health check |
| `/v1/models` | GET | List models |
| `/v1/chat/completions` | POST | Chat completion (supports `stream: true`) |
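Because `/v1/chat/completions` speaks the standard OpenAI wire format, the request body can be built by hand. A minimal sketch (the payload values are illustrative):

```python
import json

# Minimal OpenAI-format request body for POST /v1/chat/completions.
# Set "stream": True to receive server-sent-event chunks instead of
# one final JSON response.
payload = {
    "model": "auto",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}

body = json.dumps(payload)
print(body)
```

Any HTTP client that can POST this JSON to the proxy will work; the official SDKs just wrap this same payload.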
| Env Variable | Default | Description |
|---|---|---|
| `PORT` | `4646` | Listen port (or `cursor-agent-api start 8080`) |
| `CURSOR_API_KEY` | - | Alternative to `agent login` |
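The fallback behavior the table describes can be sketched in a few lines; this is an illustrative reconstruction, not the proxy's actual source:

```python
import os

# PORT env var if set, otherwise the documented default 4646.
port = int(os.environ.get("PORT", "4646"))

# CURSOR_API_KEY is optional: when present it is forwarded instead of
# the credentials stored by `agent login` (None = use agent login).
api_key = os.environ.get("CURSOR_API_KEY")

print(port, api_key)
```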
To start the proxy automatically on system boot:

```sh
cursor-agent-api install    # register as a system service
cursor-agent-api uninstall  # remove it
```

- macOS → LaunchAgent
- Windows → Task Scheduler
- Linux → systemd user service
Python (openai SDK):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4646/v1",
    api_key="not-needed",
)

resp = client.chat.completions.create(
    model="auto",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```

Continue.dev:
```json
{
  "models": [{
    "title": "Cursor",
    "provider": "openai",
    "model": "auto",
    "apiBase": "http://localhost:4646/v1",
    "apiKey": "not-needed"
  }]
}
```

curl:
```sh
curl -X POST http://localhost:4646/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"auto","messages":[{"role":"user","content":"Hello!"}]}'
```

Request flow:

```
Client → POST /v1/chat/completions (OpenAI format)
  → cursor-agent-api-proxy
  → spawn agent CLI (stream-json)
  → Cursor subscription
  → AI response → OpenAI format → Client
```
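The final translation step above, turning agent output back into OpenAI format, can be sketched roughly. This is an illustrative reconstruction, not the proxy's actual code; it only shows the standard `chat.completion.chunk` SSE shape that OpenAI streaming clients expect:

```python
import json
import time

def to_openai_chunk(text: str, model: str = "auto") -> str:
    """Wrap one piece of assistant text in an OpenAI-style streaming
    chunk, serialized as a single server-sent-event line."""
    chunk = {
        "id": "chatcmpl-proxy",  # illustrative id, not the proxy's real one
        "object": "chat.completion.chunk",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {"index": 0, "delta": {"content": text}, "finish_reason": None}
        ],
    }
    return f"data: {json.dumps(chunk)}\n\n"

print(to_openai_chunk("Hello"))
```

Emitting one such line per piece of agent output is what lets any OpenAI client consume the stream with `stream: true`.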
```sh
git clone https://github.com/tageecc/cursor-agent-api-proxy.git
cd cursor-agent-api-proxy
pnpm install && pnpm run build
pnpm start
```

License: MIT