A local execution environment for LLM-produced TypeScript snippets. A long-lived Node.js "kernel" preloads internal dependencies and listens on a socket (Unix or TCP). The run-code CLI connects to that kernel, streams one JSON request, and prints the JSON response so agents or humans can shell out without worrying about project setup.
- Kernel (`src/kernel.ts`) – boots once per dev machine, keeps shared libraries (AWS SDK, `@company/my-lib`) warm, and executes transpiled snippets inside the same Node process.
- Runner (`src/exec/runner.ts`) – pure logic that wraps the snippet body, transpiles TS → JS, and runs it with injected globals.
- CLI (`src/cli/run-code.ts`) – tiny tool that accepts `--code`/`--file` and an optional JSON `--input`, and forwards the request to the kernel via JSONL over a socket.
The contract for LLM code is simple: provide only the body of `export async function main(input) { … }`, use the injected globals, and return a JSON-serialisable value.
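Concretely, after the kernel prepends the wrapper, a two-line snippet body runs as something equivalent to the function below (illustrative — the exact wrapping and transpilation live in `src/exec/runner.ts`):

```typescript
// What the kernel effectively executes; an LLM supplies only
// the two lines inside the function body.
async function main(input: { name: string }): Promise<{ greeting: string }> {
  const greeting = `hi ${input.name}`;
  return { greeting };
}
```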
```sh
npm install
npm run build
node dist/kernel.js   # or: npm run start:kernel
```

By default the kernel listens on `/tmp/llm-kernel.sock`. Override with `LLM_KERNEL_SOCKET=/tmp/custom.sock`, or switch to TCP by setting `LLM_KERNEL_SOCKET=tcp://127.0.0.1:9000`.
Once the kernel is running, invoke run-code directly or from tooling:
```sh
run-code \
  --code 'return { greeting: `hi ${input.name}` };' \
  --input '{"name":"Panagiotis"}' \
  --socket /tmp/llm-kernel.sock \
  --timeout 5000
```

Note the single quotes around the `--code` value: double quotes would let the shell treat the backticks as command substitution and expand `${input.name}` before the snippet ever reaches the kernel.

Available flags:
- `--code <ts>` – inline TypeScript body of `async main`.
- `--file <path>` – read the body from a file (mutually exclusive with `--code`).
- `--input <json>` – JSON passed as `input`; defaults to `{}`.
- `--socket <path|tcp://host:port|host:port|port>` – socket target; falls back to `LLM_KERNEL_SOCKET` or `/tmp/llm-kernel.sock`.
- `--timeout <ms>` – per-request execution timeout (overrides kernel default).
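The accepted `--socket` forms imply a parse order roughly like the sketch below. The real helpers live in `src/types/endpoint.ts`; this reimplementation is an assumption, not the actual code (it also ignores edge cases such as Windows paths containing colons):

```typescript
type Endpoint =
  | { kind: "unix"; path: string }
  | { kind: "tcp"; host: string; port: number };

// Accepts: tcp://host:port, host:port, a bare port, or a Unix socket path.
function parseEndpoint(spec: string): Endpoint {
  const tcp = spec.match(/^tcp:\/\/([^:]+):(\d+)$/);
  if (tcp) return { kind: "tcp", host: tcp[1], port: Number(tcp[2]) };
  if (/^\d+$/.test(spec)) return { kind: "tcp", host: "127.0.0.1", port: Number(spec) };
  const hostPort = spec.match(/^([^/:]+):(\d+)$/);
  if (hostPort) return { kind: "tcp", host: hostPort[1], port: Number(hostPort[2]) };
  return { kind: "unix", path: spec }; // anything else is treated as a path
}
```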
Success responses print prettified JSON to stdout with exit code 0. Failures print the error/stack to stderr and exit 1.
Every request and response is a single JSON object terminated by `\n`.
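A hedged sketch of that framing follows. The field names (`code`, `input`, `timeoutMs`) are inferred from the CLI flags and timeout docs; the authoritative shapes live in `src/types/protocol.ts` and may differ:

```typescript
interface KernelRequest {
  code: string;       // snippet body (assumed field name)
  input?: unknown;    // JSON input (assumed field name)
  timeoutMs?: number; // per-request timeout override
}

// One JSON object per line: serialise and append "\n" on send…
function encodeRequest(req: KernelRequest): string {
  return JSON.stringify(req) + "\n";
}

// …and parse a single received line back into an object.
function decodeLine<T = unknown>(line: string): T {
  return JSON.parse(line.trim()) as T;
}
```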
- Snippets get the body of `export async function main(input) { … }` prepended automatically.
- No `import`/`export` statements are allowed; use the injected globals instead:
  - `AwsS3` → namespace import of `@aws-sdk/client-s3`
  - `MyLib` → stubbed internal helper library (`packages/my-lib`)
- Return values must be JSON-safe; thrown errors become `{ ok: false }` replies.
- Configure timeouts with `LLM_KERNEL_TIMEOUT_MS` (process-wide) or per request via `timeoutMs`/`--timeout`.
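The wrap-and-run step can be pictured with the sketch below. It skips the TS → JS transpile that `src/exec/runner.ts` performs (so the body here must already be plain JavaScript), and the parameter-injection approach to globals is an assumption:

```typescript
type Globals = Record<string, unknown>;

// Wraps a snippet body in an async function, injecting `input` plus any
// named globals (e.g. AwsS3, MyLib) as function parameters.
async function runSnippet(
  body: string,
  input: unknown,
  globals: Globals = {},
): Promise<unknown> {
  const names = Object.keys(globals);
  const values = names.map((n) => globals[n]);
  const fn = new Function("input", ...names, `return (async () => { ${body} })();`);
  return fn(input, ...values);
}
```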
```
src/
├─ kernel.ts           # socket server, request loop, timeout handling
├─ kernel.test.ts      # kernel integration tests (skipped if sockets unavailable)
├─ cli/
│  ├─ run-code.ts      # CLI implementation
│  └─ run-code.test.ts # CLI integration tests
├─ exec/
│  ├─ runner.ts        # TS transpile + execution helper
│  └─ runner.test.ts   # runner unit tests
└─ types/
   ├─ endpoint.ts      # Unix/TCP endpoint parsing helpers
   └─ protocol.ts      # shared request/response types + guards
packages/
└─ my-lib              # stub of the internal library exposed to snippets
```
```sh
npm test   # Vitest (unit + integration)
```

The kernel/CLI integration suites automatically skip if the environment forbids opening sockets (common in CI sandboxes). On a normal developer machine they spin up ephemeral TCP ports and exercise the real protocol end to end.
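The skip condition presumably boils down to a probe like this sketch (not the suites' actual helper):

```typescript
import net from "node:net";

// Resolves true if we can bind an ephemeral TCP port on loopback,
// false if the sandbox forbids it — mirroring the suites' skip logic.
function canOpenSockets(): Promise<boolean> {
  return new Promise((resolve) => {
    const srv = net.createServer();
    srv.once("error", () => resolve(false));
    srv.listen(0, "127.0.0.1", () => srv.close(() => resolve(true)));
  });
}
```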
| Setting | Description |
|---|---|
| `LLM_KERNEL_SOCKET` | Socket to listen/connect on. Unix path or `tcp://host:port`. |
| `LLM_KERNEL_TIMEOUT_MS` | Default timeout enforced by the kernel (optional). |
| CLI `--timeout` | Overrides the timeout for a single request. |
- Hard cancellation (kill hung snippets after timeout instead of only reporting).
- Structured logs / metrics for kernel activity.
- Additional preloaded libraries or per-team presets.