This directory contains examples for using Shannon's OpenAI-compatible API with various SDKs and tools.
## Quick Start

1. Get your API key from Shannon.

2. Set the environment variable:

   ```bash
   export SHANNON_API_KEY="sk-shannon-your-api-key"
   ```

3. Run an example:

   ```bash
   # Python
   python python_example.py

   # LangChain
   python langchain_example.py

   # cURL
   ./curl_examples.sh
   ```
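All three examples send the same OpenAI-format request under the hood. A minimal sketch of the payload that `/v1/chat/completions` expects, using only the standard library (`build_chat_request` is an illustrative helper, not part of any SDK):

```python
import json
import os

# Illustrative helper: assemble the headers and body for a
# /v1/chat/completions request in the OpenAI wire format.
def build_chat_request(message, model="shannon-chat"):
    api_key = os.environ.get("SHANNON_API_KEY", "sk-shannon-your-api-key")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": message}],
    })
    return headers, body

headers, body = build_chat_request("Hello, Shannon!")
print(json.loads(body)["model"])  # shannon-chat
```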
## python_example.py

Uses the official OpenAI Python SDK to demonstrate:
- Listing available models
- Simple chat completions
- Streaming responses
- Multi-turn conversations
- Deep research queries
Requirements:

```bash
pip install openai
```

## langchain_example.py

Uses LangChain with Shannon for:
- Simple invocations
- Message objects
- Prompt templates
- Streaming
- Multi-step chains
Requirements:

```bash
pip install langchain-openai langchain
```

## curl_examples.sh

Shell script demonstrating raw API calls:
- Model listing
- Non-streaming completions
- Streaming with real-time output
- Session management
- Rate limit headers
- Shannon Events (agent thinking, progress)
Requirements:

```bash
# jq for JSON parsing (required for this script)
brew install jq    # macOS
apt install jq     # Ubuntu/Debian
```

## Shannon Events

Shannon extends the OpenAI streaming format with agent lifecycle events in the `shannon_events` field:
```json
{
  "choices": [{"delta": {}}],
  "shannon_events": [
    {"type": "AGENT_THINKING", "agent_id": "Ryogoku", "message": "Analyzing..."}
  ]
}
```

Consuming these events with `httpx`:

```python
import httpx
import json

async def stream_with_events(message: str):
    async with httpx.AsyncClient() as client:
        async with client.stream(
            "POST",
            "http://localhost:8080/v1/chat/completions",
            headers={"Authorization": "Bearer sk_..."},
            json={
                "model": "shannon-deep-research",
                "messages": [{"role": "user", "content": message}],
                "stream": True,
            },
        ) as response:
            async for line in response.aiter_lines():
                if line.startswith("data: ") and line != "data: [DONE]":
                    chunk = json.loads(line[6:])
                    # Handle content
                    if delta := chunk["choices"][0].get("delta", {}).get("content"):
                        print(delta, end="", flush=True)
                    # Handle Shannon events
                    for event in chunk.get("shannon_events", []):
                        print(f"\n[{event['type']}] {event.get('agent_id', '')}: {event.get('message', '')}")
```

### Event Types

| Type | Description |
|---|---|
| `WORKFLOW_STARTED` | Task begins |
| `AGENT_STARTED` | Agent activates |
| `AGENT_THINKING` | Agent reasoning |
| `PROGRESS` | Step updates |
| `TOOL_INVOKED` | Tool called |
| `AGENT_COMPLETED` | Agent done |
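Events of every type carry the same optional fields, so they can be rendered uniformly. A small sketch (the `format_event` helper and its output format are illustrative; the field names match the `shannon_events` payload shown earlier):

```python
# Illustrative helper: render any Shannon event as one log line,
# tolerating missing agent_id/message fields.
def format_event(event):
    etype = event.get("type", "UNKNOWN")
    agent = event.get("agent_id", "")
    message = event.get("message", "")
    return f"[{etype}] {agent}: {message}".strip()

chunk = {
    "choices": [{"delta": {}}],
    "shannon_events": [
        {"type": "AGENT_THINKING", "agent_id": "Ryogoku", "message": "Analyzing..."},
        {"type": "PROGRESS", "message": "Step 2 of 5"},
    ],
}
lines = [format_event(e) for e in chunk.get("shannon_events", [])]
for line in lines:
    print(line)  # e.g. [AGENT_THINKING] Ryogoku: Analyzing...
```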
## Available Models

| Model | Description | Best For |
|---|---|---|
| `shannon-chat` | General chat (default) | Conversational AI |
| `shannon-quick-research` | Fast research | Quick fact-finding |
| `shannon-deep-research` | Comprehensive research | In-depth analysis |
## Configuration

Environment variables:

- `SHANNON_API_KEY` - Your API key (required)
- `SHANNON_BASE_URL` - API base URL (default: `https://api.shannon.run/v1`)
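A sketch of resolving these variables in Python; the default mirrors the one listed above, and `shannon_config` is an illustrative helper, not part of any SDK:

```python
import os

# Illustrative helper: read Shannon connection settings, failing fast
# when the required API key is missing.
def shannon_config(env=None):
    env = os.environ if env is None else env
    api_key = env.get("SHANNON_API_KEY")
    if not api_key:
        raise RuntimeError("SHANNON_API_KEY is required")
    base_url = env.get("SHANNON_BASE_URL", "https://api.shannon.run/v1")
    return {"api_key": api_key, "base_url": base_url}

cfg = shannon_config({"SHANNON_API_KEY": "sk-shannon-demo"})
print(cfg["base_url"])  # https://api.shannon.run/v1
```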
For local development:

```bash
export SHANNON_BASE_URL="http://localhost:8080/v1"
```

## Rate Limits

The API includes rate limit headers in responses:
- `X-RateLimit-Limit-Requests` - Max requests per minute
- `X-RateLimit-Remaining-Requests` - Remaining requests
- `X-RateLimit-Reset-Requests` - Reset timestamp
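A sketch of reading these headers from a response's header mapping; the header names are the ones listed above, while the helper name and sample values are made up:

```python
# Illustrative helper: extract rate-limit state from response headers.
def parse_rate_limit(headers):
    return {
        "limit": int(headers["X-RateLimit-Limit-Requests"]),
        "remaining": int(headers["X-RateLimit-Remaining-Requests"]),
        "reset": headers["X-RateLimit-Reset-Requests"],
    }

sample = {
    "X-RateLimit-Limit-Requests": "60",
    "X-RateLimit-Remaining-Requests": "59",
    "X-RateLimit-Reset-Requests": "2025-01-01T00:01:00Z",
}
info = parse_rate_limit(sample)
print(info["remaining"])  # 59
```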
## Sessions

For multi-turn conversations, include the `X-Session-ID` header:

```python
response = client.chat.completions.create(
    model="shannon-chat",
    messages=[...],
    extra_headers={"X-Session-ID": "my-session-123"},
)
```
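Reusing the same header value on every call is what links the turns together. A minimal sketch of a session wrapper (the `ShannonSession` class and its ID scheme are illustrative, not part of any SDK):

```python
import uuid

# Illustrative wrapper: mint one session ID and reuse it for every
# turn of a conversation so the server can link the requests.
class ShannonSession:
    def __init__(self, session_id=None):
        self.session_id = session_id or f"session-{uuid.uuid4().hex[:12]}"

    def headers(self):
        return {"X-Session-ID": self.session_id}

session = ShannonSession("my-session-123")
first_turn = session.headers()
second_turn = session.headers()
print(first_turn == second_turn)  # True
```

Pass `session.headers()` as `extra_headers` on each `client.chat.completions.create` call for that conversation.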