diff --git a/README.md b/README.md
index 745062a..527ff18 100644
--- a/README.md
+++ b/README.md
@@ -1,25 +1,11 @@
-# Mintlify Starter Kit
-
-Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including
-
-- Guide pages
-- Navigation
-- Customizations
-- API Reference pages
-- Use of popular components
+# Chainlit documentation
### 👩💻 Development
-Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command
-
-```
-npm i -g mintlify
-```
-
-Run the following command at the root of your documentation (where mint.json is)
+Run the following command at the root of your documentation (where docs.json is)
```
-mintlify dev
+npx mint dev
```
### 😎 Publishing Changes
@@ -30,5 +16,4 @@ You can also preview changes using PRs, which generates a preview link of the do
#### Troubleshooting
-- Mintlify dev isn't running - Run `mintlify install` it'll re-install dependencies.
-- Page loads as a 404 - Make sure you are running in a folder with `mint.json`
+See the [documentation](https://mintlify.com/docs/quickstart#troubleshooting).
\ No newline at end of file
diff --git a/advanced-features/ask-user.mdx b/advanced-features/ask-user.mdx
index 92f65ad..40a60db 100644
--- a/advanced-features/ask-user.mdx
+++ b/advanced-features/ask-user.mdx
@@ -2,7 +2,7 @@
title: "Ask User"
---
-The ask APIs prompt the user for input. Depending on the API, the user input can be a string, a file, or pick an action.
+The ask APIs prompt the user for input. Depending on the API, the user can provide a string, upload a file, pick an action, or fill a form.
Until the user provides an input, both the UI and your code will be blocked.
@@ -37,4 +37,34 @@ Until the user provides an input, both the UI and your code will be blocked.
>
Ask the user to pick an action.
+
+ Ask the user to complete a custom form.
+
+
+## Interactive Consent-Gated Forms
+
+The `AskElementMessage` API enables agents to send interactive, consent-gated UI components to users. This feature is particularly useful for:
+
+- **Compliance workflows** where explicit user consent is required
+- **Data review** scenarios where users need to review and modify AI-generated data
+- **Form completion** with pre-filled values for user confirmation
+- **Audit trails** for sensitive operations
+
+The flow works as follows:
+
+1. **Agent** calls a consent-gated tool (e.g., expense logging API)
+2. Backend sends a **CustomElement** to the frontend with editable fields and timeout
+3. **User** modifies or confirms the pre-filled values and submits
+4. Backend receives the **updated props** and proceeds with the tool call using user-approved data
+
+This pattern blocks further chat interactions until user input is received, preventing ambiguous or unauthorized actions.
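+
+A minimal sketch of this flow using the `AskElementMessage` API (the `ExpenseForm` element name and its props are illustrative; see the `AskElementMessage` API reference for a complete example):
+
+```python
+import chainlit as cl
+
+
+@cl.on_chat_start
+async def start():
+    # Custom element with pre-filled, editable fields (names and values are illustrative)
+    element = cl.CustomElement(
+        name="ExpenseForm",
+        props={"amount": 42.50, "category": "Travel", "timeout": 30},
+    )
+    # Blocks the chat until the user submits the form or the timeout expires
+    res = await cl.AskElementMessage(
+        content="Please review and confirm this expense:",
+        element=element,
+        timeout=30,
+    ).send()
+    if res and res.get("submitted"):
+        # Proceed with the tool call using the user-approved values
+        await cl.Message(content=f"Expense of {res.get('amount')} logged.").send()
+```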
+
+
+
+
\ No newline at end of file
diff --git a/advanced-features/mcp.mdx b/advanced-features/mcp.mdx
new file mode 100644
index 0000000..906f757
--- /dev/null
+++ b/advanced-features/mcp.mdx
@@ -0,0 +1,189 @@
+---
+title: "MCP"
+description: Model Context Protocol (MCP) allows you to integrate external tool providers with your Chainlit application. This enables your AI models to use tools through standardized interfaces.
+---
+
+## Overview
+
+MCP provides a mechanism for Chainlit applications to connect to remote services over server-sent events (SSE) or streamable HTTP, or to local command-line (stdio) based tools. Once connected, your application can discover available tools, execute them, and integrate their responses into your application's flow.
+
+
+  End-to-end cookbook example showcasing MCP tool calling with Claude.
+
+
+
+
+
+
+### Contact us for Enterprise-Ready MCP
+
+We're working with companies to create their MCP stacks, enabling AI agents to consume their data and context in standardized ways. Fill out this [form](https://docs.google.com/forms/d/e/1FAIpQLSdObSIeIFt4nHppZ6r2rIoEe-jZRo4CqxbmRKKgb-ZsSPONnQ/viewform?usp=dialog).
+
+## Connection Types
+
+| WebSockets | HTTP+SSE | Streamable HTTP | stdio |
+| ---------- | -------- | --------------- | ----- |
+| ❌ | ✅ | ✅ | ✅ |
+
+Chainlit supports three types of MCP connections:
+
+1. **SSE (Server-Sent Events)**: Connect to a remote service via HTTP
+2. **Streamable HTTP**: Send HTTP requests to a server and receive JSON responses or connect using SSE streams
+3. **stdio**: Execute a local command and communicate via standard I/O
+
+> ⚠️ **Security Warning**: The stdio connection type spawns actual subprocesses on the Chainlit server. Only use this with trusted commands in controlled environments. Ensure proper validation of user inputs to prevent command injection vulnerabilities.
+
+**Command Availability Warning**: When using the stdio connection type with commands like `npx` or `uvx`, these commands must be available on the Chainlit server where the application is running. The subprocess is executed on the server, not on the client machine.
+
+### Server-Side Configuration (`config.toml`)
+
+You can control which MCP connection types are enabled globally and restrict the allowed stdio commands by modifying your project's `config.toml` file (usually located at the root of your project or in `.chainlit/config.toml`).
+
+Under the `[features.mcp]` section, you can configure SSE, Streamable HTTP and stdio separately:
+
+```toml
+[features]
+# ... other feature flags
+
+[features.mcp.sse]
+ # Enable or disable the SSE connection type globally
+ enabled = true
+
+[features.mcp.streamable-http]
+ # Enable or disable the Streamable HTTP connection type globally
+ enabled = true
+
+[features.mcp.stdio]
+ # Enable or disable the stdio connection type globally
+ enabled = true
+ # Define an allowlist of executables for the stdio type.
+ # Only the base names of executables listed here can be used.
+ # This is a crucial security measure for stdio connections.
+ # Example: allows running `npx ...` and `uvx ...` but blocks others.
+ allowed_executables = [ "npx", "uvx" ]
+```
+
+## Setup
+
+### 1. Register Connection Handlers
+
+To use MCP in your Chainlit application, you need to implement the `on_mcp_connect` handler. The `on_mcp_disconnect` handler is optional but recommended for proper cleanup.
+
+```python
+import chainlit as cl
+from mcp import ClientSession
+
+@cl.on_mcp_connect
+async def on_mcp_connect(connection, session: ClientSession):
+ """Called when an MCP connection is established"""
+ # Your connection initialization code here
+ # This handler is required for MCP to work
+
+@cl.on_mcp_disconnect
+async def on_mcp_disconnect(name: str, session: ClientSession):
+ """Called when an MCP connection is terminated"""
+ # Your cleanup code here
+ # This handler is optional
+```
+
+### 2. Client Configuration
+
+The client needs to provide the connection details through the Chainlit interface. This includes:
+
+- Connection name (unique identifier)
+- Client type (`sse`, `streamable-http` or `stdio`)
+- For SSE and Streamable HTTP: URL endpoint
+- For stdio: Full command (e.g., `npx your-tool-package` or `uvx your-tool-package`)
+
+
+
+
+
+## Working with MCP Connections
+
+### Retrieving Available Tools
+
+Upon connection, you can discover the available tools provided by the MCP service:
+
+```python
+@cl.on_mcp_connect
+async def on_mcp(connection, session: ClientSession):
+ # List available tools
+ result = await session.list_tools()
+
+ # Process tool metadata
+ tools = [{
+ "name": t.name,
+ "description": t.description,
+ "input_schema": t.inputSchema,
+ } for t in result.tools]
+
+ # Store tools for later use
+ mcp_tools = cl.user_session.get("mcp_tools", {})
+ mcp_tools[connection.name] = tools
+ cl.user_session.set("mcp_tools", mcp_tools)
+```
+
+### Executing Tools
+
+You can execute tools using the MCP session:
+
+```python
+@cl.step(type="tool")
+async def call_tool(tool_use):
+ tool_name = tool_use.name
+ tool_input = tool_use.input
+
+ # Find appropriate MCP connection for this tool
+ mcp_name = find_mcp_for_tool(tool_name)
+
+ # Get the MCP session
+ mcp_session, _ = cl.context.session.mcp_sessions.get(mcp_name)
+
+ # Call the tool
+ result = await mcp_session.call_tool(tool_name, tool_input)
+
+ return result
+```
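+
+The snippet above assumes a `find_mcp_for_tool` helper. One possible implementation (a sketch, not part of the Chainlit API) looks the tool name up in the `mcp_tools` dictionary stored in the user session earlier:
+
+```python
+def find_mcp_for_tool(tool_name: str) -> str:
+    """Return the name of the MCP connection that exposes the given tool."""
+    mcp_tools = cl.user_session.get("mcp_tools", {})
+    for connection_name, tools in mcp_tools.items():
+        if any(t["name"] == tool_name for t in tools):
+            return connection_name
+    raise ValueError(f"No MCP connection found for tool {tool_name}")
+```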
+
+## Integrating with LLMs
+
+MCP tools can be seamlessly integrated with LLMs that support tool calling:
+
+```python
+async def call_model_with_tools():
+ # Get tools from all MCP connections
+ mcp_tools = cl.user_session.get("mcp_tools", {})
+ all_tools = [tool for connection_tools in mcp_tools.values() for tool in connection_tools]
+
+ # Call your LLM with the tools
+ response = await your_llm_client.call(
+ messages=messages,
+ tools=all_tools
+ )
+
+ # Handle tool calls if needed
+ if response.has_tool_calls():
+ # Process tool calls
+ pass
+
+ return response
+```
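+
+For example, with the Anthropic SDK the call could look like this (a sketch, assuming a `message_history` list maintained elsewhere; the model name is illustrative, and the tool metadata collected above already matches the `name`/`description`/`input_schema` format Claude expects):
+
+```python
+from anthropic import AsyncAnthropic
+
+anthropic_client = AsyncAnthropic()
+
+
+async def call_claude(message_history):
+    # Gather the tools from every active MCP connection
+    mcp_tools = cl.user_session.get("mcp_tools", {})
+    all_tools = [tool for connection_tools in mcp_tools.values() for tool in connection_tools]
+
+    response = await anthropic_client.messages.create(
+        model="claude-3-5-sonnet-latest",  # illustrative model name
+        max_tokens=1024,
+        messages=message_history,
+        tools=all_tools,
+    )
+
+    # If Claude decided to use a tool, dispatch it through the matching MCP session
+    if response.stop_reason == "tool_use":
+        tool_use = next(block for block in response.content if block.type == "tool_use")
+        await call_tool(tool_use)
+
+    return response
+```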
+
+## Session Management
+
+MCP connections are managed at the session level. Each WebSocket session can have multiple named MCP connections. The connections are cleaned up when:
+
+1. The user explicitly disconnects
+2. The same connection name is reused (old connection is replaced)
+3. The WebSocket session ends
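+
+If you store per-connection state (such as the `mcp_tools` dictionary used above), the optional `on_mcp_disconnect` handler is a natural place to drop it when any of these events occurs. A minimal sketch building on the earlier examples:
+
+```python
+@cl.on_mcp_disconnect
+async def on_mcp_disconnect(name: str, session: ClientSession):
+    # Remove the tools registered for this connection so the LLM no longer sees them
+    mcp_tools = cl.user_session.get("mcp_tools", {})
+    if name in mcp_tools:
+        del mcp_tools[name]
+        cl.user_session.set("mcp_tools", mcp_tools)
+```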
\ No newline at end of file
diff --git a/advanced-features/multi-modal.mdx b/advanced-features/multi-modal.mdx
index 8c0ea92..44f7d49 100644
--- a/advanced-features/multi-modal.mdx
+++ b/advanced-features/multi-modal.mdx
@@ -13,20 +13,33 @@ Chainlit let's you access the user's microphone audio stream and process it in r
[@cl.on_audio_chunk](/api-reference/lifecycle-hooks/on-audio-chunk) decorator.
-
+
+
+ Cookbook example showcasing how to use Chainlit with realtime audio APIs.
+
+
+ Cookbook example showcasing speech to text -> answer generation -> text to speech.
+
+
+
+
-Check the [Audio Assistant](https://github.com/Chainlit/cookbook/tree/main/audio-assistant) cookbook example to see how to implement a voice assistant.
-
-### Audio capture settings
-
-You can configure audio capture the au through the Chainlit [config](/backend/config/features) file.
-
## Spontaneous File Uploads
Within the Chainlit application, users have the flexibility to attach any file to their messages. This can be achieved either by utilizing the drag and drop feature or by clicking on the `attach` button located in the chat bar.
@@ -58,10 +71,6 @@ async def on_message(msg: cl.Message):
```
-### Image Processing with Transformers
-
-Multi-modal capabilities are being added to Large Language Model (effectively making them Large Multi Modal Models). OpenAI's [vision API](https://platform.openai.com/docs/guides/vision) and the [LLaVa](https://github.com/Chainlit/cookbook/tree/main/llava) cookbook are good places to start for image processing with transformers.
-
### Disabling Spontaneous File Uploads
If you wish to disable this feature (which would prevent users from attaching files to their messages), you can do so by setting `features.spontaneous_file_upload.enabled=false` in your Chainlit [config](/backend/config/features) file.
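+
+For reference, the corresponding `config.toml` section would look something like this (a minimal sketch showing only the relevant key):
+
+```toml
+[features.spontaneous_file_upload]
+    enabled = false
+```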
diff --git a/advanced-features/streaming.mdx b/advanced-features/streaming.mdx
index 4141104..e5a0805 100644
--- a/advanced-features/streaming.mdx
+++ b/advanced-features/streaming.mdx
@@ -39,7 +39,6 @@ async def main(message: cl.Message):
message_history.append({"role": "user", "content": message.content})
msg = cl.Message(content="")
- await msg.send()
stream = await client.chat.completions.create(
messages=message_history, stream=True, **settings
diff --git a/api-reference/action.mdx b/api-reference/action.mdx
index ff46752..e21c0dd 100644
--- a/api-reference/action.mdx
+++ b/api-reference/action.mdx
@@ -7,28 +7,26 @@ The `Action` class is designed to create and manage actions to be sent and displ
## Attributes
- Name of the action, this should be used in the action_callback
+  Name of the action. This should match the action callback.
-
- The value associated with the action. This is useful to differentiate between
- multiple actions with the same name.
+
+ The payload associated with the action.
+
+
+
+ The lucide icon name for the action button. See https://lucide.dev/icons/.
- The label of the action. This is what the user will see. If not provided the
- name will be used.
+  The label of the action. This is what the user will see. If neither a label nor an icon is provided, the name is displayed as a fallback.
-
+
The description of the action. This is what the user will see when they hover
the action.
-
- Show the action in a drawer menu
-
-
## Usage
```python
@@ -44,7 +42,7 @@ async def on_action(action):
async def start():
# Sending an action button within a chatbot message
actions = [
- cl.Action(name="action_button", value="example_value", description="Click me!")
+ cl.Action(name="action_button", payload={"value": "example_value"}, label="Click me!")
]
await cl.Message(content="Interact with this action button:", actions=actions).send()
diff --git a/api-reference/ask/ask-for-action.mdx b/api-reference/ask/ask-for-action.mdx
index e657d83..a083018 100644
--- a/api-reference/ask/ask-for-action.mdx
+++ b/api-reference/ask/ask-for-action.mdx
@@ -1,5 +1,5 @@
---
-title: "AskUserAction"
+title: "AskActionMessage"
---
Ask for the user to take an action before continuing.
@@ -14,23 +14,20 @@ If a project ID is configured, the messages will be uploaded to the cloud storag
The list of [Action](/api-reference/action) to prompt the user.
-
+
The author of the message, defaults to the chatbot name defined in your
config.
-
- The number of seconds to wait for an answer before raising a TimeoutError.
-
The number of seconds to wait for an answer before raising a TimeoutError.
-
+
Whether to raise a socketio TimeoutError if the user does not answer in time.
### Returns
-
+
The response of the user.
@@ -45,12 +42,12 @@ async def main():
res = await cl.AskActionMessage(
content="Pick an action!",
actions=[
- cl.Action(name="continue", value="continue", label="✅ Continue"),
- cl.Action(name="cancel", value="cancel", label="❌ Cancel"),
+ cl.Action(name="continue", payload={"value": "continue"}, label="✅ Continue"),
+ cl.Action(name="cancel", payload={"value": "cancel"}, label="❌ Cancel"),
],
).send()
- if res and res.get("value") == "continue":
+ if res and res.get("payload").get("value") == "continue":
await cl.Message(
content="Continue!",
).send()
diff --git a/api-reference/ask/ask-for-element.mdx b/api-reference/ask/ask-for-element.mdx
new file mode 100644
index 0000000..557733f
--- /dev/null
+++ b/api-reference/ask/ask-for-element.mdx
@@ -0,0 +1,183 @@
+---
+title: "AskElementMessage"
+---
+
+Ask for the user to complete a custom element (fill a form) before continuing.
+This allows agents to send interactive, consent-gated UI components to the front end, let users review or edit their values, and submit them back to the backend.
+
+If the user does not answer in time (see timeout), a `TimeoutError` will be raised or `None` will be returned, depending on the `raise_on_timeout` parameter.
+If a project ID is configured, the messages will be uploaded to the cloud storage.
+
+### Attributes
+
+
+ The content of the message.
+
+
+  The [CustomElement](/api-reference/elements/custom) to display to the user for interaction.
+
+
+ The author of the message, defaults to the chatbot name defined in your
+ config.
+
+
+ The number of seconds to wait for an answer before raising a TimeoutError.
+
+
+ Whether to raise a socketio TimeoutError if the user does not answer in time.
+
+
+### Returns
+
+
+ The response from the user containing the submitted element data.
+
+
+### Example
+
+#### Backend: Ask To Fill Jira Ticket Form
+
+```python
+import chainlit as cl
+
+
+@cl.on_chat_start
+async def on_start():
+ element = cl.CustomElement(
+ name="JiraTicket",
+ display="inline",
+ props={
+ "timeout": 20,
+ "fields": [
+ {"id": "summary", "label": "Summary", "type": "text", "required": True},
+ {"id": "description", "label": "Description", "type": "textarea"},
+ {
+ "id": "due",
+ "label": "Due Date",
+ "type": "date",
+ },
+ {
+ "id": "priority",
+ "label": "Priority",
+ "type": "select",
+ "options": ["Low", "Medium", "High"],
+ "value": "Medium",
+ "required": True,
+ },
+ ],
+ },
+ )
+ res = await cl.AskElementMessage(
+ content="Create a new Jira ticket:", element=element, timeout=10
+ ).send()
+ if res and res.get("submitted"):
+ await cl.Message(
+ content=f"Ticket '{res['summary']}' with priority {res['priority']} submitted"
+ ).send()
+```
+
+#### Frontend: Jira Ticket Custom Element Implementation
+
+The custom element should be implemented as a React component that handles form submission. Here's an example implementation of the JiraTicket component:
+
+```jsx
+import { Button } from "@/components/ui/button";
+import { Card, CardContent, CardDescription, CardFooter, CardHeader, CardTitle } from "@/components/ui/card";
+import { Input } from "@/components/ui/input";
+import { Label } from "@/components/ui/label";
+import { Select, SelectContent, SelectItem, SelectTrigger, SelectValue } from "@/components/ui/select";
+import { Textarea } from "@/components/ui/textarea";
+import React, { useEffect, useMemo, useState } from 'react';
+
+export default function JiraTicket() {
+ const [timeLeft, setTimeLeft] = useState(props.timeout || 30);
+ const [values, setValues] = useState(() => {
+ const init = {};
+ (props.fields || []).forEach((f) => {
+ init[f.id] = f.value || '';
+ });
+ return init;
+ });
+
+ const allValid = useMemo(() => {
+ if (!props.fields) return true;
+ return props.fields.every((f) => {
+ if (!f.required) return true;
+ const val = values[f.id];
+ return val !== undefined && val !== '';
+ });
+ }, [props.fields, values]);
+
+ useEffect(() => {
+ const interval = setInterval(() => {
+ setTimeLeft((t) => (t > 0 ? t - 1 : 0));
+ }, 1000);
+ return () => clearInterval(interval);
+ }, []);
+
+ const handleChange = (id, val) => {
+ setValues((v) => ({ ...v, [id]: val }));
+ };
+
+ const renderField = (field) => {
+ const value = values[field.id];
+ switch (field.type) {
+ case 'textarea':
+ return