diff --git a/docs/docs/API-Reference/api-files.mdx b/docs/docs/API-Reference/api-files.mdx
index 48b8abfb4856..5c1a04a6ac97 100644
--- a/docs/docs/API-Reference/api-files.mdx
+++ b/docs/docs/API-Reference/api-files.mdx
@@ -5,6 +5,9 @@ slug: /api-files
Use the `/files` endpoints to move files between your local machine and Langflow.
+All `/files` endpoints (both `/v1/files` and `/v2/files`) require authentication with a Langflow API key.
+You can only access files that belong to your own user account, even as a superuser.
+
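+For example, a minimal sketch of listing your files with the v2 endpoint, passing your API key in the `x-api-key` header:
+
+```bash
+curl -X GET \
+  "$LANGFLOW_SERVER_URL/api/v2/files" \
+  -H "x-api-key: $LANGFLOW_API_KEY"
+```
+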
## Differences between `/v1/files` and `/v2/files`
There are two versions of the `/files` endpoints.
@@ -235,7 +238,7 @@ To send image files to your flows through the API, see [Upload image files (v1)]
:::
This endpoint uploads files to your Langflow server's file management system.
-To use an uploaded file in a flow, send the file path to a flow with a [**File** component](/components-data#file).
+To use an uploaded file in a flow, send the file path to a flow with a [**Read File** component](/read-file).
The default file limit is 1024 MB. To configure this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables).
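+
+For example, to raise the limit to 2 GB, set the variable in your Langflow `.env` file (the value is in MB):
+
+```bash
+LANGFLOW_MAX_FILE_SIZE_UPLOAD=2048
+```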
@@ -265,10 +268,10 @@ The default file limit is 1024 MB. To configure this value, change the `LANGFLOW
}
```
-2. To use this file in your flow, add a **File** component to your flow.
+2. To use this file in your flow, add a **Read File** component to your flow.
This component loads files into flows from your local machine or Langflow file management.
-3. Run the flow, passing the `path` to the `File` component in the `tweaks` object:
+3. Run the flow, passing the `path` to the `Read-File` component in the `tweaks` object:
```text
curl --request POST \
@@ -280,7 +283,7 @@ This component loads files into flows from your local machine or Langflow file m
"output_type": "chat",
"input_type": "text",
"tweaks": {
- "File-1olS3": {
+ "Read-File-1olS3": {
"path": [
"07e5b864-e367-4f52-b647-a48035ae7e5e/3a290013-fe1e-4d3d-a454-cacae81288f3.pdf"
]
@@ -289,7 +292,7 @@ This component loads files into flows from your local machine or Langflow file m
}'
```
- To get the `File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.
+ To get the `Read-File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.
If the file path is valid, the flow runs successfully.
diff --git a/docs/docs/API-Reference/api-flows-run.mdx b/docs/docs/API-Reference/api-flows-run.mdx
index 2cc6646aa507..fbc78631beb1 100644
--- a/docs/docs/API-Reference/api-flows-run.mdx
+++ b/docs/docs/API-Reference/api-flows-run.mdx
@@ -175,7 +175,7 @@ curl -X POST \
Use the `/webhook` endpoint to start a flow by sending an HTTP `POST` request.
:::tip
-After you add a [**Webhook** component](/components-data#webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook curl** tab to get an automatically generated `POST /webhook` request for your flow.
+After you add a [**Webhook** component](/webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook curl** tab to get an automatically generated `POST /webhook` request for your flow.
For more information, see [Trigger flows with webhooks](/webhook).
:::
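+
+For example, a minimal sketch of a webhook trigger, where `$FLOW_ID` is your flow's ID and the JSON payload schema is whatever your flow expects:
+
+```bash
+curl -X POST \
+  "$LANGFLOW_SERVER_URL/api/v1/webhook/$FLOW_ID" \
+  -H "Content-Type: application/json" \
+  -d '{"any": "data"}'
+```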
diff --git a/docs/docs/API-Reference/api-monitor.mdx b/docs/docs/API-Reference/api-monitor.mdx
index b0b62136db35..0f42edc6a17e 100644
--- a/docs/docs/API-Reference/api-monitor.mdx
+++ b/docs/docs/API-Reference/api-monitor.mdx
@@ -18,9 +18,9 @@ For more information, see the following:
The Vertex build endpoints (`/monitor/builds`) are exclusively for **Playground** functionality.
-When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L143). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component.
+When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L130). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component.
-The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built.
+The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built.
### Get Vertex builds
diff --git a/docs/docs/API-Reference/api-reference-api-examples.mdx b/docs/docs/API-Reference/api-reference-api-examples.mdx
index 83b79c583280..718fd993ece0 100644
--- a/docs/docs/API-Reference/api-reference-api-examples.mdx
+++ b/docs/docs/API-Reference/api-reference-api-examples.mdx
@@ -173,12 +173,14 @@ curl -X GET \
### Get configuration
-Returns configuration details for your Langflow deployment:
+Returns configuration details for your Langflow deployment.
+Requires a [Langflow API key](/api-keys-and-authentication).
```bash
curl -X GET \
"$LANGFLOW_SERVER_URL/api/v1/config" \
- -H "accept: application/json"
+ -H "accept: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY"
```
@@ -313,6 +315,7 @@ Other endpoints are helpful for specific use cases, such as administration and f
* POST `/v1/custom_component`: Build a custom component from code and return its node.
* POST `/v1/custom_component/update`: Update an existing custom component's build config and outputs.
* POST `/v1/validate/code`: Validate a Python code snippet for a custom component.
+ * POST `/v1/validate/prompt`: Validate a prompt payload.
@@ -364,14 +367,22 @@ The following endpoints are most often used when contributing to the Langflow co
* MCP servers: The following endpoints are for managing Langflow MCP servers and MCP server connections.
They aren't typically called directly; instead, they are used to drive internal functionality in the Langflow frontend and when running flows that call MCP servers.
- * HEAD `/v1/mcp/sse`: Health check for MCP SSE.
- * GET `/v1/mcp/sse`: Open SSE stream for MCP server events.
- * POST `/v1/mcp/`: Post messages to the MCP server.
+Langflow MCP servers support both the streamable HTTP and SSE transports.
+ * HEAD `/v1/mcp/streamable`: Health check for streamable HTTP MCP.
+ * GET `/v1/mcp/streamable`: Open streamable HTTP connection for MCP server.
+ * POST `/v1/mcp/streamable`: Post messages to the MCP server via streamable HTTP.
+ * DELETE `/v1/mcp/streamable`: Close streamable HTTP connection.
+ * HEAD `/v1/mcp/sse` (LEGACY): Health check for MCP SSE.
+ * GET `/v1/mcp/sse` (LEGACY): Open SSE stream for MCP server events.
+ * POST `/v1/mcp/` (LEGACY): Post messages to the MCP server.
* GET `/v1/mcp/project/{project_id}`: List MCP-enabled tools and project auth settings.
- * HEAD `/v1/mcp/project/{project_id}/sse`: Health check for project SSE.
- * GET `/v1/mcp/project/{project_id}/sse`: Open project-scoped MCP SSE.
- * POST `/v1/mcp/project/{project_id}`: Post messages to project MCP server.
- * POST `/v1/mcp/project/{project_id}/` (trailing slash): Same as above.
+ * HEAD `/v1/mcp/project/{project_id}/streamable`: Health check for project streamable HTTP MCP.
+ * GET `/v1/mcp/project/{project_id}/streamable`: Open project-scoped streamable HTTP connection.
+ * POST `/v1/mcp/project/{project_id}/streamable`: Post messages to project MCP server via streamable HTTP.
+ * DELETE `/v1/mcp/project/{project_id}/streamable`: Close project streamable HTTP connection.
+ * HEAD `/v1/mcp/project/{project_id}/sse` (LEGACY): Health check for project SSE.
+ * GET `/v1/mcp/project/{project_id}/sse` (LEGACY): Open project-scoped MCP SSE.
+ * POST `/v1/mcp/project/{project_id}` (LEGACY): Post messages to project MCP server.
* PATCH `/v1/mcp/project/{project_id}`: Update MCP settings for flows and project auth settings.
* POST `/v1/mcp/project/{project_id}/install`: Install MCP client config for Cursor/Windsurf/Claude (local only).
* GET `/v1/mcp/project/{project_id}/installed`: Check which clients have MCP config installed.
@@ -381,6 +392,7 @@ They aren't typically called directly; instead, they are used to drive internal
* POST `/v1/custom_component`: Build a custom component from code and return its node.
* POST `/v1/custom_component/update`: Update an existing custom component's build config and outputs.
* POST `/v1/validate/code`: Validate a Python code snippet for a custom component.
+ * POST `/v1/validate/prompt`: Validate a prompt payload.
diff --git a/docs/docs/Agents/agents-tools.mdx b/docs/docs/Agents/agents-tools.mdx
index 7007acd1208c..d3c269ab2751 100644
--- a/docs/docs/Agents/agents-tools.mdx
+++ b/docs/docs/Agents/agents-tools.mdx
@@ -198,7 +198,7 @@ inputs = [
## Use flows as tools
-An agent can use your other flows as tools with the [**Run Flow** component](/components-logic#run-flow).
+An agent can use your other flows as tools with the [**Run Flow** component](/run-flow).
1. Add a **Run Flow** component to your flow.
2. Select the flow you want the agent to use as a tool.
diff --git a/docs/docs/Agents/agents.mdx b/docs/docs/Agents/agents.mdx
index a3796f7b406f..bf8cfdf1f7c6 100644
--- a/docs/docs/Agents/agents.mdx
+++ b/docs/docs/Agents/agents.mdx
@@ -32,7 +32,7 @@ For more information, see [Agent component parameters](#agent-component-paramete
4. Enter a valid credential for your selected model provider.
Make sure that the credential has permission to call the selected model.
-5. Add [**Chat Input** and **Chat Output** components](/components-io) to your flow, and then connect them to the **Agent** component.
+5. Add [**Chat Input** and **Chat Output** components](/chat-input-and-output) to your flow, and then connect them to the **Agent** component.
At this point, you have created a basic LLM-based chat flow that you can test in the **Playground**.
However, this flow only chats with the LLM.
@@ -40,10 +40,10 @@ Make sure that the credential has permission to call the selected model.

-6. Add **News Search**, **URL**, and **Calculator** components to your flow.
-7. Enable **Tool Mode** in the **News Search**, **URL**, and **Calculator** components:
+6. Add **Web Search**, **URL**, and **Calculator** components to your flow.
+7. Enable **Tool Mode** in the **Web Search**, **URL**, and **Calculator** components:
- 1. Click the **News Search** component to expose the [component's header menu](/concepts-components#component-menus), and then enable **Tool Mode**.
+ 1. Click the **Web Search** component to expose the [component's header menu](/concepts-components#component-menus), and then enable **Tool Mode**.
2. Repeat for the **URL** and **Calculator** components.
3. Connect the **Toolset** port for each tool component to the **Tools** port on the **Agent** component.
@@ -73,7 +73,7 @@ Make sure that the credential has permission to call the selected model.
9. To test a specific tool, ask the agent a question that uses one of the tools, such as `Summarize today's tech news`.
To help you debug and test your flows, the **Playground** displays the agent's tool calls, the provided input, and the raw output the agent received before generating the summary.
- With the given example, the agent should call the **News Search** component's `search_news` action.
+ With the given example, the agent should call the **Web Search** component with **Search Mode** set to **News**.
You've successfully created a basic agent flow that uses some generic tools.
@@ -127,7 +127,7 @@ To attach a component as a tool, you must enable **Tool Mode** on the component
For more information, see [Configure tools for agents](/agents-tools).
:::tip
-To allow agents to use tools from MCP servers, use the [**MCP Tools** component](/components-agents#mcp-connection).
+To allow agents to use tools from MCP servers, use the [**MCP Tools** component](/mcp-tools).
:::
### Agent memory
@@ -143,7 +143,7 @@ By default, the **Agent** component uses your Langflow installation's storage, a
The **Message History** component isn't required for default chat memory, but it is required if you want to use external chat memory like Mem0.
Additionally, the **Message History** component provides more options for sorting, filtering, and limiting memories. Although, most of these options are built-in to the **Agent** component with default values.
-For more information, see [Store chat memory](/memory#store-chat-memory) and [**Message History** component](/components-helpers#message-history).
+For more information, see [Store chat memory](/memory#store-chat-memory) and [**Message History** component](/message-history).
### Additional parameters
diff --git a/docs/docs/Agents/mcp-client.mdx b/docs/docs/Agents/mcp-client.mdx
index f35881345381..24838a8785b2 100644
--- a/docs/docs/Agents/mcp-client.mdx
+++ b/docs/docs/Agents/mcp-client.mdx
@@ -19,8 +19,8 @@ The **MCP Tools** component connects to an MCP server so that a [Langflow agent]
This component has two modes, depending on the type of server you want to access:
-* [Connect to a non-Langflow MCP server](#mcp-stdio-mode) with a JSON configuration file, server start command, or SSE URL to access tools provided by external, non-Langflow MCP servers.
-* [Connect to a Langflow MCP server](#mcp-sse-mode) to use flows from your [Langflow projects](/concepts-flows#projects) as MCP tools.
+* [Connect to a non-Langflow MCP server](#mcp-stdio-mode) with a JSON configuration file, server start command, or HTTP/SSE URL to access tools provided by external, non-Langflow MCP servers.
+* [Connect to a Langflow MCP server](#mcp-http-mode) to use flows from your [Langflow projects](/concepts-flows#projects) as MCP tools.
### Connect to a non-Langflow MCP server {#mcp-stdio-mode}
@@ -33,12 +33,11 @@ This component has two modes, depending on the type of server you want to access
* **JSON**: Paste the MCP server's JSON configuration object into the field, including required and optional parameters that you want to use, and then click **Add Server**.
* **STDIO**: Enter the MCP server's **Name**, **Command**, and any **Arguments** and **Environment Variables** the server uses, and then click **Add Server**.
For example, to start a [Fetch](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch) server, the **Command** is `uvx mcp-server-fetch`.
- * **SSE**: Enter your Langflow MCP server's **Name**, **SSE URL**, and any **Headers** and **Environment Variables** the server uses, and then click **Add Server**.
- The default **SSE URL** is `http://localhost:7860/api/v1/mcp/sse`. For more information, see [Use SSE mode](#mcp-sse-mode).
+ * **HTTP/SSE**: Enter your MCP server's **Name**, **URL**, and any **Headers** and **Environment Variables** the server uses, and then click **Add Server**.
+ The default **URL** for Langflow MCP servers is `http://localhost:7860/api/v1/mcp/project/PROJECT_ID/streamable` or `http://localhost:7860/api/v1/mcp/streamable`. For more information, see [Connect to a Langflow MCP server](#mcp-http-mode).
-
3. To use environment variables in your server command, enter each variable in the **Env** fields as key-value pairs.
:::tip
@@ -51,7 +50,7 @@ This component has two modes, depending on the type of server you want to access
If you select a specific tool, you might need to configure additional tool-specific fields. For information about tool-specific fields, see your MCP server's documentation.
- At this point, the **MCP Tools** component is serving a tool, but nothing is using the tool. The next steps explain how to make the tool available to an [**Agent** component](/components-agents) so that the agent can use the tool in its responses.
+ At this point, the **MCP Tools** component is serving a tool from the connected server, but nothing is using the tool. The next steps explain how to make the tool available to an [**Agent** component](/components-agents) so that the agent can use the tool in its responses.
5. In the [component's header menu](/concepts-components#component-menus), enable **Tool mode** so you can use the component with an agent.
@@ -67,21 +66,27 @@ This component has two modes, depending on the type of server you want to access
8. If you want the agent to be able to use more tools, repeat these steps to add more tools components with different servers or tools.
-### Connect a Langflow MCP server {#mcp-sse-mode}
+### Connect to a Langflow MCP server {#mcp-http-mode}
Every Langflow project runs a separate MCP server that exposes the project's flows as MCP tools.
For more information about your projects' MCP servers, including exposing flows as MCP tools, see [Use Langflow as an MCP server](/mcp-server).
-To leverage flows-as-tools, use the **MCP Tools** component in **Server-Sent Events (SSE)** mode to connect to a project's `/api/v1/mcp/sse` endpoint:
+Langflow MCP servers support both the **streamable HTTP** transport and **Server-Sent Events (SSE)** as a fallback.
+
+To leverage flows-as-tools, use the **MCP Tools** component to connect to a project's MCP endpoint:
+
+1. Add an **MCP Tools** component to your flow, click **Add MCP Server**, and then select **HTTP/SSE** mode.
+2. In the **MCP URL** field, enter your Langflow server's MCP endpoint.
+ - For project-specific servers: `http://localhost:7860/api/v1/mcp/project/PROJECT_ID/streamable`
+ - For global MCP server: `http://localhost:7860/api/v1/mcp/streamable`
+ - Default for Langflow Desktop: `http://localhost:7868/`
-1. Add an **MCP Tools** component to your flow, click **Add MCP Server**, and then select **SSE** mode.
-2. In the **MCP SSE URL** field, modify the default address to point at your Langflow server's SSE endpoint. The default value for other Langflow installations is `http://localhost:7860/api/v1/mcp/sse`.
-In SSE mode, all flows available from the targeted server are treated as tools.
+ All flows available from the targeted server are treated as tools.
3. In the [component's header menu](/concepts-components#component-menus), enable **Tool Mode** so you can use the component with an agent.
4. Connect the **MCP Tools** component's **Toolset** port to an **Agent** component's **Tools** port.
5. If not already present in your flow, make sure you also attach **Chat Input** and **Chat Output** components to the **Agent** component.
- 
+ 
6. Test your flow to make sure the agent uses your flows to respond to queries. Open the **Playground**, and then enter a prompt that uses a flow that you connected through the **MCP Tools** component.
@@ -91,9 +96,11 @@ In SSE mode, all flows available from the targeted server are treated as tools.
| Name | Type | Description |
|------|------|-------------|
-| command | String | Input parameter. Stdio mode only. The MCP server startup command. Default: `uvx mcp-sse-shim@latest`. |
-| sse_url | String | Input parameter. SSE mode only. The SSE URL for a Langflow project's MCP server. Default for Langflow Desktop: `http://localhost:7868/`. Default for other installations: `http://localhost:7860/api/v1/mcp/sse` |
-| tools | List[Tool] | Output parameter. [`Tool`](/data-types#tool) object containing a list of tools exposed by the MCP server. |
+| mcp_server | String | Input parameter. The MCP server to connect to. Select from previously configured servers or add a new one. |
+| tool | String | Input parameter. The specific tool to execute from the connected MCP server. Leave blank to allow access to all tools. |
+| use_cache | Boolean | Input parameter. Enable caching of MCP server and tools to improve performance. Default: `false`. |
+| verify_ssl | Boolean | Input parameter. Enable SSL certificate verification for HTTPS connections. Default: `true`. |
+| response | DataFrame | Output parameter. [`DataFrame`](/data-types#dataframe) containing the response from the executed tool. |
## Manage connected MCP servers
diff --git a/docs/docs/Agents/mcp-component-astra.mdx b/docs/docs/Agents/mcp-component-astra.mdx
index 8ef93c7fe5a1..41c159c73e9a 100644
--- a/docs/docs/Agents/mcp-component-astra.mdx
+++ b/docs/docs/Agents/mcp-component-astra.mdx
@@ -26,14 +26,16 @@ This guide demonstrates how to [use Langflow as an MCP client](/mcp-client) by u
7. Configure the **MCP Tools** component as follows:
- 1. Select **Stdio** mode.
- 2. In the **MCP server** field, add the following code to connect to an Astra DB MCP server:
+ 1. In the **MCP Server** field, click **Add MCP Server**.
+ 2. Select **Stdio** mode.
+ 3. In the **Name** field, enter a name for the MCP server.
+ 4. In the **Command** field, add the following code to connect to an Astra DB MCP server:
```bash
npx -y @datastax/astra-db-mcp
```
- 3. In the **Env** fields, add variables for `ASTRA_DB_APPLICATION_TOKEN` and `ASTRA_DB_API_ENDPOINT` with the values from your Astra database.
+ 5. In the **Environment Variables** fields, add variables for `ASTRA_DB_APPLICATION_TOKEN` and `ASTRA_DB_API_ENDPOINT` with the values from your Astra database.
:::info
Environment variables declared in your Langflow `.env` file can be referenced in your MCP server commands, but you cannot reference global variables declared in Langflow.
diff --git a/docs/docs/Agents/mcp-server.mdx b/docs/docs/Agents/mcp-server.mdx
index 4b9118eae675..b4ff6836b162 100644
--- a/docs/docs/Agents/mcp-server.mdx
+++ b/docs/docs/Agents/mcp-server.mdx
@@ -11,11 +11,13 @@ Langflow integrates with the [Model Context Protocol (MCP)](https://modelcontext
This page describes how to use Langflow as an MCP server that exposes your flows as [tools](https://modelcontextprotocol.io/docs/concepts/tools) that [MCP clients](https://modelcontextprotocol.io/clients) can use when generating responses.
+Langflow MCP servers support both the **streamable HTTP** transport and **Server-Sent Events (SSE)** as a fallback.
+
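+For example, for a project-scoped server, the transport is selected by the endpoint path. Replace `PROJECT_ID` with your project's ID:
+
+```bash
+# Streamable HTTP (preferred)
+http://localhost:7860/api/v1/mcp/project/PROJECT_ID/streamable
+
+# SSE (legacy fallback)
+http://localhost:7860/api/v1/mcp/project/PROJECT_ID/sse
+```
+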
For information about using Langflow as an MCP client and managing MCP server connections within flows, see [Use Langflow as an MCP client](/mcp-client).
## Prerequisites
-* A [Langflow project](/concepts-flows#projects) with at least one flow that has a [**Chat Output** component](/components-io#chat-output).
+* A [Langflow project](/concepts-flows#projects) with at least one flow that has a [**Chat Output** component](/chat-input-and-output).
The **Chat Output** component is required to use a flow as an MCP tool.
@@ -25,10 +27,20 @@ For information about using Langflow as an MCP client and managing MCP server co
## Serve flows as MCP tools {#select-flows-to-serve}
-Each [Langflow project](/concepts-flows#projects) has an MCP server that exposes the project's flows as tools for use by MCP clients.
+When you create a [Langflow project](/concepts-flows#projects), Langflow automatically adds the project to your MCP server's configuration and makes the project's flows available as MCP tools.
+
+If your Langflow server has authentication enabled (`AUTO_LOGIN=false`), the project's MCP server is automatically configured with API key authentication, and a new API key is generated specifically for accessing the new project's flows.
+For more information, see [MCP server authentication](#authentication).
+
+### Prevent automatic MCP server configuration for Langflow projects
-By default, all flows in a project are exposed as tools on the project's MCP server.
-You can change the exposed flows and tool metadata by managing the MCP server settings:
+To disable automatic MCP server configuration for new projects, set the `LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS` environment variable to `false`.
+For more information, see [MCP server environment variables](#mcp-server-environment-variables).
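+
+For example, in your Langflow `.env` file:
+
+```bash
+LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS=false
+```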
+
+### Selectively enable and disable MCP servers for Langflow projects
+
+Regardless of whether automatic MCP server configuration is enabled, you can selectively control which of a project's flows are exposed as MCP tools:
1. Click the **MCP Server** tab on the [**Projects** page](/concepts-flows#projects), or, when editing a flow, click **Share**, and then select **MCP Server**.
@@ -36,7 +48,7 @@ You can change the exposed flows and tool metadata by managing the MCP server se
The **Flows/Tools** section lists the flows that are currently being served as tools on this MCP server.
-2. To toggle exposed flows, click **Edit Tools**, and then select the flows that you want exposed as tools.
+2. To toggle exposed flows, click **Edit Tools**, and then select the flows that you want exposed as tools.
To prevent a flow from being used as a tool, clear the checkbox in the first column.
3. Close the **MCP Server Tools** dialog to save your changes.
@@ -52,7 +64,7 @@ To edit the names and descriptions of flow tools on a Langflow MCP server, do th
1. Click the **MCP Server** tab on the [**Projects** page](/concepts-flows#projects), or, when editing a flow, click **Share**, and then select **MCP Server**.
-2. Click **Edit Tools**.
+2. Click **Edit Tools**.
3. Click the **Description** or **Tool** that you want to edit:
@@ -134,14 +146,14 @@ For example:
"command": "uvx",
"args": [
"mcp-proxy",
- "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/sse"
+ "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
]
}
}
}
```
- The **MCP Server** tab automatically populates the `PROJECT_NAME`, `LANGFLOW_SERVER_ADDRESS`, and `PROJECT_ID` values.
+ The **MCP Server** tab automatically populates the `LANGFLOW_SERVER_ADDRESS` and `PROJECT_ID` values.
The default Langflow server address is `http://localhost:7860`.
If you are using a [public Langflow server](/deployment-public-server), the server address is automatically included.
@@ -158,7 +170,7 @@ For example:
"command": "uvx",
"args": [
"mcp-proxy",
- "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/sse"
+ "http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable"
],
"env": {
"KEY": "VALUE"
@@ -207,6 +219,8 @@ For more information, see the MCP documentation for your client, such as [Cursor
Each [Langflow project](/concepts-flows#projects) has its own MCP server with its own MCP server authentication settings.
+When you create a new project, Langflow automatically configures authentication for the project's MCP server based on your Langflow server's authentication settings. If authentication is enabled (`AUTO_LOGIN=false`), the project is automatically configured with API key authentication, and a new API key is generated for accessing the project's flows.
+
To configure authentication for a Langflow MCP server, go to the **Projects** page in Langflow, click the **MCP Server** tab, click **Edit Auth**, and then select your preferred authentication method:
@@ -287,6 +301,7 @@ The following environment variables set behaviors related to your Langflow proje
| `LANGFLOW_MCP_SERVER_ENABLE_PROGRESS_NOTIFICATIONS` | Boolean | `False` | If `true`, Langflow MCP servers send progress notifications. |
| `LANGFLOW_MCP_SERVER_TIMEOUT` | Integer | `20` | The number of seconds to wait before an MCP server operation expires due to poor connectivity or long-running requests. |
| `LANGFLOW_MCP_MAX_SESSIONS_PER_SERVER` | Integer | `10` | Maximum number of MCP sessions to keep per unique server. |
+| `LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS` | Boolean | `True` | Whether to automatically add newly created projects to the user's MCP servers configuration. If `false`, projects must be manually added to MCP servers. |
{/* The anchor on this section (deploy-your-server-externally) is currently a link target in the Langflow UI. Do not change. */}
### Deploy your Langflow MCP server externally {#deploy-your-server-externally}
@@ -321,7 +336,7 @@ The default address is `http://localhost:6274`.
- **Command**: `uvx`
- **Arguments**: Enter the following list of arguments, separated by spaces. Replace the values for `YOUR_API_KEY`, `LANGFLOW_SERVER_ADDRESS`, and `PROJECT_ID` with the values from your Langflow MCP server. For example:
```bash
- mcp-proxy --headers x-api-key YOUR_API_KEY http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/sse
+ mcp-proxy --headers x-api-key YOUR_API_KEY http://LANGFLOW_SERVER_ADDRESS/api/v1/mcp/project/PROJECT_ID/streamable
```
@@ -338,9 +353,9 @@ The default address is `http://localhost:6274`.
- **Transport Type**: Select **SSE**.
- - **URL**: Enter the Langflow MCP server's `sse` endpoint. For example:
+ - **URL**: Enter the Langflow MCP server's endpoint. For example:
```bash
- http://localhost:7860/api/v1/mcp/project/d359cbd4-6fa2-4002-9d53-fa05c645319c/sse
+ http://localhost:7860/api/v1/mcp/project/d359cbd4-6fa2-4002-9d53-fa05c645319c/streamable
```
diff --git a/docs/docs/Components/api-request.mdx b/docs/docs/Components/api-request.mdx
new file mode 100644
index 000000000000..c14eed47a251
--- /dev/null
+++ b/docs/docs/Components/api-request.mdx
@@ -0,0 +1,40 @@
+---
+title: API Request
+slug: /api-request
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **API Request** component constructs and sends HTTP requests using URLs or curl commands:
+
+* **URL mode**: Enter one or more comma-separated URLs, and then select the method for the request to each URL.
+* **curl mode**: Enter the curl command to execute.
+
+You can enable additional request options and fields in the component's parameters.
+
+Returns a [`Data` object](/data-types#data) containing the response.
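+
+For example, in **curl mode**, you can paste a complete command like the following sketch, where the URL, header, and body are placeholders. The component populates its method, headers, and body parameters from the command's arguments:
+
+```bash
+curl -X POST "https://api.example.com/v1/items" \
+  -H "Content-Type: application/json" \
+  -d '{"name": "example-item"}'
+```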
+
+For provider-specific API components, see [**Bundles**](/components-bundle-components).
+
+## API Request parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| mode | Mode | Input parameter. Set the mode to either **URL** or **curl**. |
+| urls | URL | Input parameter. Enter one or more comma-separated URLs for the request. |
+| curl | curl | Input parameter. **curl mode** only. Enter a complete curl command. Other component parameters are populated from the command arguments. |
+| method | Method | Input parameter. The HTTP method to use. |
+| query_params | Query Parameters | Input parameter. The query parameters to append to the URL. |
+| body | Body | Input parameter. The body to send with POST, PATCH, and PUT requests as a dictionary. |
+| headers | Headers | Input parameter. The headers to send with the request as a dictionary. |
+| timeout | Timeout | Input parameter. The timeout to use for the request. |
+| follow_redirects | Follow Redirects | Input parameter. Whether to follow HTTP redirects. Starting in Langflow version 1.7, the **Follow Redirects** parameter is disabled (`false`) by default to prevent SSRF bypass attacks where a public URL redirects to internal resources. Only enable redirects if you trust the target server. For more information, see [SSRF protection environment variables](/api-keys-and-authentication#ssrf-protection). |
+| save_to_file | Save to File | Input parameter. Whether to save the API response to a temporary file. Default: Disabled (`false`). |
+| include_httpx_metadata | Include HTTPx Metadata | Input parameter. Whether to include properties such as `headers`, `status_code`, `response_headers`, and `redirection_history` in the output. Default: Disabled (`false`). |
+
diff --git a/docs/docs/Components/batch-run.mdx b/docs/docs/Components/batch-run.mdx
new file mode 100644
index 000000000000..8d813bc0c63f
--- /dev/null
+++ b/docs/docs/Components/batch-run.mdx
@@ -0,0 +1,65 @@
+---
+title: Batch Run
+slug: /batch-run
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **Batch Run** component runs a language model over _each row of one text column_ in a [`DataFrame`](/data-types#dataframe), and then returns a new `DataFrame` with the original text and an LLM response.
+The output contains the following columns:
+
+* `text_input`: The original text from the input `DataFrame`
+* `model_response`: The model's response for each input
+* `batch_index`: The 0-indexed processing order for all rows in the `DataFrame`
+* `metadata` (optional): Additional information about the processing
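+
+For example, with a two-row input column and a system message that asks for one-line summaries, the output `DataFrame` might look like the following sketch (actual responses depend on your model and system message):
+
+```text
+text_input     model_response                batch_index
+Alice Example  A one-line summary for Alice  0
+Bob Example    A one-line summary for Bob    1
+```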
+
+## Use the Batch Run component in a flow
+
+If you pass the **Batch Run** output to a [**Parser** component](/parser), you can use variables in the parsing template to reference these keys, such as `{text_input}` and `{model_response}`.
+This is demonstrated in the following example.
+
+
+
+1. Connect any language model component to a **Batch Run** component's **Language model** port.
+
+2. Connect `DataFrame` output from another component to the **Batch Run** component's **DataFrame** input.
+For example, you could connect a **Read File** component with a CSV file.
+
+3. In the **Batch Run** component's **Column Name** field, enter the name of the column in the incoming `DataFrame` that contains the text to process.
+For example, if you want to extract text from a `name` column in a CSV file, enter `name` in the **Column Name** field.
+
+4. Connect the **Batch Run** component's **Batch Results** output to a **Parser** component's **DataFrame** input.
+
+5. Optional: In the **Batch Run** [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **System Message** parameter, click **Close**, and then enter an instruction for how you want the LLM to process each cell extracted from the file.
+For example, `Create a business card for each name.`
+
+6. In the **Parser** component's **Template** field, enter a template for processing the **Batch Run** component's new `DataFrame` columns (`text_input`, `model_response`, and `batch_index`):
+
+ For example, this template uses three columns from the resulting, post-batch `DataFrame`:
+
+ ```text
+ record_number: {batch_index}, name: {text_input}, summary: {model_response}
+ ```
+
+7. To test the processing, click the **Parser** component, click **Run component**, and then click **Inspect output** to view the final `DataFrame`.
+
+ You can also connect a **Chat Output** component to the **Parser** component if you want to see the output in the **Playground**.
+
+## Batch Run parameters
+
+
+
+| Name | Type | Description |
+|------|------|-------------|
+| model | HandleInput | Input parameter. Connect the 'Language Model' output from a language model component. Required. |
+| system_message | MultilineInput | Input parameter. A multi-line system instruction for all rows in the DataFrame. |
+| df | DataFrameInput | Input parameter. The DataFrame whose column is treated as text messages, as specified by 'column_name'. Required. |
+| column_name | MessageTextInput | Input parameter. The name of the DataFrame column to treat as text messages. If empty, all columns are formatted in TOML. |
+| output_column_name | MessageTextInput | Input parameter. The name of the column where the model's response is stored. Default: `model_response`. |
+| enable_metadata | BoolInput | Input parameter. If `True`, add metadata to the output DataFrame. |
+| batch_results | DataFrame | Output parameter. A DataFrame with all original columns plus the model's response column. |
+
diff --git a/docs/docs/Components/bundles-aiml.mdx b/docs/docs/Components/bundles-aiml.mdx
index ec2815347b66..d4da5126b10e 100644
--- a/docs/docs/Components/bundles-aiml.mdx
+++ b/docs/docs/Components/bundles-aiml.mdx
@@ -13,7 +13,7 @@ This page describes the components that are available in the **AI/ML** bundle.
## AI/ML API text generation
This component creates a `ChatOpenAI` model instance using the AI/ML API.
-The output is exclusively a **Language Model** ([`LanguageModel`](/data-types#languagemodel)) that you can connect to another LLM-driven component, such as a **Smart Function** component.
+The output is exclusively a **Language Model** ([`LanguageModel`](/data-types#languagemodel)) that you can connect to another LLM-driven component, such as a **Smart Transform** component.
For more information, see the [AI/ML API Langflow integration documentation](https://docs.aimlapi.com/integrations/langflow) and [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-altk.mdx b/docs/docs/Components/bundles-altk.mdx
new file mode 100644
index 000000000000..5277398a02f7
--- /dev/null
+++ b/docs/docs/Components/bundles-altk.mdx
@@ -0,0 +1,41 @@
+---
+title: ALTK
+slug: /bundles-altk
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+ [**Bundles**](/components-bundle-components) contain custom components that support specific third-party integrations with Langflow.
+
+:::tip
+The Agent Lifecycle Toolkit (ALTK) has its own features called _components_, which are distinct from Langflow components. All ALTK components are available through Langflow's **ALTK Agent** component.
+:::
+
+The **ALTK Agent** component implements components from the [Agent Lifecycle Toolkit](https://github.com/AgentToolkit/agent-lifecycle-toolkit). Each ALTK feature can be enabled or disabled independently.
+
+* **Pre-tool validation**: Validates tool calls before execution to check for appropriateness and correctness using the [SPARC](https://agenttoolkit.github.io/agent-lifecycle-toolkit/concepts/components/sparc/) reflection component. This validation prevents agents from executing invalid tool calls.
+
+* **Post-tool JSON processing**: Processes large JSON tool responses by generating Python code on the fly to extract relevant data. This helps reduce context size and improves the agent's ability to work with large tool responses, especially when dealing with APIs that return extensive JSON data. The component outputs a [Message](/data-types#message) containing the agent's response, which is passed to the next component in the flow in place of the full JSON data.
+
+For more information, see the [Agent Lifecycle Toolkit documentation](https://agenttoolkit.github.io/agent-lifecycle-toolkit/).
+
+For an example of the ALTK component in Langflow, see the video tutorial [ALTK in Langflow: Reliably handle JSON responses in your AI agent](https://www.youtube.com/watch?v=YNwPBK_KxXY).
+
+### ALTK Agent parameters
+
+
+
+| Name | Type | Description |
+|------|------|-------------|
+| agent_llm | Dropdown | Input parameter. The model provider the agent uses to generate responses. |
+| enable_tool_validation | Boolean | Input parameter. If enabled, tool calls are validated using SPARC before execution to check for appropriateness and correctness. Default: `true`. |
+| enable_post_tool_reflection | Boolean | Input parameter. If enabled, tool outputs are automatically processed through JSON processing when the output is JSON and exceeds the size threshold. Default: `true`. |
+| response_processing_size_threshold | Integer | Input parameter. Tool output is post-processed only if the response length exceeds this character threshold. Default: `100`. Advanced parameter. |
+| tools | List[Tool] | Input parameter. The list of tools available to the agent. |
+| system_prompt | String | Input parameter. The system prompt to provide context to the agent. |
+| input_value | String | Input parameter. The user's input to the agent. |
+| memory | Memory | Input parameter. The memory for the agent to use for context persistence. |
+| max_iterations | Integer | Input parameter. The maximum number of iterations to allow the agent to execute. |
+| verbose | Boolean | Input parameter. Whether to print the agent's intermediate steps. |
+| handle_parsing_errors | Boolean | Input parameter. Whether to handle parsing errors in the agent. |
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-amazon.mdx b/docs/docs/Components/bundles-amazon.mdx
index fd75ad8d67d1..a41ad01d5ad6 100644
--- a/docs/docs/Components/bundles-amazon.mdx
+++ b/docs/docs/Components/bundles-amazon.mdx
@@ -10,34 +10,39 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
This page describes the components that are available in the **Amazon** bundle.
-## Amazon Bedrock
+## Amazon Bedrock Converse
-This component generates text using [Amazon Bedrock LLMs](https://docs.aws.amazon.com/bedrock).
+This component generates text using [Amazon Bedrock LLMs](https://docs.aws.amazon.com/bedrock) with the Bedrock Converse API.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Specifically, the **Language Model** output is an instance of [`ChatBedrock`](https://docs.langchain.com/oss/python/integrations/chat/bedrock) configured according to the component's parameters.
+Specifically, the **Language Model** output is an instance of [`ChatBedrockConverse`](https://docs.langchain.com/oss/python/integrations/chat/bedrock) configured according to the component's parameters.
-Use the **Language Model** output when you want to use an Amazon Bedrock model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Amazon Bedrock model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
-### Amazon Bedrock parameters
+### Amazon Bedrock Converse parameters
| Name | Type | Description |
|------|------|-------------|
-| input | String | Input parameter. The input string for text generation. |
+| input_value | String | Input parameter. The input string for text generation. |
| system_message | String | Input parameter. A system message to pass to the model. |
| stream | Boolean | Input parameter. Whether to stream the response. Only works in chat. Default: `false`. |
-| model_id | String | Input parameter. The Amazon Bedrock model to use. |
-| aws_access_key_id | SecretString | Input parameter. AWS Access Key for authentication. |
-| aws_secret_access_key | SecretString | Input parameter. AWS Secret Key for authentication. |
-| aws_session_token | SecretString | Input parameter. The session key for your AWS account. |
-| credentials_profile_name | String | Input parameter. Name of the AWS credentials profile to use. |
+| model_id | String | Input parameter. The Amazon Bedrock model to use. |
+| aws_access_key_id | SecretString | Input parameter. AWS Access Key for authentication. Required. |
+| aws_secret_access_key | SecretString | Input parameter. AWS Secret Key for authentication. Required. |
+| aws_session_token | SecretString | Input parameter. The session key for your AWS account. Only needed for temporary credentials. |
+| credentials_profile_name | String | Input parameter. Name of the AWS credentials profile to use. If not provided, the default profile will be used. |
| region_name | String | Input parameter. AWS region where your Bedrock resources are located. Default: `us-east-1`. |
-| model_kwargs | Dictionary | Input parameter. Additional keyword arguments to pass to the model. |
| endpoint_url | String | Input parameter. Custom endpoint URL for a Bedrock service. |
+| temperature | Float | Input parameter. Controls randomness in output. Higher values make output more random. Default: `0.7`. |
+| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Default: `4096`. |
+| top_p | Float | Input parameter. Nucleus sampling parameter. Controls diversity of output. Default: `0.9`. |
+| top_k | Integer | Input parameter. Limits the number of highest probability vocabulary tokens to consider. Note: Not all models support top_k. Default: `250`. |
+| disable_streaming | Boolean | Input parameter. If `true`, disables streaming responses. Useful for batch processing. Default: `false`. |
+| additional_model_fields | Dictionary | Input parameter. Additional model-specific parameters for fine-tuning behavior. |
## Amazon Bedrock Embeddings
@@ -62,7 +67,7 @@ For more information about using embedding model components in flows, see [Embed
## S3 Bucket Uploader
The **S3 Bucket Uploader** component uploads files to an Amazon S3 bucket.
-It is designed to process `Data` input from a **File** or **Directory** component.
+It is designed to process `Data` input from a **Read File** or **Directory** component.
If you upload `Data` from other components, test the results before running the flow in production.
Requires the `boto3` package, which is included in your Langflow installation.
@@ -81,4 +86,22 @@ The component produces logs but it doesn't emit output to the flow.
| **Strategy for file upload** | String | Input parameter. The file upload strategy. **Store Data** (default) iterates over `Data` inputs, logs the file path and text content, and uploads each file to the specified S3 bucket if both file path and text content are available. **Store Original File** iterates through the list of data inputs, retrieves the file path from each data item, uploads the file to the specified S3 bucket if the file path is available, and logs the file path being uploaded. |
| **Data Inputs** | Data | Input parameter. The `Data` input to iterate over and upload as files in the specified S3 bucket. |
| **S3 Prefix** | String | Input parameter. Optional prefix (folder path) within the S3 bucket where files will be uploaded. |
-| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
\ No newline at end of file
+| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
+
+## Legacy Amazon components
+
+import PartialLegacy from '@site/docs/_partial-legacy.mdx';
+
+
+
+The following Amazon components are in legacy status:
+
+
+Amazon Bedrock
+
+The **Amazon Bedrock** component was deprecated in favor of the **Amazon Bedrock Converse** component, which uses the Bedrock Converse API for conversation handling.
+
+To use Amazon Bedrock models in your flows, use the [**Amazon Bedrock Converse**](#amazon-bedrock-converse) component instead.
+
+
+
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-anthropic.mdx b/docs/docs/Components/bundles-anthropic.mdx
index 1e0340ec212f..0c3f3f115319 100644
--- a/docs/docs/Components/bundles-anthropic.mdx
+++ b/docs/docs/Components/bundles-anthropic.mdx
@@ -19,7 +19,7 @@ The **Anthropic** component generates text using Anthropic Chat and Language mod
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`ChatAnthropic`](https://docs.langchain.com/oss/python/integrations/chat/anthropic) configured according to the component's parameters.
-Use the **Language Model** output when you want to use an Anthropic model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Anthropic model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-arxiv.mdx b/docs/docs/Components/bundles-arxiv.mdx
index 433b54f23767..d1638a256fb9 100644
--- a/docs/docs/Components/bundles-arxiv.mdx
+++ b/docs/docs/Components/bundles-arxiv.mdx
@@ -25,4 +25,4 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
## See also
-* [**Web Search** component](/components-data#web-search)
\ No newline at end of file
+* [**Web Search** component](/web-search)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-azure.mdx b/docs/docs/Components/bundles-azure.mdx
index 2d5faf1d091b..3d20c8d0078f 100644
--- a/docs/docs/Components/bundles-azure.mdx
+++ b/docs/docs/Components/bundles-azure.mdx
@@ -17,7 +17,7 @@ This component generates text using [Azure OpenAI LLMs](https://learn.microsoft.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`AzureChatOpenAI`](https://docs.langchain.com/oss/python/integrations/chat/azure_chat_openai) configured according to the component's parameters.
-Use the **Language Model** output when you want to use an Azure OpenAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Azure OpenAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-baidu.mdx b/docs/docs/Components/bundles-baidu.mdx
index 9bffcc6a8d35..3dbffb20683f 100644
--- a/docs/docs/Components/bundles-baidu.mdx
+++ b/docs/docs/Components/bundles-baidu.mdx
@@ -15,6 +15,6 @@ The **Qianfan** component generates text using Qianfan's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Qianfan model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Qianfan model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models) and the [Qianfan documentation](https://github.com/baidubce/bce-qianfan-sdk).
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-bing.mdx b/docs/docs/Components/bundles-bing.mdx
index 1e640f92b99e..0d7818be1ab0 100644
--- a/docs/docs/Components/bundles-bing.mdx
+++ b/docs/docs/Components/bundles-bing.mdx
@@ -29,5 +29,5 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
## See also
-* [**Web Search** component](/components-data#web-search)
+* [**Web Search** component](/web-search)
* [**SearchApi** bundle](/bundles-searchapi)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-cassandra.mdx b/docs/docs/Components/bundles-cassandra.mdx
index 0253eae37947..5b49307d442e 100644
--- a/docs/docs/Components/bundles-cassandra.mdx
+++ b/docs/docs/Components/bundles-cassandra.mdx
@@ -68,7 +68,7 @@ The **Cassandra Chat Memory** component retrieves and stores chat messages using
Chat memories are passed between memory storage components as the [`Memory`](/data-types#memory) data type.
Specifically, the component creates an instance of `CassandraChatMessageHistory`, which is a LangChain chat message history class that uses a Cassandra database for storage.
-For more information about using external chat memory in flows, see the [**Message History** component](/components-helpers#message-history).
+For more information about using external chat memory in flows, see the [**Message History** component](/message-history).
### Cassandra Chat Memory parameters
diff --git a/docs/docs/Components/bundles-chroma.mdx b/docs/docs/Components/bundles-chroma.mdx
index c720c0ffda30..63779a79157e 100644
--- a/docs/docs/Components/bundles-chroma.mdx
+++ b/docs/docs/Components/bundles-chroma.mdx
@@ -39,7 +39,7 @@ The following example flow uses one **Chroma DB** component for both reads and w

-* When writing, it splits `Data` from a [**URL** component](/components-data#url) into chunks, computes embeddings with attached **Embedding Model** component, and then loads the chunks and embeddings into the Chroma vector store.
+* When writing, it splits `Data` from a [**URL** component](/url) into chunks, computes embeddings with attached **Embedding Model** component, and then loads the chunks and embeddings into the Chroma vector store.
To trigger writes, click **Run component** on the **Chroma DB** component.
* When reading, it uses chat input to perform a similarity search on the vector store, and then print the search results to the chat.
diff --git a/docs/docs/Components/bundles-cohere.mdx b/docs/docs/Components/bundles-cohere.mdx
index b01ee8248f4d..13f5cff22069 100644
--- a/docs/docs/Components/bundles-cohere.mdx
+++ b/docs/docs/Components/bundles-cohere.mdx
@@ -18,7 +18,7 @@ This component generates text using Cohere's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Cohere model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Cohere model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/_bundles-cometapi.mdx b/docs/docs/Components/bundles-cometapi.mdx
similarity index 91%
rename from docs/docs/Components/_bundles-cometapi.mdx
rename to docs/docs/Components/bundles-cometapi.mdx
index 64bf440dcb3a..e5dfb9014490 100644
--- a/docs/docs/Components/_bundles-cometapi.mdx
+++ b/docs/docs/Components/bundles-cometapi.mdx
@@ -35,12 +35,11 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
| input_value | String | Input parameter. The input text to send to the model. |
| system_message | String | Input parameter. A system message that helps set the behavior of the assistant. |
| max_tokens | Integer | Input parameter. The maximum number of tokens to generate. Set to 0 for unlimited tokens. |
-| temperature | Float | Input parameter. Controls randomness in the output. Range: [0.0, 1.0]. Default: 0.1. |
-| seed | Integer | Input parameter. The seed controls the reproducibility of the job (advanced). |
-| model_kwargs | Dict | Input parameter. Additional keyword arguments to pass to the model (advanced). |
-| json_mode | Boolean | Input parameter. If True, it will output JSON regardless of passing a schema (advanced). |
+| temperature | Float | Input parameter. Controls randomness in the output. Range: `[0.0, 2.0]`. Default: `0.7`. |
+| seed | Integer | Input parameter. The seed controls the reproducibility of the job. |
+| model_kwargs | Dict | Input parameter. Additional keyword arguments to pass to the model. |
+| json_mode | Boolean | Input parameter. If `true`, the model outputs JSON regardless of whether a schema is passed. |
| stream | Boolean | Input parameter. Whether to stream the response. Default: false. |
-| output_parser | OutputParser | Input parameter. The parser to use to parse the output of the model (advanced). |
| model | LanguageModel | Output parameter. An instance of ChatOpenAI configured with CometAPI parameters. |
## Use CometAPI in a flow
diff --git a/docs/docs/Components/bundles-composio.mdx b/docs/docs/Components/bundles-composio.mdx
index 6406c95cb109..089f102c3b85 100644
--- a/docs/docs/Components/bundles-composio.mdx
+++ b/docs/docs/Components/bundles-composio.mdx
@@ -17,20 +17,74 @@ Composio components are primarily used as [tools for agents](/agents-tools).
The **Composio** bundle includes an aggregate **Composio Tools** component and the following single-service components:
+<details>
+<summary>Composio single-service components</summary>
+
+- **AgentQL**
+- **Agiled**
+- **Airtable**
+- **Apollo**
+- **Asana**
+- **Attio**
+- **Bitbucket**
+- **Bolna**
+- **Brightdata**
+- **Calendly**
+- **Canva**
+- **Canvas**
+- **Coda**
+- **Contentful**
+- **Digicert**
+- **Discord**
- **Dropbox**
+- **ElevenLabs**
+- **Exa**
+- **Figma**
+- **Finage**
+- **Firecrawl**
+- **Fireflies**
+- **Fixer**
+- **Flexisign**
+- **Freshdesk**
- **GitHub**
- **Gmail**
+- **Google BigQuery**
- **Google Calendar**
+- **Google Classroom**
+- **Google Docs**
- **Google Meet**
+- **Google Sheets**
- **Google Tasks**
+- **Heygen**
+- **Instagram**
+- **Jira**
+- **Jotform**
+- **Klaviyo**
- **Linear**
+- **Listennotes**
+- **Mem0**
+- **Miro**
+- **Missive**
+- **Notion**
+- **OneDrive**
- **Outlook**
+- **Pandadoc**
+- **PeopleDataLabs**
+- **PerplexityAI**
- **Reddit**
-- **Slack** (your Slack account)
-- **Slackbot** (bot integration)
+- **SerpAPI**
+- **Slack**
+- **Slackbot**
+- **Snowflake**
- **Supabase**
+- **Tavily**
+- **TimelinesAI**
- **Todoist**
-- **Youtube**
+- **Wrike**
+- **YouTube**
+
+</details>
+
The **Composio Tools** component is an access point for multiple Composio services (tools).
However, most of these services are also available as single-service components, which are recommended over the **Composio Tools** component.
diff --git a/docs/docs/Components/bundles-cuga.mdx b/docs/docs/Components/bundles-cuga.mdx
new file mode 100644
index 000000000000..f17c983803fc
--- /dev/null
+++ b/docs/docs/Components/bundles-cuga.mdx
@@ -0,0 +1,95 @@
+---
+title: CUGA
+slug: /bundles-cuga
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+[**Bundles**](/components-bundle-components) contain custom components that support specific third-party integrations with Langflow.
+
+:::warning Model Provider Limitations
+The **CUGA** component only supports **OpenAI** and **watsonx** models. To use other model providers, use the core [**Agent** component](/agents) instead.
+:::
+
+The **CUGA (ConfigUrable Generalist Agent)** component is an advanced AI agent that executes complex tasks using tools, optional browser automation, and structured output generation.
+
+The **CUGA** component can be used in flows in place of an [**Agent** component](/agents).
+Like the core **Agent** component, the **CUGA** component can use tools connected to its **Tools** port, and can be used as a tool itself.
+It also includes the following features:
+
+* Browser automation for web scraping with [Playwright](https://playwright.dev/docs/intro).
+To enable web scraping, set the component's `browser_enabled` parameter to `true`, and specify a single URL in the `web_apps` parameter, in the format `https://example.com`, as shown in the sketch below.
+* Load custom instructions for the agent to execute.
+To use this feature, attach Markdown files containing agent instructions to the component's **Instructions** input.
+
+For more information, see the [CUGA project repository](https://github.com/cuga-project/cuga-agent).
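+
+As a sketch of enabling browser automation at runtime, you can set both parameters through the run API's `tweaks` object. The flow ID, API key, and component ID below are hypothetical placeholders:
+
+```python
+# A sketch, not a definitive recipe: toggle CUGA's browser per request.
+import requests
+
+response = requests.post(
+    "http://localhost:7860/api/v1/run/FLOW_ID",  # replace FLOW_ID
+    headers={"x-api-key": "LANGFLOW_API_KEY"},   # replace with your key
+    json={
+        "input_value": "Summarize the landing page",
+        "output_type": "chat",
+        "input_type": "text",
+        "tweaks": {
+            "CUGA-xxxxx": {                      # hypothetical component ID
+                "browser_enabled": True,
+                "web_apps": "https://example.com",
+            }
+        },
+    },
+)
+print(response.json())
+```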
+
+## Use the CUGA component in a flow
+
+For demonstration purposes, the following example modifies a template flow to use the **CUGA** component.
+
+
+
+1. Create a flow based on the **Simple Agent** template, and then replace the **Agent** component with the **CUGA** component.
+2. Connect an [**MCP Tools** component](/mcp-client) and a [**Calculator** component](/calculator) to the **CUGA** component's **Tools** port, and then connect the **MCP Tools** component to any MCP server.
+ This example connects to a server containing sales data for a business organization.
+3. Add a [**Read File** component](/read-file), and then connect it to the **CUGA** component's **Instructions** port.
+ Alternatively, click **Edit text** to open the **Edit text content** pane, and enter your instructions directly into the **CUGA** component.
+4. Create a Markdown file on your computer called `instructions.md`, and then insert the following content.
+ It's important to clearly format the document with `## Plan` and `## Answer` headings so that the agent can understand your instructions.
+ ```markdown
+ ## Plan
+
+ - Break down complex queries into subtasks
+ - Prioritize information gathering before execution
+ - Consider dependencies between actions
+ - Validate intermediate results before proceeding
+
+ ## Answer
+
+ - Provide concise summaries with key findings
+ - Include relevant data points and metrics
+ - Cite sources when using MCP tool results
+ - Use clear structure and formatting for readability
+ ```
+5. In the **Read File** component, click **Add File**, and then attach your `instructions.md` file.
+
+6. Open the **Playground**, and then ask the agent a question that can use your connected MCP server.
+ For example, ask about the sales data provided by the MCP server, such as `Which accounts are available?`.
+ The agent describes the tool calls it makes, and then returns an answer according to your instructions.
+ Although the list of available accounts is very large, the **CUGA** component returns a concise summary, as requested in the instructions file.
+ ```
+ Based on the available data, here are the accounts:
+
+ Summit Solutions (NY) - Revenue: $1,200,000
+ Pacific Ventures (CA) - Revenue: $9,500,000
+ Lone Star Corp (TX) - Revenue: $4,500,000
+ Mountain Peak Systems (CO) - Revenue: $2,100,000
+ Digital Dynamics (CA) - Revenue: $5,500,000
+ Cascade Computing (WA) - Revenue: $4,300,000
+ Data Flow Systems (CA) - Revenue: $8,900,000
+ Rocky Mountain Enterprises (CO) - Revenue: $3,200,000
+ Blue Sky Partners (TX) - Revenue: $500,000
+ Liberty Manufacturing (PA) - Revenue: $3,400,000
+
+ This is a partial list; there are more accounts available. The total revenue across all accounts is $210,200,000.
+ ```
+
+## CUGA parameters
+
+<PartialParams />
+
+| Name | Type | Description |
+|------|------|-------------|
+| agent_llm | Dropdown | Model provider for the agent. |
+| instructions | Multiline String | Custom instructions that define the agent's planning and answers. Can be provided directly or through Markdown files. Formatting is important for the agent to understand the instructions. See [Use the CUGA component in a flow](#use-the-cuga-component-in-a-flow). |
+| n_messages | Integer | Number of chat history messages to retrieve. Useful for maintaining context in ongoing conversations identified by `session_id`. Default: `100`. |
+| format_instructions | Multiline String | Template for structured output. |
+| output_schema | Table | When `output_schema` is provided, structured responses are validated against a dynamically built schema. Invalid items are returned alongside validation errors. Fields: `name`, `description`, `type` (`str`, `int`, `float`, `bool`, `dict`), and `multiple` (whether the field holds a list of values). |
+| add_current_date_tool | Boolean | If true, adds a tool that returns the current date. Default: `true`. |
+| lite_mode | Boolean | Set to `true` to enable CugaLite mode for faster execution when using a smaller number of tools. Default: `true`. |
+| lite_mode_tool_threshold | Integer | The threshold to automatically enable CugaLite. If the CUGA component has fewer tools connected than this threshold, CugaLite is activated. Default: `25`. |
+| decomposition_strategy | Dropdown | Strategy for task decomposition. `flexible` allows multiple subtasks per app. `exact` enforces one subtask per app. Default: `flexible`. |
+| browser_enabled | Boolean | Enable a built-in browser for web scraping and search. Allows the agent to use general web search in its responses. Disable (`false`) to restrict the agent to the context provided in the flow. Default: `false`. |
+| web_apps | Multiline String | When `browser_enabled` is `true`, specify a single URL, such as `https://example.com`, that the agent can open with the built-in browser. The CUGA component can access both public and private network resources; it has no built-in mechanism to restrict access to public resources only. |
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-datastax.mdx b/docs/docs/Components/bundles-datastax.mdx
index 357e7cc1cae9..1534750fef56 100644
--- a/docs/docs/Components/bundles-datastax.mdx
+++ b/docs/docs/Components/bundles-datastax.mdx
@@ -194,136 +194,11 @@ The output is a list of [`Data`](/data-types#data) objects containing the query
| Static Filters | Dict | Input parameter. Attribute-value pairs used to filter query results. |
| Limit | String | Input parameter. The number of records to return. |
-## Astra DB Tool
-
-The **Astra DB Tool** component enables searching data in Astra DB collections, including hybrid search, vector search, and regular filter-based search.
-Specialized searches require that the collection is pre-configured with the required parameters.
-
-Outputs a list of [`Data`](/data-types#data) objects containing the query results from Astra DB. Each `Data` object contains the document fields specified by the projection attributes. Limited by the `number_of_results` parameter and the upper limit of the Astra DB Data API, depending on the type of search.
-
-You can use the component to execute queries directly as isolated steps in a flow, or you can connect it as a [tool for an agent](/agents-tools) to allow the agent to query data from Astra DB collections as needed to respond to user queries.
-For more information, see [Use Langflow agents](/agents).
-
-
-
-### Astra DB Tool parameters
-
-The following parameters are for the **Astra DB Tool** component overall.
-
-The values for **Collection Name**, **Astra DB Application Token**, and **Astra DB API Endpoint** are found in your Astra DB deployment. For more information, see the [Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/databases/create-database.html).
-
-| Name | Type | Description |
-|-------------------|--------|--------|
-| Tool Name | String | Input parameter. The name used to reference the tool in the agent's prompt. |
-| Tool Description | String | Input parameter. A brief description of the tool. This helps the model decide when to use it. |
-| Keyspace Name | String | Input parameter. The name of the keyspace in Astra DB. Default: `default_keyspace` |
-| Collection Name | String | Input parameter. The name of the Astra DB collection to query. |
-| Token | SecretString | Input parameter. The authentication token for accessing Astra DB. |
-| API Endpoint | String | Input parameter. The Astra DB API endpoint. |
-| Projection Fields | String | Input parameter. Comma-separated list of attributes to return from matching documents. The default is the default projection, `*`, which returns all attributes except reserved fields like `$vector`. |
-| Tool Parameters | Dict | Input parameter. [Astra DB Data API `find` filters](https://docs.datastax.com/en/astra-db-serverless/api-reference/document-methods/find-many.html#parameters) that become tools for an agent. These Filters _may_ be used in a search, if the agent selects them. See [Define tool-specific parameters](#define-tool-specific-parameters). |
-| Static Filters | Dict | Input parameter. Attribute-value pairs used to filter query results. Equivalent to [Astra DB Data API `find` filters](https://docs.datastax.com/en/astra-db-serverless/api-reference/document-methods/find-many.html#parameters). **Static Filters** are included with _every_ query. Use **Static Filters** without semantic search to perform a regular filter search. |
-| Number of Results | Int | Input parameter. The maximum number of documents to return. |
-| Semantic Search | Boolean | Input parameter. Whether to run a similarity search by generating a vector embedding from the chat input and following the **Semantic Search Instruction**. Default: `false`. If `true`, you must attach an [embedding model component](/components-embedding-models) or have vectorize pre-enabled on your collection. |
-| Use Astra DB Vectorize | Boolean | Input parameter. Whether to use the Astra DB vectorize feature for embedding generation when running a semantic search. Default: `false`. If `true`, you must have vectorize pre-enabled on your collection. |
-| Embedding Model | Embedding | Input parameter. A port to attach an embedding model component to generate a vector from input text for semantic search. This can be used when **Semantic Search** is `true`, with or without vectorize. Be sure to use a model that aligns with the dimensions of the embeddings already present in the collection. |
-| Semantic Search Instruction | String | Input parameter. The query to use for similarity search. Default: `"Find documents similar to the query."`. This instruction is used to guide the model in performing semantic search. |
-
-### Define tool-specific parameters
-
-:::tip
-**Tool Parameters** are small functions that you create within the **Astra DB Tool** component.
-They give the LLM pre-defined ways to interact with the data in your collection.
-
-Without these filters, the LLM has no concept of the data in your collection or which attributes are important.
-
-At runtime, the LLM can decide which filters are relevant to the current query.
-
-Filters in **Tool Parameters** aren't always applied.
-If you want to enforce filters for _every_ query, use the **Static Filters** parameter.
-You can use both **Tool Parameters** and **Static Filters** to set some required filters and some optional filters.
-:::
-
-In the **Astra DB Tool** component's **Tool Parameters** field, you can create filters to query documents in your collection.
-
-When used in **Tool Mode** with an agent, these filters tell the agent which document attributes are most important, which are required in searches, and which operators to use on certain attributes.
-The filters become available as parameters that the LLM can use when calling the tool, with a better understanding of each parameter provided by the **Description** field.
-
-In the **Tool Parameters** pane, click **Add a new row**, and then edit each cell in the row.
-For example, the following filter allows an LLM to filter by unique `customer_id` values:
-
- * Name: `customer_id`
- * Attribute Name: Leave empty if the attribute matches the field name in the database.
- * Description: `"The unique identifier of the customer to filter by"`.
- * Is Metadata: Select **False** unless the value is stored in the metadata field.
- * Is Mandatory: Set to **True** to make the filter required.
- * Is Timestamp: For this example, select **False** because the value is an ID, not a timestamp.
- * Operator: `$eq` to look for an exact match.
-
-The following fields are available for each row in the **Tool Parameters** pane:
-
-| Parameter | Description |
-|-----------|-------------|
-| Name | The name of the parameter that is exposed to the LLM. It can be the same as the underlying field name or a more descriptive label. The LLM uses this name, along with the description, to infer what value to provide during execution. |
-| Attribute Name | When the parameter name shown to the LLM differs from the actual field or property in the database, use this setting to map the user-facing name to the correct attribute. For example, to apply a range filter to the timestamp field, define two separate parameters, such as `start_date` and `end_date`, that both reference the same timestamp attribute. |
-| Description | Provides instructions to the LLM on how the parameter should be used. Clear and specific guidance helps the LLM provide valid input. For example, if a field such as `specialty` is stored in lowercase, the description should indicate that the input must be lowercase. |
-| Is Metadata | When loading data using LangChain or Langflow, additional attributes may be stored under a metadata object. If the target attribute is stored this way, enable this option. It adjusts the query by generating a filter in the format: `{"metadata.": ""}` |
-| Is Timestamp | For date or time-based filters, enable this option to automatically convert values to the timestamp format that the Astrapy client expects. This ensures compatibility with the underlying API without requiring manual formatting. |
-| Operator | Defines the filtering logic applied to the attribute. You can use any valid [Data API filter operator](https://docs.datastax.com/en/astra-db-serverless/api-reference/filter-operator-collections.html). For example, to filter a time range on the timestamp attribute, use two parameters: one with the `$gt` operator for "greater than", and another with the `$lt` operator for "less than". |
-
-## Astra DB Graph
-
-The **Astra DB Graph** component uses `AstraDBGraphVectorStore`, an instance of [LangChain graph vector store](https://python.langchain.com/api_reference/community/graph_vectorstores.html), for graph traversal and graph-based document retrieval in an Astra DB collection. It also supports writing to the vector store.
-For more information, see [Build a Graph RAG system with LangChain and GraphRetriever](https://docs.datastax.com/en/astra-db-serverless/tutorials/graph-rag.html).
-
-
-
-### Astra DB Graph parameters
-
-You can inspect a vector store component's parameters to learn more about the inputs it accepts, the features it supports, and how to configure it.
-
-
-
-
-
-For information about accepted values and functionality, see the [Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/index.html) or inspect [component code](/concepts-components#component-code).
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| token | Astra DB Application Token | Input parameter. An Astra application token with permission to access your vector database. Once the connection is verified, additional fields are populated with your existing databases and collections. If you want to create a database through this component, the application token must have Organization Administrator permissions. |
-| api_endpoint | API Endpoint | Input parameter. Your database's API endpoint. |
-| keyspace | Keyspace | Input parameter. The keyspace in your database that contains the collection specified in `collection_name`. Default: `default_keyspace`. |
-| collection_name | Collection | Input parameter. The name of the collection that you want to use with this flow. For write operations, if a matching collection doesn't exist, a new one is created. |
-| metadata_incoming_links_key | Metadata Incoming Links Key | Input parameter. The metadata key for the incoming links in the vector store. |
-| ingest_data | Ingest Data | Input parameter. Records to load into the vector store. Only relevant for writes. |
-| search_input | Search Query | Input parameter. Query string for similarity search. Only relevant for reads. |
-| cache_vector_store | Cache Vector Store | Input parameter. Whether to cache the vector store in Langflow memory for faster reads. Default: Enabled (`true`). |
-| embedding_model | Embedding Model | Input parameter. Attach an [embedding model component](/components-embedding-models) to generate embeddings. If the collection has a [vectorize integration](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html), don't attach an embedding model component. |
-| metric | Metric | Input parameter. The metrics to use for similarity search calculations, either `cosine` (default), `dot_product`, or `euclidean`. This is a collection setting. |
-| batch_size | Batch Size | Input parameter. Optional number of records to process in a single batch. |
-| bulk_insert_batch_concurrency | Bulk Insert Batch Concurrency | Input parameter. Optional concurrency level for bulk write operations. |
-| bulk_insert_overwrite_concurrency | Bulk Insert Overwrite Concurrency | Input parameter. Optional concurrency level for bulk write operations that allow upserts (overwriting existing records). |
-| bulk_delete_concurrency | Bulk Delete Concurrency | Input parameter. Optional concurrency level for bulk delete operations. |
-| setup_mode | Setup Mode | Input parameter. Configuration mode for setting up the vector store, either `Sync` (default) or `Off`. |
-| pre_delete_collection | Pre Delete Collection | Input parameter. Whether to delete the collection before creating a new one. Default: Disabled (`false`). |
-| metadata_indexing_include | Metadata Indexing Include | Input parameter. A list of metadata fields to index if you want to enable [selective indexing](https://docs.datastax.com/en/astra-db-serverless/api-reference/collection-indexes.html) *only* when creating a collection. Doesn't apply to existing collections. Only one `*_indexing_*` parameter can be set per collection. If all `*_indexing_*` parameters are unset, then all fields are indexed (default indexing). |
-| metadata_indexing_exclude | Metadata Indexing Exclude | Input parameter. A list of metadata fields to exclude from indexing if you want to enable selective indexing *only* when creating a collection. Doesn't apply to existing collections. Only one `*_indexing_*` parameter can be set per collection. If all `*_indexing_*` parameters are unset, then all fields are indexed (default indexing). |
-| collection_indexing_policy | Collection Indexing Policy | Input parameter. A dictionary to define the indexing policy if you want to enable selective indexing *only* when creating a collection. Doesn't apply to existing collections. Only one `*_indexing_*` parameter can be set per collection. If all `*_indexing_*` parameters are unset, then all fields are indexed (default indexing). The `collection_indexing_policy` dictionary is used when you need to set indexing on subfields or a complex indexing definition that isn't compatible as a list. |
-| number_of_results | Number of Results | Input parameter. Number of search results to return. Default: 4. Only relevant to reads. |
-| search_type | Search Type | Input parameter. Search type to use, either `Similarity`, `Similarity with score threshold`, or `MMR (Max Marginal Relevance)`, `Graph Traversal`, or `MMR (Max Marginal Relevance) Graph Traversal` (default). Only relevant to reads. |
-| search_score_threshold | Search Score Threshold | Input parameter. Minimum similarity score threshold for search results if the `search_type` is `Similarity with score threshold`. Default: 0. |
-| search_filter | Search Metadata Filter | Input parameter. Optional dictionary of metadata filters to apply in addition to vector search. |
-
## Graph RAG
The **Graph RAG** component uses an instance of [`GraphRetriever`](https://datastax.github.io/graph-rag/reference/langchain_graph_retriever/) for Graph RAG traversal enabling graph-based document retrieval in an Astra DB vector store.
For more information, see the [DataStax Graph RAG documentation](https://datastax.github.io/graph-rag/).
-:::info
-This component can be a Graph RAG extension for the [**Astra DB** vector store component](#astra-db).
-However, the [**Astra DB Graph** component](#astra-db-graph) includes both the vector store connection and Graph RAG functionality.
-:::
-
### Graph RAG parameters
You can inspect a vector store component's parameters to learn more about the inputs it accepts, the features it supports, and how to configure it.
@@ -375,7 +250,7 @@ You can inspect a vector store component's parameters to learn more about the in
| password | HCD Password | Input parameter. Password for authenticating to your HCD deployment. Required. |
| api_endpoint | HCD API Endpoint | Input parameter. Your deployment's HCD Data API endpoint, formatted as `http[s]://CLUSTER_HOST:GATEWAY_PORT` where `CLUSTER_HOST` is the IP address of any node in your cluster and `GATEWAY_PORT` is the port number for your API gateway service. For example, `http://192.0.2.250:8181`. Required. |
| ingest_data | Ingest Data | Input parameter. Records to load into the vector store. Only relevant for writes. |
-| search_input | Search Input | Input parameter. Query string for similarity search. Only relevant for reads. |
+| search_input | Search Input | Input parameter. Query string for similarity search. Only relevant to reads. |
| namespace | Namespace | Input parameter. The namespace in HCD that contains or will contain the collection specified in `collection_name`. Default: `default_namespace`. |
| ca_certificate | CA Certificate | Input parameter. Optional CA certificate for TLS connections to HCD. |
| metric | Metric | Input parameter. The metric to use for similarity search calculations, either `cosine`, `dot_product`, or `euclidean`. This is a collection setting. If calling an existing collection, leave unset to use the collection's metric. If a write operation creates a new collection, specify the desired similarity metric setting. |
@@ -413,7 +288,7 @@ Your agentic flows don't need an external database to store chat memory.
For more information, see [Memory management options](/memory).
:::
-For more information about using external chat memory in flows, see the [**Message History** component](/components-helpers#message-history).
+For more information about using external chat memory in flows, see the [**Message History** component](/message-history).
#### Astra DB Chat Memory parameters
@@ -427,9 +302,99 @@ For more information about using external chat memory in flows, see the [**Messa
| namespace | String | Input parameter. The optional namespace within Astra DB for the collection. |
| session_id | MessageText | Input parameter. The unique identifier for the chat session. Uses the current session ID if not provided. |
-### Assistants API
-The following DataStax components are used to create and manage Assistants API functions in a flow:
+## Legacy DataStax components
+
+import PartialLegacy from '@site/docs/_partial-legacy.mdx';
+
+<PartialLegacy />
+
+The following DataStax components are in legacy status:
+
+<details>
+<summary>Astra DB Tool</summary>
+
+Replace the **Astra DB Tool** component with the [**Astra DB** component](#astra-db).
+
+The **Astra DB Tool** component enables searching data in Astra DB collections, including hybrid search, vector search, and regular filter-based search.
+Specialized searches require that the collection is pre-configured with the required parameters.
+
+The component outputs a list of [`Data`](/data-types#data) objects containing the query results from Astra DB. Each `Data` object contains the document fields specified by the projection attributes. Results are limited by the `number_of_results` parameter and the upper limit of the Astra DB Data API, depending on the type of search.
+
+You can use the component to execute queries directly as isolated steps in a flow, or you can connect it as a [tool for an agent](/agents-tools) to allow the agent to query data from Astra DB collections as needed to respond to user queries.
+
+
+The values for **Collection Name**, **Astra DB Application Token**, and **Astra DB API Endpoint** are found in your Astra DB deployment. For more information, see the [Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/databases/create-database.html).
+
+| Name | Type | Description |
+|-------------------|--------|--------|
+| Tool Name | String | Input parameter. The name used to reference the tool in the agent's prompt. |
+| Tool Description | String | Input parameter. A brief description of the tool. This helps the model decide when to use it. |
+| Keyspace Name | String | Input parameter. The name of the keyspace in Astra DB. Default: `default_keyspace`. |
+| Collection Name | String | Input parameter. The name of the Astra DB collection to query. |
+| Token | SecretString | Input parameter. The authentication token for accessing Astra DB. |
+| API Endpoint | String | Input parameter. The Astra DB API endpoint. |
+| Projection Fields | String | Input parameter. Comma-separated list of attributes to return from matching documents. The default is the default projection, `*`, which returns all attributes except reserved fields like `$vector`. |
+| Tool Parameters | Dict | Input parameter. [Astra DB Data API `find` filters](https://docs.datastax.com/en/astra-db-serverless/api-reference/document-methods/find-many.html#parameters) that become tools for an agent. These filters _may_ be used in a search if the agent selects them. |
+| Static Filters | Dict | Input parameter. Attribute-value pairs used to filter query results. Equivalent to [Astra DB Data API `find` filters](https://docs.datastax.com/en/astra-db-serverless/api-reference/document-methods/find-many.html#parameters). **Static Filters** are included with _every_ query. Use **Static Filters** without semantic search to perform a regular filter search. |
+| Number of Results | Int | Input parameter. The maximum number of documents to return. |
+| Semantic Search | Boolean | Input parameter. Whether to run a similarity search by generating a vector embedding from the chat input and following the **Semantic Search Instruction**. Default: `false`. If `true`, you must attach an [embedding model component](/components-embedding-models) or have vectorize pre-enabled on your collection. |
+| Use Astra DB Vectorize | Boolean | Input parameter. Whether to use the Astra DB vectorize feature for embedding generation when running a semantic search. Default: `false`. If `true`, you must have vectorize pre-enabled on your collection. |
+| Embedding Model | Embedding | Input parameter. A port to attach an embedding model component to generate a vector from input text for semantic search. This can be used when **Semantic Search** is `true`, with or without vectorize. Be sure to use a model that aligns with the dimensions of the embeddings already present in the collection. |
+| Semantic Search Instruction | String | Input parameter. The query to use for similarity search. Default: `"Find documents similar to the query."`. This instruction is used to guide the model in performing semantic search. |
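+
+For example, a minimal sketch of a **Static Filters** value using standard Data API filter operators; the attribute names and values are hypothetical:
+
+```python
+# Included with every query the tool issues; attribute names are examples.
+static_filters = {
+    "status": "active",                  # shorthand for {"$eq": "active"}
+    "customer_id": {"$eq": "cust-123"},  # explicit equality operator
+}
+```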
+
+</details>
+
+<details>
+<summary>Astra DB Graph</summary>
+
+Replace the **Astra DB Graph** component with the [**Graph RAG** component](#graph-rag).
+
+The **Astra DB Graph** component uses `AstraDBGraphVectorStore`, an instance of [LangChain graph vector store](https://python.langchain.com/api_reference/community/graph_vectorstores.html), for graph traversal and graph-based document retrieval in an Astra DB collection. It also supports writing to the vector store.
+For more information, see [Build a Graph RAG system with LangChain and GraphRetriever](https://docs.datastax.com/en/astra-db-serverless/tutorials/graph-rag.html).
+
+
+You can inspect a vector store component's parameters to learn more about the inputs it accepts, the features it supports, and how to configure it.
+
+
+
+
+
+For information about accepted values and functionality, see the [Astra DB Serverless documentation](https://docs.datastax.com/en/astra-db-serverless/index.html) or inspect [component code](/concepts-components#component-code).
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| token | Astra DB Application Token | Input parameter. An Astra application token with permission to access your vector database. Once the connection is verified, additional fields are populated with your existing databases and collections. If you want to create a database through this component, the application token must have Organization Administrator permissions. |
+| api_endpoint | API Endpoint | Input parameter. Your database's API endpoint. |
+| keyspace | Keyspace | Input parameter. The keyspace in your database that contains the collection specified in `collection_name`. Default: `default_keyspace`. |
+| collection_name | Collection | Input parameter. The name of the collection that you want to use with this flow. For write operations, if a matching collection doesn't exist, a new one is created. |
+| metadata_incoming_links_key | Metadata Incoming Links Key | Input parameter. The metadata key for the incoming links in the vector store. |
+| ingest_data | Ingest Data | Input parameter. Records to load into the vector store. Only relevant for writes. |
+| search_input | Search Query | Input parameter. Query string for similarity search. Only relevant to reads. |
+| cache_vector_store | Cache Vector Store | Input parameter. Whether to cache the vector store in Langflow memory for faster reads. Default: Enabled (`true`). |
+| embedding_model | Embedding Model | Input parameter. Attach an [embedding model component](/components-embedding-models) to generate embeddings. If the collection has a [vectorize integration](https://docs.datastax.com/en/astra-db-serverless/databases/embedding-generation.html), don't attach an embedding model component. |
+| metric | Metric | Input parameter. The metric to use for similarity search calculations, either `cosine` (default), `dot_product`, or `euclidean`. This is a collection setting. |
+| batch_size | Batch Size | Input parameter. Optional number of records to process in a single batch. |
+| bulk_insert_batch_concurrency | Bulk Insert Batch Concurrency | Input parameter. Optional concurrency level for bulk write operations. |
+| bulk_insert_overwrite_concurrency | Bulk Insert Overwrite Concurrency | Input parameter. Optional concurrency level for bulk write operations that allow upserts (overwriting existing records). |
+| bulk_delete_concurrency | Bulk Delete Concurrency | Input parameter. Optional concurrency level for bulk delete operations. |
+| setup_mode | Setup Mode | Input parameter. Configuration mode for setting up the vector store, either `Sync` (default) or `Off`. |
+| pre_delete_collection | Pre Delete Collection | Input parameter. Whether to delete the collection before creating a new one. Default: Disabled (`false`). |
+| metadata_indexing_include | Metadata Indexing Include | Input parameter. A list of metadata fields to index if you want to enable [selective indexing](https://docs.datastax.com/en/astra-db-serverless/api-reference/collection-indexes.html) *only* when creating a collection. Doesn't apply to existing collections. Only one `*_indexing_*` parameter can be set per collection. If all `*_indexing_*` parameters are unset, then all fields are indexed (default indexing). |
+| metadata_indexing_exclude | Metadata Indexing Exclude | Input parameter. A list of metadata fields to exclude from indexing if you want to enable selective indexing *only* when creating a collection. Doesn't apply to existing collections. Only one `*_indexing_*` parameter can be set per collection. If all `*_indexing_*` parameters are unset, then all fields are indexed (default indexing). |
+| collection_indexing_policy | Collection Indexing Policy | Input parameter. A dictionary to define the indexing policy if you want to enable selective indexing *only* when creating a collection. Doesn't apply to existing collections. Only one `*_indexing_*` parameter can be set per collection. If all `*_indexing_*` parameters are unset, then all fields are indexed (default indexing). The `collection_indexing_policy` dictionary is used when you need to set indexing on subfields or a complex indexing definition that isn't compatible as a list. |
+| number_of_results | Number of Results | Input parameter. Number of search results to return. Default: 4. Only relevant to reads. |
+| search_type | Search Type | Input parameter. Search type to use: `Similarity`, `Similarity with score threshold`, `MMR (Max Marginal Relevance)`, `Graph Traversal`, or `MMR (Max Marginal Relevance) Graph Traversal` (default). Only relevant to reads. |
+| search_score_threshold | Search Score Threshold | Input parameter. Minimum similarity score threshold for search results if the `search_type` is `Similarity with score threshold`. Default: 0. |
+| search_filter | Search Metadata Filter | Input parameter. Optional dictionary of metadata filters to apply in addition to vector search. |
+
+</details>
+
+<details>
+<summary>Assistants API components</summary>
+
+The following DataStax components were used to create and manage Assistants API functions in a flow:
* **Astra Assistant Agent**
* **Create Assistant**
@@ -438,20 +403,21 @@ The following DataStax components are used to create and manage Assistants API f
* **List Assistants**
* **Run Assistant**
-## Environment variables
+These components are legacy and should be replaced with Langflow's native agent components.
-The following DataStax components are used to load and retrieve environment variables in a flow:
+
-* **Dotenv**
-* **Get Environment Variable**
+</details>
+
+<details>
+<summary>Environment variable components</summary>
-## Legacy DataStax components
+The following DataStax components were used to load and retrieve environment variables in a flow:
-import PartialLegacy from '@site/docs/_partial-legacy.mdx';
+* **Dotenv**: Loads environment variables from a `.env` file.
+* **Get Environment Variable**: Retrieves the value of an environment variable.
-
+These components are legacy. Use Langflow's built-in environment variable support or global variables instead.
-The following DataStax components are in legacy status:
+
+</details>
+
<details>
<summary>Astra Vectorize</summary>
diff --git a/docs/docs/Components/bundles-deepseek.mdx b/docs/docs/Components/bundles-deepseek.mdx
index 6922d4a0c965..73d9fd33c280 100644
--- a/docs/docs/Components/bundles-deepseek.mdx
+++ b/docs/docs/Components/bundles-deepseek.mdx
@@ -18,7 +18,7 @@ The **DeepSeek** component generates text using DeepSeek's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a DeepSeek model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a DeepSeek model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-docling.mdx b/docs/docs/Components/bundles-docling.mdx
index 7c635ed936d9..67842e73f930 100644
--- a/docs/docs/Components/bundles-docling.mdx
+++ b/docs/docs/Components/bundles-docling.mdx
@@ -7,8 +7,11 @@ import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Icon from "@site/src/components/icon";
import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+import PartialDockerDoclingDeps from '@site/docs/_partial-docker-docling-deps.mdx';
-Langflow integrates with [Docling](https://docling-project.github.io/docling/) through a bundle of components for parsing documents.
+[**Bundles**](/components-bundle-components) contain custom components that support specific third-party integrations with Langflow.
+
+Langflow integrates with [Docling](https://docling-project.github.io/docling/) through a bundle of components for parsing and chunking documents.
## Prerequisites
@@ -28,6 +31,8 @@ The Docling dependency is required to use the Docling components in Langflow.
For Langflow Desktop, add the Docling dependency to Langflow Desktop's `requirements.txt`.
For more information, see [Install custom dependencies](/install-custom-dependencies).
+
+<PartialDockerDoclingDeps />
## Use Docling components in a flow
:::tip
@@ -36,7 +41,7 @@ To learn more about content extraction with Docling, see the video tutorial [Doc
This example demonstrates how to use Docling components to split a PDF in a flow:
-1. Connect a **Docling** and an **Export DoclingDocument** component to a [**Split Text** component](/components-processing#split-text).
+1. Connect a **Docling** and an **Export DoclingDocument** component to a [**Split Text** component](/split-text).
The **Docling** component loads the document, and the **Export DoclingDocument** component converts the `DoclingDocument` into the format you select. This example converts the document to Markdown, with images represented as placeholders.
The **Split Text** component will split the Markdown into chunks for the vector database to store in the next part of the flow.
@@ -56,9 +61,9 @@ This example demonstrates how to use Docling components to split a PDF in a flow
The following sections describe the purpose and configuration options for each component in the **Docling** bundle.
-### Docling language model
+### Docling local model
-The **Docling** language model component ingest documents, and then uses Docling to process them by running the Docling models locally.
+The **Docling** component ingests documents, and then uses Docling to process them by running a local Docling model.
It outputs `files`, which are the processed files with `DoclingDocument` data.
@@ -74,7 +79,7 @@ For more information, see the [Docling IBM models project repository](https://gi
### Docling Serve
-The **Docling Serve** component runs Docling as an API service.
+The **Docling Serve** component ingests documents and processes them with a Docling API service rather than a local model.
It outputs `files`, which are the processed files with `DoclingDocument` data.
@@ -93,7 +98,7 @@ For more information, see the [Docling serve project repository](https://github.
### Chunk DoclingDocument
-The **Chunk DoclingDocument** component uses the `DoclingDocument` chunkers to split a document into chunks.
+The **Chunk DoclingDocument** component splits `DoclingDocument` objects into chunks.
It outputs the chunked documents as a [`DataFrame`](/data-types#dataframe).
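+
+For reference, the chunking step corresponds roughly to the following sketch, assuming the `docling` package and a local `example.pdf`; the Langflow component wraps roughly equivalent calls:
+
+```python
+# A sketch using Docling's HybridChunker directly; the path is a placeholder.
+from docling.chunking import HybridChunker
+from docling.document_converter import DocumentConverter
+
+doc = DocumentConverter().convert("example.pdf").document
+for chunk in HybridChunker().chunk(doc):
+    print(chunk.text[:80])
+```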
@@ -132,4 +137,4 @@ For more information, see the [Docling core project repository](https://github.c
## See also
-* [**File** component](/components-data#file)
+* [**Read File** component](/read-file)
diff --git a/docs/docs/Components/bundles-duckduckgo.mdx b/docs/docs/Components/bundles-duckduckgo.mdx
index b3b3a82719c7..53cdd9022d0f 100644
--- a/docs/docs/Components/bundles-duckduckgo.mdx
+++ b/docs/docs/Components/bundles-duckduckgo.mdx
@@ -28,5 +28,5 @@ It outputs a list of search results as a [`DataFrame`](/data-types#dataframe) wi
## See also
-* [**Web Search** component](/components-data#web-search)
+* [**Web Search** component](/web-search)
* [**SearchApi** bundle](/bundles-searchapi)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-glean.mdx b/docs/docs/Components/bundles-glean.mdx
index a5fe2879a56c..8e1f66f06135 100644
--- a/docs/docs/Components/bundles-glean.mdx
+++ b/docs/docs/Components/bundles-glean.mdx
@@ -30,4 +30,4 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
## See also
-* [**Web Search** component](/components-data#web-search)
\ No newline at end of file
+* [**Web Search** component](/web-search)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-google.mdx b/docs/docs/Components/bundles-google.mdx
index 1f1e60163bb3..460458d69507 100644
--- a/docs/docs/Components/bundles-google.mdx
+++ b/docs/docs/Components/bundles-google.mdx
@@ -144,8 +144,7 @@ Langflow includes multiple components that support Google Search, such as the fo
* [**Apify Actors** component](/bundles-apify)
* [**SearchApi** component](/bundles-searchapi)
* [**Serper Google Search API** component](/bundles-serper)
-* [**News Search** component](/components-data#news-search)
-* [**Web Search** component](/components-data#web-search)
+* [**Web Search** component](/web-search)
## Google Vertex AI
@@ -182,7 +181,7 @@ As an alternative, you can use [Composio components](/bundles-composio) to conne
This component loads documents from Google Drive using [Service Account JSON](https://developers.google.com/identity/protocols/oauth2/service-account) credentials and document ID filters.
-While there is no direct replacement, consider using the [**API Request** component](/components-data#api-request) to call the Google Drive API.
+While there is no direct replacement, consider using the [**API Request** component](/api-request) to call the Google Drive API.
@@ -191,7 +190,7 @@ While there is no direct replacement, consider using the [**API Request** compon
This component searches Google Drive using [Service Account JSON](https://developers.google.com/identity/protocols/oauth2/service-account) credentials and various query strings and filters.
-While there is no direct replacement, consider using the [**API Request** component](/components-data#api-request) to call the Google Drive API.
+While there is no direct replacement, consider using the [**API Request** component](/api-request) to call the Google Drive API.
diff --git a/docs/docs/Components/bundles-groq.mdx b/docs/docs/Components/bundles-groq.mdx
index f82e590620ee..28a6fbdf2449 100644
--- a/docs/docs/Components/bundles-groq.mdx
+++ b/docs/docs/Components/bundles-groq.mdx
@@ -18,7 +18,7 @@ This component generates text using Groq's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`ChatGroq`](https://docs.langchain.com/oss/python/integrations/chat/groq) configured according to the component's parameters.
-Use the **Language Model** output when you want to use a Groq model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Groq model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
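+
+For reference, that output is roughly equivalent to the following sketch; the model name and key are placeholders:
+
+```python
+# A minimal sketch of the Language Model output, assuming langchain-groq.
+from langchain_groq import ChatGroq
+
+llm = ChatGroq(
+    model="llama-3.1-8b-instant",  # placeholder model name
+    api_key="GROQ_API_KEY",        # placeholder credential
+    temperature=0.1,
+)
+print(llm.invoke("Hello!").content)
+```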
diff --git a/docs/docs/Components/bundles-huggingface.mdx b/docs/docs/Components/bundles-huggingface.mdx
index e038bc440f9f..1a97f4637fb3 100644
--- a/docs/docs/Components/bundles-huggingface.mdx
+++ b/docs/docs/Components/bundles-huggingface.mdx
@@ -20,7 +20,7 @@ Authentication is required.
This component can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`ChatHuggingFace`](https://docs.langchain.com/oss/python/integrations/chat/huggingface) configured according to the component's parameters.
-Use the **Language Model** output when you want to use a Hugging Face model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Hugging Face model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-ibm.mdx b/docs/docs/Components/bundles-ibm.mdx
index 2042b31cc35c..28957d93f427 100644
--- a/docs/docs/Components/bundles-ibm.mdx
+++ b/docs/docs/Components/bundles-ibm.mdx
@@ -45,7 +45,7 @@ You can use the **IBM watsonx.ai** component anywhere you need a language model
The **IBM watsonx.ai** component can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
The `LanguageModel` output from the **IBM watsonx.ai** component is an instance of [`ChatWatsonx`](https://docs.langchain.com/oss/python/integrations/chat/ibm_watsonx) configured according to the [component's parameters](#ibm-watsonxai-parameters).
diff --git a/docs/docs/Components/bundles-langchain.mdx b/docs/docs/Components/bundles-langchain.mdx
index 588609abbfc3..596c5e3dee28 100644
--- a/docs/docs/Components/bundles-langchain.mdx
+++ b/docs/docs/Components/bundles-langchain.mdx
@@ -106,7 +106,7 @@ For more information, see the [LangChain SQL agent documentation](https://docs.l
The LangChain **SQL Database** component establishes a connection to an SQL database.
-This component is different from the [**SQL Database** core component](/components-data#sql-database), which executes SQL queries on SQLAlchemy-compatible databases.
+This component is different from the [**SQL Database** core component](/sql-database), which executes SQL queries on SQLAlchemy-compatible databases.
## Text Splitters
@@ -183,4 +183,4 @@ The following LangChain components are in legacy status:
* **Vector Store Info/Agent**
* **VectorStoreRouterAgent**
-To replace these components, consider other components in the **LangChain** bundle or general Langflow components, such as the [**Agent** component](/components-agents) or the [**SQL Database** component](/components-data#sql-database).
\ No newline at end of file
+To replace these components, consider other components in the **LangChain** bundle or general Langflow components, such as the [**Agent** component](/components-agents) or the [**SQL Database** component](/sql-database).
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-lmstudio.mdx b/docs/docs/Components/bundles-lmstudio.mdx
index a027f4be5f48..0e5fe18c99cf 100644
--- a/docs/docs/Components/bundles-lmstudio.mdx
+++ b/docs/docs/Components/bundles-lmstudio.mdx
@@ -17,7 +17,7 @@ The **LM Studio** component generates text using LM Studio's local language mode
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an LM Studio model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an LM Studio model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-maritalk.mdx b/docs/docs/Components/bundles-maritalk.mdx
index f4d60ec4cbcb..e01499dc1912 100644
--- a/docs/docs/Components/bundles-maritalk.mdx
+++ b/docs/docs/Components/bundles-maritalk.mdx
@@ -18,7 +18,7 @@ The **MariTalk** component generates text using MariTalk LLMs.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a MariTalk model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a MariTalk model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-mem0.mdx b/docs/docs/Components/bundles-mem0.mdx
index 7ed3b994594f..3e946b0771d8 100644
--- a/docs/docs/Components/bundles-mem0.mdx
+++ b/docs/docs/Components/bundles-mem0.mdx
@@ -34,6 +34,6 @@ The **Mem0 Chat Memory** component retrieves and stores chat messages using Mem0
The **Mem0 Chat Memory** component can output either **Mem0 Memory** ([`Memory`](/data-types#memory)) or **Search Results** ([`Data`](/data-types#data)).
You can select the output type near the component's output port.
-Use **Mem0 Chat Memory** for memory storage and retrieval operations with the [**Message History** component](/components-helpers#message-history).
+Use **Mem0 Chat Memory** for memory storage and retrieval operations with the [**Message History** component](/message-history).
Use the **Search Results** output to retrieve specific memories based on a search query.
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-mistralai.mdx b/docs/docs/Components/bundles-mistralai.mdx
index b6fcd9365c76..2a829649c9b8 100644
--- a/docs/docs/Components/bundles-mistralai.mdx
+++ b/docs/docs/Components/bundles-mistralai.mdx
@@ -18,7 +18,7 @@ The **MistralAI** component generates text using MistralAI LLMs.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a MistralAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a MistralAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-novita.mdx b/docs/docs/Components/bundles-novita.mdx
index fcd43a40877c..8ffe5e79e428 100644
--- a/docs/docs/Components/bundles-novita.mdx
+++ b/docs/docs/Components/bundles-novita.mdx
@@ -16,7 +16,7 @@ This component generates text using [Novita's language models](https://novita.ai
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Novita model as the LLM for another LLM-driven component, such as a **Language Model** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Novita model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-nvidia.mdx b/docs/docs/Components/bundles-nvidia.mdx
index 54a4b037e501..682f2af25c2e 100644
--- a/docs/docs/Components/bundles-nvidia.mdx
+++ b/docs/docs/Components/bundles-nvidia.mdx
@@ -71,7 +71,7 @@ For more information about using embedding model components in flows, see [Embed
:::tip Tokenization considerations
Be aware of your embedding model's chunk size limit.
Tokenization errors can occur if your text chunks are too large.
-For more information, see [Tokenization errors due to chunk size](/components-processing#chunk-size).
+For more information, see [Tokenization errors due to chunk size](/split-text#chunk-size).
:::
## NVIDIA Rerank
@@ -153,7 +153,7 @@ For more information, see the [NV-Ingest documentation](https://nvidia.github.io
| extract_infographics | Extract Infographics | Extract infographics from the document. Default: `false`. |
| text_depth | Text Depth | The level at which text is extracted. Options: `document`, `page`, `block`, `line`, `span`. Default: `page`. |
| split_text | Split Text | Split text into smaller chunks. Default: `true`. |
-| chunk_size | Chunk Size | The number of tokens per chunk. Default: `500`. Make sure the chunk size is compatible with your embedding model. For more information, see [Tokenization errors due to chunk size](/components-processing#chunk-size). |
+| chunk_size | Chunk Size | The number of tokens per chunk. Default: `500`. Make sure the chunk size is compatible with your embedding model. For more information, see [Tokenization errors due to chunk size](/split-text#chunk-size). |
| chunk_overlap | Chunk Overlap | Number of tokens to overlap from previous chunk. Default: `150`. |
| filter_images | Filter Images | Filter images (see advanced options for filtering criteria). Default: `false`. |
| min_image_size | Minimum Image Size Filter | Minimum image width/length in pixels. Default: `128`. |
diff --git a/docs/docs/Components/bundles-ollama.mdx b/docs/docs/Components/bundles-ollama.mdx
index 65c016b57648..8ffff1fa034e 100644
--- a/docs/docs/Components/bundles-ollama.mdx
+++ b/docs/docs/Components/bundles-ollama.mdx
@@ -32,7 +32,7 @@ To use the **Ollama** component in a flow, connect Langflow to your locally runn
5. Connect the **Ollama** component to other components in the flow, depending on how you want to use the model.
- Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. For more information, see [Language model components](/components-models).
+ Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models).
In the following example, the flow uses `LanguageModel` output to use an Ollama model as the LLM for an [**Agent** component](/components-agents).
diff --git a/docs/docs/Components/bundles-openai.mdx b/docs/docs/Components/bundles-openai.mdx
index 6e98e92df8d6..bbf735bb8f31 100644
--- a/docs/docs/Components/bundles-openai.mdx
+++ b/docs/docs/Components/bundles-openai.mdx
@@ -20,7 +20,7 @@ It provides access to the same OpenAI models that are available in the core **La
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a specific OpenAI model configuration as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a specific OpenAI model configuration as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-openrouter.mdx b/docs/docs/Components/bundles-openrouter.mdx
index e35c1c52782f..ae67e0d0d165 100644
--- a/docs/docs/Components/bundles-openrouter.mdx
+++ b/docs/docs/Components/bundles-openrouter.mdx
@@ -18,7 +18,7 @@ This component generates text using OpenRouter's unified API for multiple AI mod
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an OpenRouter model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an OpenRouter model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-perplexity.mdx b/docs/docs/Components/bundles-perplexity.mdx
index c46fd1230f1f..309c02d35eca 100644
--- a/docs/docs/Components/bundles-perplexity.mdx
+++ b/docs/docs/Components/bundles-perplexity.mdx
@@ -18,7 +18,7 @@ This component generates text using Perplexity's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-redis.mdx b/docs/docs/Components/bundles-redis.mdx
index 880a6a384a41..e8cd279a0f48 100644
--- a/docs/docs/Components/bundles-redis.mdx
+++ b/docs/docs/Components/bundles-redis.mdx
@@ -16,7 +16,7 @@ The **Redis Chat Memory** component retrieves and stores chat messages using Red
Chat memories are passed between memory storage components as the [`Memory`](/data-types#memory) data type.
-For more information about using external chat memory in flows, see the [**Message History** component](/components-helpers#message-history).
+For more information about using external chat memory in flows, see the [**Message History** component](/message-history).
### Redis Chat Memory parameters
diff --git a/docs/docs/Components/bundles-sambanova.mdx b/docs/docs/Components/bundles-sambanova.mdx
index 3bfc695e072b..37baeaca6bd9 100644
--- a/docs/docs/Components/bundles-sambanova.mdx
+++ b/docs/docs/Components/bundles-sambanova.mdx
@@ -18,7 +18,7 @@ This component generates text using SambaNova LLMs.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a SambaNova model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a SambaNova model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-searchapi.mdx b/docs/docs/Components/bundles-searchapi.mdx
index 6ddf1981d13c..1984f1c4baf9 100644
--- a/docs/docs/Components/bundles-searchapi.mdx
+++ b/docs/docs/Components/bundles-searchapi.mdx
@@ -33,7 +33,7 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
## See also
-* [**Web Search** component](/components-data#web-search)
+* [**Web Search** component](/web-search)
* [**Google** bundle](/bundles-google)
* [**Bing** bundle](/bundles-bing)
* [**DuckDuckGo** bundle](/bundles-duckduckgo)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-serper.mdx b/docs/docs/Components/bundles-serper.mdx
index 7a3d343ca636..b3899ecc93f0 100644
--- a/docs/docs/Components/bundles-serper.mdx
+++ b/docs/docs/Components/bundles-serper.mdx
@@ -27,7 +27,7 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe).
## See also
-* [**Web Search** component](/components-data#web-search)
+* [**Web Search** component](/web-search)
* [**Google** bundle](/bundles-google)
* [**Bing** bundle](/bundles-bing)
* [**DuckDuckGo** bundle](/bundles-duckduckgo)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-vertexai.mdx b/docs/docs/Components/bundles-vertexai.mdx
index deda5a04ea5b..70cd4430aa87 100644
--- a/docs/docs/Components/bundles-vertexai.mdx
+++ b/docs/docs/Components/bundles-vertexai.mdx
@@ -20,7 +20,7 @@ The **Vertex AI** component generates text using Google Vertex AI models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Vertex AI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Vertex AI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-wikipedia.mdx b/docs/docs/Components/bundles-wikipedia.mdx
index 7a0b04828607..dd6a56f5a5fb 100644
--- a/docs/docs/Components/bundles-wikipedia.mdx
+++ b/docs/docs/Components/bundles-wikipedia.mdx
@@ -40,4 +40,4 @@ This component searches and retrieves information from Wikipedia with the [WikiM
## See also
-* [**API Request** component](/components-data#api-request)
\ No newline at end of file
+* [**API Request** component](/api-request)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-xai.mdx b/docs/docs/Components/bundles-xai.mdx
index 896db788a093..01bc99026ad3 100644
--- a/docs/docs/Components/bundles-xai.mdx
+++ b/docs/docs/Components/bundles-xai.mdx
@@ -18,7 +18,7 @@ The **xAI** component generates text using xAI models like [Grok](https://x.ai/g
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an xAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an xAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/calculator.mdx b/docs/docs/Components/calculator.mdx
new file mode 100644
index 000000000000..fe77b152bb5e
--- /dev/null
+++ b/docs/docs/Components/calculator.mdx
@@ -0,0 +1,20 @@
+---
+title: Calculator
+slug: /calculator
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The **Calculator** component performs basic arithmetic operations on mathematical expressions.
+It supports addition, subtraction, multiplication, division, and exponentiation.
+
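+Following standard operator precedence, the example expression `4*4*(33/22)+12-20` evaluates to `16`: `4*4` is `16`, `33/22` is `1.5`, `16*1.5` is `24`, and `24+12-20` is `16`.
+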
+For an example of using this component in a flow, see the [**Python Interpreter** component](/python-interpreter).
+
+## Calculator parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| expression | String | Input parameter. The arithmetic expression to evaluate, such as `4*4*(33/22)+12-20`. |
+| result | Data | Output parameter. The calculation result as a [`Data` object](/data-types) containing the evaluated expression. |
\ No newline at end of file
diff --git a/docs/docs/Components/components-io.mdx b/docs/docs/Components/chat-input-and-output.mdx
similarity index 73%
rename from docs/docs/Components/components-io.mdx
rename to docs/docs/Components/chat-input-and-output.mdx
index 87afab3348e3..60d763c412da 100644
--- a/docs/docs/Components/components-io.mdx
+++ b/docs/docs/Components/chat-input-and-output.mdx
@@ -1,21 +1,11 @@
---
-title: Input / Output
-slug: /components-io
+title: Chat Input and Output
+slug: /chat-input-and-output
---
import Icon from "@site/src/components/icon";
import PartialParams from '@site/docs/_partial-hidden-params.mdx';
-Input and output components define where data enters and exits your flow, but they don't have identical functionality.
-
-Specifically, **Chat Input and Output** components are designed to facilitate conversational interactions where messages are exchanged in a cumulative dialogue.
-The data handled by these components includes the message text plus additional metadata like senders, session IDs, and timestamps.
-
-In contrast, **Text Input and Output** components are designed for simple string input and output that doesn't require the additional context and metadata associated with chat messages.
-The data handled by these components is pared down to basic text strings.
-
-## Chat Input and Output {#chat-io}
-
:::warning
**Chat Input and Output** components are required to chat with your flow in the **Playground**.
For more information, see [Test flows in the Playground](/concepts-playground).
@@ -23,14 +13,14 @@ For more information, see [Test flows in the Playground](/concepts-playground).
**Chat Input and Output** components are designed to handle conversational interactions in Langflow.
-### Chat Input
+## Chat Input
The **Chat Input** component accepts text and file input, such as a chat message or a file.
This data is passed to other components as [`Message` data](/data-types) containing the provided input as well as associated chat metadata, such as the sender, session ID, timestamp, and file attachments.
Initial input should _not_ be provided as a complete `Message` object because the **Chat Input** component constructs the `Message` object that is then passed to other components in the flow.
-#### Chat Input parameters
+### Chat Input parameters
@@ -71,7 +61,7 @@ message = await Message.create(
-### Chat Output
+## Chat Output
The **Chat Output** component ingests `Message`, `Data`, or `DataFrame` data from other components, transforms it into `Message` data if needed, and then emits the final output as a chat message.
For information about these data types, see [Use Langflow data types](/data-types).
@@ -83,7 +73,7 @@ When using the Langflow API, the API response includes the **Chat Output** `Mess
Langflow API responses can be extremely verbose, so your applications must include code to extract relevant data from the response to return to the user.
For an example, see the [Langflow quickstart](/get-started-quickstart).
-#### Chat Output parameters
+### Chat Output parameters
@@ -102,7 +92,7 @@ For an example, see the [Langflow quickstart](/get-started-quickstart).
For information about the resulting `Message` object, including input parameters that are directly mapped to `Message` attributes, see [`Message` data](/data-types#message).
-### Use Chat Input and Output components in a flow
+## Use Chat Input and Output components in a flow
To use the **Chat Input** and **Chat Output** components in a flow, connect them to components that accept or emit [`Message` data](/data-types#message).
@@ -153,34 +143,4 @@ curl --request POST \
}'
```
-For more information, see [Trigger flows with the Langflow API](/concepts-publish).
-
-## Text Input and Output {#text-io}
-
-:::warning
-**Text Input and Output** components aren't supported in the **Playground**.
-Because the data isn't formatted as a chat message, the data doesn't appear in the **Playground**, and you can't chat with your flow in the **Playground**.
-
-If you want to chat with a flow in the **Playground**, you must use the [**Chat Input and Output** components](#chat-io).
-:::
-
-**Text Input and Output** components are designed for flows that ingest or emit simple text strings.
-These components don't support full conversational interactions.
-
-Passing chat-like metadata to a **Text Input and Output** component doesn't change the component's behavior; the result is still a simple text string.
-
-### Text Input
-
-The **Text Input** component accepts a text string input that is passed to other components as [`Message` data](/data-types) containing only the provided input text string in the `text` attribute.
-
-It accepts only **Text** (`input_value`), which is the text supplied as input to the component.
-This can be entered directly into the component or passed as `Message` data from other components.
-
-Initial input _shouldn't_ be provided as a complete `Message` object because the **Text Input** component constructs the `Message` object that is then passed to other components in the flow.
-
-### Text Output
-
-The **Text Output** component ingests [`Message` data](/data-types#message) from other components, emitting only the `text` attribute in a simplified `Message` object.
-
-It accepts only **Text** (`input_value`), which is the text to be ingested and output as a string.
-This can be entered directly into the component or passed as `Message` data from other components.
\ No newline at end of file
+For more information, see [Trigger flows with the Langflow API](/concepts-publish).
\ No newline at end of file
diff --git a/docs/docs/Components/components-agents.mdx b/docs/docs/Components/components-agents.mdx
index bb3c850433f8..50897adb59b9 100644
--- a/docs/docs/Components/components-agents.mdx
+++ b/docs/docs/Components/components-agents.mdx
@@ -5,14 +5,14 @@ slug: /components-agents
import PartialAgentsWork from '@site/docs/_partial-agents-work.mdx';
-Langflow's **Agent** and **MCP Tools** components are critical for building agent flows.
-These components define the behavior and capabilities of AI agents in your flows.
+Langflow's **Agent** component is critical for building agent flows.
+This component defines the behavior and capabilities of AI agents in your flows.
## Examples of agent flows
-For examples of flows using the **Agent** and **MCP Tools** components, see the following:
+For examples of flows using the **Agent** component, see the following:
* [Langflow quickstart](/get-started-quickstart): Start with the **Simple Agent** template, modify its tools, and then learn how to use an agent flow in an application.
@@ -21,7 +21,7 @@ For examples of flows using the **Agent** and **MCP Tools** components, see the
* [Use an agent as a tool](/agents-tools#use-an-agent-as-a-tool): Create a multi-agent flow.
-* [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server): Use the **Agent** and **MCP Tools** components to implement the Model Context Protocol (MCP) in your flows.
+* [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server): Use the **Agent** and [**MCP Tools** component](/mcp-tools) to implement the Model Context Protocol (MCP) in your flows.
## Agent component {#agent-component}
@@ -29,30 +29,14 @@ The **Agent** component is the primary agent actor in your agent flows.
This component uses an LLM integration to respond to input, such as a chat message or file upload.
The agent can use the tools already available in the base LLM as well as additional tools that you connect to the **Agent** component's **Tools** port.
-You can connect any Langflow component as a tool, including other **Agent** components and MCP servers through the [**MCP Tools** component](#mcp-connection).
+You can connect any Langflow component as a tool, including other **Agent** components and MCP servers through the [**MCP Tools** component](/mcp-tools).
For more information about using this component, see [Use Langflow agents](/agents).
-## MCP Tools component {#mcp-connection}
-
-The **MCP Tools** component connects to a Model Context Protocol (MCP) server and exposes the MCP server's functions as tools for Langflow agents to use to respond to input.
-
-In addition to publicly available MCP servers and your own custom-built MCP servers, you can connect Langflow MCP servers, which allow your agent to use your Langflow flows as tools.
-To do this, use the **MCP Tools** component's [SSE mode](/mcp-client#mcp-sse-mode) to connect to your Langflow project's MCP server at the `/api/v1/mcp/sse` endpoint.
-
-For more information, see [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server).
-
-<details>
-<summary>Earlier versions of the MCP Tools component</summary>
-
-* In Langflow version 1.5, the **MCP Connection** component was renamed to the **MCP Tools** component.
-* In Langflow version 1.3, the **MCP Tools (stdio)** and **MCP Tools (SSE)** components were removed and replaced by the unified **MCP Connection** component, which was later renamed to **MCP Tools**.
-
-</details>
-
## See also
-* [**Message History** component](/components-helpers#message-history)
+* [**MCP Tools** component](/mcp-tools)
+* [**Message History** component](/message-history)
* [Store chat memory](/memory#store-chat-memory)
* [Bundles](/components-bundle-components)
* [Legacy LangChain components](/bundles-langchain#legacy-langchain-components)
\ No newline at end of file
diff --git a/docs/docs/Components/components-bundles.mdx b/docs/docs/Components/components-bundles.mdx
index a394ec839ec6..5ff036e9b5bb 100644
--- a/docs/docs/Components/components-bundles.mdx
+++ b/docs/docs/Components/components-bundles.mdx
@@ -224,7 +224,7 @@ The following parameters are available in **Retrieve** mode:
<summary>Zep Chat Memory</summary>
The **Zep Chat Memory** component is a legacy component.
-Replace this component with the [**Message History** component](/components-helpers#message-history).
+Replace this component with the [**Message History** component](/message-history).
This component creates a `ZepChatMessageHistory` instance, enabling storage and retrieval of chat messages using Zep, a memory server for LLMs.
diff --git a/docs/docs/Components/components-custom-components.mdx b/docs/docs/Components/components-custom-components.mdx
index d58e30547a7b..25ac6ab09204 100644
--- a/docs/docs/Components/components-custom-components.mdx
+++ b/docs/docs/Components/components-custom-components.mdx
@@ -6,128 +6,300 @@ slug: /components-custom-components
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+import PartialBasicComponentStructure from '../_partial-basic-component-structure.mdx';
-Custom components extend Langflow's functionality through Python classes that inherit from `Component`. This enables integration of new features, data manipulation, external services, and specialized tools.
+Create your own custom components to add any functionality you need to Langflow, from API integrations to data processing.
-In Langflow's node-based environment, each node is a "component" that performs discrete functions. Custom components are Python classes which define:
+In Langflow's node-based environment, each node is a "component" that performs discrete functions.
+Custom components in Langflow are built upon:
-* **Inputs** — Data or parameters your component requires.
-* **Outputs** — Data your component provides to downstream nodes.
-* **Logic** — How you process inputs to produce outputs.
+* The Python class that inherits from `Component`.
+* Class-level attributes that identify and describe the component.
+* [Input and output lists](#inputs-and-outputs) that determine data flow.
+* Methods that define the component's behavior and logic.
+* Internal variables for [error handling and logging](#error-handling-and-logging).
-The benefits of creating custom components include unlimited extensibility, reusability, automatic field generation in the visual editor based on inputs, and type-safe connections between nodes.
+Use the [Custom component quickstart](#quickstart) to add an example component to Langflow, and then use the reference guide that follows for more advanced component customization.
-Create custom components for performing specialized tasks, calling APIs, or adding advanced logic.
+## Custom component quickstart {#quickstart}
-Custom components in Langflow are built upon:
+Build a custom `DataFrameProcessor` component by creating a Python file, saving it in the correct folder, adding an `__init__.py` file, and loading it into Langflow.
-* The Python class that inherits from `Component`.
-* Class-level attributes that identify and describe the component.
-* Input and output lists that determine data flow.
-* Internal variables for logging and advanced logic.
+### Create a Python file
-## Class-level attributes
+
+<PartialBasicComponentStructure />
-Define these attributes to control a custom component's appearance and behavior:
+### Save the custom component {#custom-component-path}
-```python
-class MyCsvReader(Component):
- display_name = "CSV Reader"
- description = "Reads CSV files"
- icon = "file-text"
- name = "CSVReader"
- documentation = "http://docs.example.com/csv_reader"
+Save the custom component in a directory where Langflow will discover and load it.
+
+By default, Langflow looks for custom components in the `src/lfx/src/lfx/components` directory.
+
+Components must be organized in a specific directory structure to be loaded and displayed in the visual editor: place each component inside a category folder, not directly in the base directory.
+
+The category folder name determines where the component appears in the Langflow **Core components** menu.
+For example, to add the example `DataFrameProcessor` component to the **Data** category, place it in the `data` subfolder:
+
+```
+src/lfx/src/lfx/components/
+ └── data/ # Category folder (determines menu location)
+ ├── __init__.py # Required - makes it a Python package
+ └── dataframe_processor.py # Your custom component file
```
-* `display_name`: A user-friendly label shown in the visual editor.
-* `description`: A brief summary shown in tooltips and printed below the component name when added to a flow.
-* `icon`: A decorative icon from Langflow's icon library, printed next to the name.
+If you create custom components in a different location by setting the `LANGFLOW_COMPONENTS_PATH` [environment variable](/environment-variables), the same directory structure requirements apply:
- Langflow uses [Lucide](https://lucide.dev/icons) for icons. To assign an icon to your component, set the icon attribute to the name of a Lucide icon as a string, such as `icon = "file-text"`. Langflow renders icons from the Lucide library automatically.
+```
+/your/custom/components/path/ # Base directory set by LANGFLOW_COMPONENTS_PATH
+ └── category_name/
+ ├── __init__.py
+ └── custom_component.py
+```
+
+You can have multiple category folders to organize components into different categories:
+```
+/app/custom_components/
+ ├── data/
+ │ ├── __init__.py
+ │ └── dataframe_processor.py
+ └── tools/
+ ├── __init__.py
+ └── custom_tool.py
+```
-* `name`: A unique internal identifier, typically the same name as the folder containing your component code.
-* `documentation`: An optional link to external documentation, such as API or product documentation.
+### Create the `__init__.py` file
+
+Each category directory **must** contain an `__init__.py` file for Langflow to properly recognize and load the components.
+This is a Python package requirement that ensures the directory is treated as a module.
-### Structure of a custom component
+To include the `DataFrameProcessor` component, create a file named `__init__.py` in your component's directory with the following content.
-A Langflow custom component is more than a class with inputs and outputs. It includes an internal structure with optional lifecycle steps, output generation, front-end interaction, and logic organization.
+```python
+from .dataframe_processor import DataFrameProcessor
-A basic component:
+__all__ = ["DataFrameProcessor"]
+```
-* Inherits from `langflow.custom.Component`.
-* Declares metadata like `display_name`, `description`, `icon`, and more.
-* Defines `inputs` and `outputs` lists.
-* Implements methods matching output specifications.
+<details>
+<summary>Lazy load the DataFrameProcessor component</summary>
-A minimal custom component skeleton contains the following:
+Alternatively, you can load your component **lazily**, which is better for performance but a little more complex.
```python
-from langflow.custom import Component
-from langflow.template import Output
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Any
-class MyComponent(Component):
- display_name = "My Component"
- description = "A short summary."
- icon = "sparkles"
- name = "MyComponent"
+from lfx.components._importing import import_mod
- inputs = []
- outputs = []
+if TYPE_CHECKING:
+ from lfx.components.data.dataframe_processor import DataFrameProcessor
+
+_dynamic_imports = {
+ "DataFrameProcessor": "dataframe_processor",
+}
+
+__all__ = [
+ "DataFrameProcessor",
+]
+
+def __getattr__(attr_name: str) -> Any:
+ """Lazily import data components on attribute access."""
+ if attr_name not in _dynamic_imports:
+ msg = f"module '{__name__}' has no attribute '{attr_name}'"
+ raise AttributeError(msg)
+ try:
+ result = import_mod(attr_name, _dynamic_imports[attr_name], __spec__.parent)
+ except (ModuleNotFoundError, ImportError, AttributeError) as e:
+ msg = f"Could not import '{attr_name}' from '{__name__}': {e}"
+ raise AttributeError(msg) from e
+ globals()[attr_name] = result
+ return result
+
+def __dir__() -> list[str]:
+ return list(__all__)
+```
+
+For an additional example of lazy loading, see the [FAISS component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/FAISS/__init__.py).
+
+</details>
+
+### Load your component
+
+Rebuild Langflow so that it discovers and loads your new component.
+
+1. To rebuild the backend and frontend, run `make install_frontend && make build_frontend && make install_backend && uv run langflow run --port 7860`.
+
+2. Refresh the frontend application.
+Your new `DataFrameProcessor` component is available in the **Core components** menu under the **Data** category in the visual editor.
+
+### Docker deployment
+
+When running Langflow in Docker, mount your custom components directory and set the `LANGFLOW_COMPONENTS_PATH` environment variable in the `docker run` command to point to the custom components directory.
+
+```bash
+docker run -d \
+ --name langflow \
+ -p 7860:7860 \
+ -v ./custom_components:/app/custom_components \
+ -e LANGFLOW_COMPONENTS_PATH=/app/custom_components \
+ langflowai/langflow:latest
+```
+
+Create the same custom components directory structure as the example in [Save the custom component](#custom-component-path).
- def some_output_method(self):
- return ...
```
-### Internal Lifecycle and Execution Flow
+/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
+ └── data/
+ ├── __init__.py
+ └── dataframe_processor.py
+```
+
+## How components execute
Langflow's engine manages:
-* **Instantiation**: A component is created and internal structures are initialized.
-* **Assigning Inputs**: Values from the visual editor or connections are assigned to component fields.
-* **Validation and Setup**: Optional hooks like `_pre_run_setup`.
-* **Outputs Generation**: `run()` or `build_results()` triggers output methods.
+1. **Instantiation**: A component is created and internal structures are initialized.
+2. **Assigning Inputs**: Values from the visual editor or connections are assigned to component fields.
+3. **Validation and Setup**: Optional hooks like `_pre_run_setup`.
+4. **Outputs Generation**: `run()` or `build_results()` triggers output methods.
-**Optional Hooks**:
+You can customize execution by overriding these optional hooks in your custom component code.
+
+* **`_pre_run_setup()`** - Used during **Validation and Setup**.
+ Add this method inside your component class to initialize component state before execution begins:
+ ```python
+ class MyComponent(Component):
+ # ... your inputs, outputs, and other attributes ...
+
+ def _pre_run_setup(self):
+ if not hasattr(self, "_initialized"):
+ self._initialized = True
+ self.iteration = 0
+ ```
-* `initialize_data` or `_pre_run_setup` can run setup logic before the component's main execution.
-* `__call__`, `run()`, or `_run()` can be overridden to customize how the component is called or to define custom execution logic.
+* **Override `run` or `_run`** - Used during **Outputs Generation**.
+ Add this method inside your component class to customize the main execution logic:
+ ```python
+ class MyComponent(Component):
-### Inputs and outputs
+      async def _run(self):
+          # Custom execution logic here
+          # This runs instead of the default output method calls
+          pass
+ ```
-Custom component inputs are defined with properties like:
+* **Store data in `self.ctx`**.
+ Use `self.ctx` in any of your component methods to share data between method calls.
+ ```python
+ class MyComponent(Component):
-* `name`, `display_name`
-* Optional: `info`, `value`, `advanced`, `is_list`, `tool_mode`, `real_time_refresh`
+ def _pre_run_setup(self):
+ # Initialize counter in setup
+ self.ctx["processed_items"] = 0
-For example:
+ def process_data(self) -> Data:
+ # Increment counter during processing
+ self.ctx["processed_items"] += 1
+ return Data(data={"item": f"processed {self.ctx['processed_items']}"})
-* `StrInput`: simple text input.
-* `DropdownInput`: selectable options.
-* `HandleInput`: specialized connections.
+ def get_summary(self) -> Data:
+ # Access counter in different method
+ total = self.ctx["processed_items"]
+ return Data(data={"summary": f"Processed {total} items total"})
+ ```
-Custom component `Output` properties define:
+## Inputs and outputs
-* `name`, `display_name`, `method`
-* Optional: `info`
+Inputs and outputs are **class-level configurations** that define how data flows through the component, how it appears in the visual editor, and how connections to other components are validated.
-For more information, see [Custom component inputs and outputs](/components-custom-components#custom-component-inputs-and-outputs).
+### Inputs
-### Associated Methods
+Inputs are defined in a class-level `inputs` list. When Langflow loads the component, it uses this list to render component fields and [ports](/concepts-components#component-ports) in the visual editor. Users or other components provide values or connections to fill these inputs.
-Each output is linked to a method:
+An input is usually an instance of a class from `lfx.io` (such as `StrInput`, `DataInput`, or `MessageTextInput`).
-* The output method name must match the method name.
-* The method typically returns objects like Message, Data, or DataFrame.
-* The method can use inputs with `self.`.
+For example, this component has three inputs: a text field (`StrInput`), a Boolean toggle (`BoolInput`), and a dropdown selection (`DropdownInput`).
-For example:
+```python
+from lfx.io import StrInput, BoolInput, DropdownInput
+
+inputs = [
+ StrInput(name="title", display_name="Title"),
+ BoolInput(name="enabled", display_name="Enabled", value=True),
+ DropdownInput(name="mode", display_name="Mode", options=["Fast", "Safe", "Experimental"], value="Safe")
+]
+```
+
+The `StrInput` creates a single-line text field for entering text. The `name="title"` parameter means you access this value in your component methods with `self.title`, while `display_name="Title"` shows "Title" as the label in the visual editor.
+
+The `BoolInput` creates a boolean toggle that's enabled by default with `value=True`. Users can turn this on or off, and you access the current state with `self.enabled`.
+
+The `DropdownInput` provides a selection menu with three predefined options: "Fast", "Safe", and "Experimental".
+The `value="Safe"` sets "Safe" as the default selection, and you access the user's choice with `self.mode`.
+
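+A minimal sketch of an output method that reads these values (the `summarize_config` name is illustrative, and `Data` is assumed importable from `lfx.schema` as in the other examples on this page):
+
+```python
+from lfx.schema import Data
+
+def summarize_config(self) -> Data:
+    # Each input's `name` is available as an attribute on the component.
+    summary = f"{self.title}: mode={self.mode}, enabled={self.enabled}"
+    return Data(data={"summary": summary})
+```
+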
+For a list of all available parameters, see the [BaseInputMixin definition](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/inputs/input_mixin.py) in the Langflow codebase.
+
+For a list of all available input types, see the [input type definitions](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/inputs/inputs.py) in the Langflow codebase.
+
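+For example, you can import multiple input types from `lfx.io` in a single statement:
+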
+```python
+from lfx.io import StrInput, DataInput, MultilineInput, IntInput, BoolInput, DropdownInput, FileInput, CodeInput, ModelInput, HandleInput, Output
+```
+
+### Outputs
+
+Outputs are defined in a class-level `outputs` list. When Langflow renders a component, each output becomes a connector point in the visual editor. When you connect something to an output, Langflow automatically calls the corresponding method and passes the returned object to the next component.
+
+An output is usually an instance of `Output` from `lfx.io`.
+
+For example, this component has one `output` that returns a `DataFrame`:
+
+```python
+from lfx.io import Output
+from lfx.schema import DataFrame
+
+outputs = [
+ Output(
+ name="df_out",
+ display_name="DataFrame Output",
+ method="build_df"
+ )
+]
+
+def build_df(self) -> DataFrame:
+ # Process data and return DataFrame
+ df = DataFrame({"col1": [1, 2], "col2": [3, 4]})
+ self.status = f"Built DataFrame with {len(df)} rows."
+ return df
+```
+
+The `Output` creates a connector point in the visual editor labeled **DataFrame Output**. The `name="df_out"` parameter identifies this output, while `display_name="DataFrame Output"` shows the label in the UI. The `method="build_df"` parameter tells Langflow to call the `build_df` method when this output is connected to another component.
+
+The `build_df` method processes data and returns a `DataFrame`. The `-> DataFrame` type annotation helps Langflow validate connections and provides color-coding in the visual editor. You can also set `self.status` to show progress messages in the UI.
+
+For a complete list of available parameters, see the [Output class definition](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/template/field/base.py) in the Langflow codebase.
+
+Output methods can return the following types:
+* **`Message`**: Structured chat messages
+* **`Data`**: Flexible object with `.data` and optional `.text`
+* **`DataFrame`**: Tabular data (pandas DataFrame subclass)
+* **Primitive types**: `str`, `int`, `bool`; wrapping them in `Data` or `Message` is recommended for type consistency
+
+#### Associated methods
+
+Each output is linked to a method: the output's `method` value must match the name of a method defined in the class. The method typically returns objects like `Message`, `Data`, or `DataFrame`, and can access inputs with `self.`.
+
+For example, the `Output` defines a connector point called `file_contents` that will call the `read_file` method when connected. The `read_file` method accesses the filename input with `self.filename`, reads the file content, sets a status message, and returns the content wrapped in a `Data` object.
```python
Output(
- display_name="File Contents",
name="file_contents",
+ display_name="File Contents",
method="read_file"
)
-#...
+
def read_file(self) -> Data:
path = self.filename
with open(path, "r") as f:
@@ -136,12 +308,13 @@ def read_file(self) -> Data:
return Data(data={"content": content})
```
-### Components with multiple outputs
+
+#### Components with multiple outputs
A component can define multiple outputs.
Each output can have a different corresponding method.
-For example:
+For example:
```python
outputs = [
Output(display_name="Processed Data", name="processed_data", method="process_data"),
@@ -149,8 +322,6 @@ outputs = [
]
```
-#### Output Grouping Behavior with `group_outputs`
-
By default, components in Langflow that produce multiple outputs only allow one output selection in the visual editor.
The component will have only one output port where the user can select the preferred output type.
@@ -168,7 +339,7 @@ This behavior is controlled by the `group_outputs` parameter:
In this example, the visual editor provides a single output port, and the user can select one of the outputs.
-Since `group_outputs=False` is the default behavior, it doesn't need to be explicitly set in the component, as shown in this example:
+Since `group_outputs=False` is the default behavior, it doesn't need to be explicitly set in the component, as shown in this example.
```python
outputs = [
@@ -188,9 +359,7 @@ outputs = [
-In this example, all outputs are available simultaneously in the visual editor:
-
-2. `group_outputs=True`
+In this example, all outputs are available simultaneously in the visual editor.
```python
outputs = [
@@ -212,200 +381,7 @@ outputs = [
-### Common internal patterns
-
-#### `_pre_run_setup()`
-
-To initialize a custom component with counters set:
-
-```python
-def _pre_run_setup(self):
- if not hasattr(self, "_initialized"):
- self._initialized = True
- self.iteration = 0
-```
-
-#### Override `run` or `_run`
-You can override `async def _run(self): ...` to define custom execution logic, although the default behavior from the base class usually covers most cases.
-
-#### Store data in `self.ctx`
-Use `self.ctx` as a shared storage for data or counters across the component's execution flow:
-
-```python
-def some_method(self):
- count = self.ctx.get("my_count", 0)
- self.ctx["my_count"] = count + 1
-```
-
-## Directory structure requirements
-
-By default, Langflow looks for custom components in the `/components` directory.
-
-If you're creating custom components in a different location using the `LANGFLOW_COMPONENTS_PATH` [environment variable](/environment-variables), components must be organized in a specific directory structure to be properly loaded and displayed in the visual editor:
-
-Each category directory **must** contain an `__init__.py` file for Langflow to properly recognize and load the components.
-This is a Python package requirement that ensures the directory is treated as a module.
-
-```
-/your/custom/components/path/ # Base directory set by LANGFLOW_COMPONENTS_PATH
- └── category_name/ # Required category subfolder that determines menu name
- ├── __init__.py # Required
- └── custom_component.py # Component file
-```
-
-Components must be placed inside category folders, not directly in the base directory.
-
-The category folder name determines where the component appears in the Langflow **Core components** menu.
-For example, to add a component to the **Helpers** category, place it in the `helpers` subfolder:
-
-```
-/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
- └── helpers/ # Displayed within the "Helpers" category
- ├── __init__.py # Required
- └── custom_component.py # Your component
-```
-
-You can have multiple category folders to organize components into different categories:
-```
-/app/custom_components/
- ├── helpers/
- │ ├── __init__.py
- │ └── helper_component.py
- └── tools/
- ├── __init__.py
- └── tool_component.py
-```
-
-This folder structure is required for Langflow to properly discover and load your custom components. Components placed directly in the base directory aren't loaded.
-
-```
-/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
- └── custom_component.py # Won't be loaded - missing category folder!
-```
-
-## Custom component inputs and outputs
-
-Inputs and outputs define how data flows through the component, how it appears in the visual editor, and how connections to other components are validated.
-
-### Inputs
-
-Inputs are defined in a class-level `inputs` list. When Langflow loads the component, it uses this list to render component fields and [ports](/concepts-components#component-ports) in the visual editor. Users or other components provide values or connections to fill these inputs.
-
-An input is usually an instance of a class from `langflow.io` (such as `StrInput`, `DataInput`, or `MessageTextInput`). The most common constructor parameters are:
-
-* **`name`**: The internal variable name, accessed with `self.`.
-* **`display_name`**: The label shown to users in the visual editor.
-* **`info`** *(optional)*: A tooltip or short description.
-* **`value`** *(optional)*: The default value.
-* **`advanced`** *(optional)*: If `true`, moves the field into the "Advanced" section.
-* **`required`** *(optional)*: If `true`, forces the user to provide a value.
-* **`is_list`** *(optional)*: If `true`, allows multiple values.
-* **`input_types`** *(optional)*: Restricts allowed connection types (e.g., `["Data"]`, `["LanguageModel"]`).
-
-Here are the most commonly used input classes and their typical usage.
-
-**Text Inputs**: For simple text entries.
-* **`StrInput`** creates a single-line text field.
-* **`MultilineInput`** creates a multi-line text area.
-
-**Numeric and Boolean Inputs**: Ensures users can only enter valid numeric or Boolean data.
-* **`BoolInput`**, **`IntInput`**, and **`FloatInput`** provide fields for Boolean, integer, and float values, ensuring type consistency.
-
-**Dropdowns**: For selecting from predefined options, useful for modes or levels.
-* **`DropdownInput`**
-
-**Secrets**: A specialized input for sensitive data, ensuring input is hidden in the visual editor.
-* **`SecretStrInput`** for API keys and passwords.
-
-**Specialized Data Inputs**: Ensures type-checking and color-coded connections in the visual editor.
-* **`DataInput`** expects a `Data` object (typically with `.data` and optional `.text`).
-* **`MessageInput`** expects a `Message` object, used in chat or agent flows.
-* **`MessageTextInput`** simplifies access to the `.text` field of a `Message`.
-
-**Handle-Based Inputs**: Used to connect outputs of specific types, ensuring correct pipeline connections.
-- **`HandleInput`**
-
-**File Uploads**: Allows users to upload files directly through the visual editor or receive file paths from other components.
-- **`FileInput`**
-
-**Lists**: Set `is_list=True` to accept multiple values, ideal for batch or grouped operations.
-
-This example defines three inputs: a text field (`StrInput`), a Boolean toggle (`BoolInput`), and a dropdown selection (`DropdownInput`).
-
-```python
-from langflow.io import StrInput, BoolInput, DropdownInput
-
-inputs = [
- StrInput(name="title", display_name="Title"),
- BoolInput(name="enabled", display_name="Enabled", value=True),
- DropdownInput(name="mode", display_name="Mode", options=["Fast", "Safe", "Experimental"], value="Safe")
-]
-```
-
-### Outputs
-
-Outputs are defined in a class-level `outputs` list. When Langflow renders a component, each output becomes a connector point in the visual editor. When you connect something to an output, Langflow automatically calls the corresponding method and passes the returned object to the next component.
-
-An output is usually an instance of `Output` from `langflow.io`, with common parameters:
-
-* **`name`**: The internal variable name.
-* **`display_name`**: The label shown in the visual editor.
-* **`method`**: The name of the method called to produce the output.
-* **`info`** *(optional)*: Help text shown on hover.
-
-The method must exist in the class, and it is recommended to annotate its return type for better type checking.
-You can also set a `self.status` message inside the method to show progress or logs.
-
-**Common Return Types**:
-- **`Message`**: Structured chat messages.
-- **`Data`**: Flexible object with `.data` and optional `.text`.
-- **`DataFrame`**: Pandas-based tables (`langflow.schema.DataFrame`).
-- **Primitive types**: `str`, `int`, `bool` (not recommended if you need type/color consistency).
-
-In this example, the `DataToDataFrame` component defines its output using the outputs list. The `df_out` output is linked to the `build_df` method, so when connected to another component (node), Langflow calls this method and passes its returned `DataFrame` to the next node. This demonstrates how each output maps to a method that generates the actual output data.
-
-```python
-from langflow.custom import Component
-from langflow.io import DataInput, Output
-from langflow.schema import Data, DataFrame
-
-class DataToDataFrame(Component):
- display_name = "Data to DataFrame"
- description = "Convert multiple Data objects into a DataFrame"
- icon = "table"
- name = "DataToDataFrame"
-
- inputs = [
- DataInput(
- name="items",
- display_name="Data Items",
- info="List of Data objects to convert",
- is_list=True
- )
- ]
-
- outputs = [
- Output(
- name="df_out",
- display_name="DataFrame Output",
- method="build_df"
- )
- ]
-
- def build_df(self) -> DataFrame:
- rows = []
- for item in self.items:
- row_dict = item.data.copy() if item.data else {}
- row_dict["text"] = item.get_text() or ""
- rows.append(row_dict)
-
- df = DataFrame(rows)
- self.status = f"Built DataFrame with {len(rows)} rows."
- return df
-```
-
-
-### Tool Mode
+### Tool mode
Components that support **Tool Mode** can be used as standalone components (when _not_ in **Tool Mode**) or as tools for other components with a **Tools** input, such as **Agent** components.
@@ -422,79 +398,72 @@ inputs = [
]
```
-Langflow currently supports the following input types for **Tool Mode**:
+## Typed annotations
-* `DataInput`
-* `DataFrameInput`
-* `PromptInput`
-* `MessageTextInput`
-* `MultilineInput`
-* `DropdownInput`
+Typed annotations let Langflow visually guide users and maintain flow consistency.
+Always annotate your output methods with return types like `-> Data`, `-> Message`, or `-> DataFrame`.
+Prefer the `Data`, `Message`, or `DataFrame` wrappers over plain structures, and keep types consistent across your components to make flows predictable and easier to build.
-## Typed annotations
+Typed annotations provide color-coding where outputs like `-> Data` or `-> Message` get distinct colors, automatic validation that blocks incompatible connections, and improved readability for users to quickly understand data flow between components.
-In Langflow, **typed annotations** allow Langflow to visually guide users and maintain flow consistency.
+### Common return types
-Typed annotations provide:
+<Tabs>
+<TabItem value="message" label="Message">
-* **Color-coding**: Outputs like `-> Data` or `-> Message` get distinct colors.
-* **Validation**: Langflow blocks incompatible connections automatically.
-* **Readability**: Developers can quickly understand data flow.
-* **Development tools**: Better code suggestions and error checking in your code editor.
+For chat-style outputs. Connects to any of several `Message`-compatible inputs.
-### Common Return Types
+```python
+def produce_message(self) -> Message:
+ return Message(text="Hello! from typed method!", sender="System")
+```
-* `Message`: For chat-style outputs. Connects to any of several `Message`-compatible inputs.
+</TabItem>
+<TabItem value="data" label="Data">
- ```python
- def produce_message(self) -> Message:
- return Message(text="Hello! from typed method!", sender="System")
- ```
+For structured data like dicts or partial texts. Connects only to `DataInput` (ports that accept `Data`).
-* `Data`: For structured data like dicts or partial texts. Connects only to `DataInput` (ports that accept `Data`).
+```python
+def get_processed_data(self) -> Data:
+ processed = {"key1": "value1", "key2": 123}
+ return Data(data=processed)
+```
- ```python
- def get_processed_data(self) -> Data:
- processed = {"key1": "value1", "key2": 123}
- return Data(data=processed)
- ```
+</TabItem>
+<TabItem value="dataframe" label="DataFrame">
-* `DataFrame`: For tabular data. Connects only to `DataFrameInput` (ports that accept `DataFrame`).
+For tabular data. Connects only to `DataFrameInput` (ports that accept `DataFrame`).
- ```python
- def build_df(self) -> DataFrame:
- pdf = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
- return DataFrame(pdf)
- ```
+```python
+def build_df(self) -> DataFrame:
+ pdf = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
+ return DataFrame(pdf)
+```
-* Primitive Types (`str`, `int`, `bool`): Returning primitives is allowed but wrapping in `Data` or `Message` is recommended for better consistency in the visual editor.
+</TabItem>
+<TabItem value="primitive" label="Primitive types">
- ```python
- def compute_sum(self) -> int:
- return sum(self.numbers)
- ```
+Returning primitives is allowed, but wrapping in `Data` or `Message` is recommended for better consistency in the visual editor.
-### Tips for typed annotations
+```python
+def compute_sum(self) -> int:
+ return sum(self.numbers)
+```
-When using typed annotations, consider the following best practices:
+</TabItem>
+</Tabs>
-* **Always Annotate Outputs**: Specify return types like `-> Data`, `-> Message`, or `-> DataFrame` to enable proper visual editor color-coding and validation.
-* **Wrap Raw Data**: Use `Data`, `Message`, or `DataFrame` wrappers instead of returning plain structures.
-* **Use Primitives Carefully**: Direct `str` or `int` returns are fine for simple flows, but wrapping improves flexibility.
-* **Annotate Helpers Too**: Even if internal, typing improves maintainability and clarity.
-* **Handle Edge Cases**: Prefer returning structured `Data` with error fields when needed.
-* **Stay Consistent**: Use the same types across your components to make flows predictable and easier to build.
## Enable dynamic fields
-In **Langflow**, dynamic fields allow inputs to change or appear based on user interactions. You can make an input dynamic by setting `dynamic=True`.
-Optionally, setting `real_time_refresh=True` triggers the `update_build_config` method to adjust the input's visibility or properties in real time, creating a contextual visual editor experience that only exposes relevant fields based on the user's choices.
+In Langflow, dynamic fields allow inputs to change or appear based on user interactions. You can make an input dynamic by setting `dynamic=True`.
+Optionally, setting `real_time_refresh=True` triggers the `update_build_config` method to adjust the input's visibility or properties in real time, creating a contextual visual editor experience that only exposes relevant fields based on the user's choices.
In this example, the operator field triggers updates with `real_time_refresh=True`.
The `regex_pattern` field is initially hidden and controlled with `dynamic=True`.
```python
-from langflow.io import DropdownInput, StrInput
+from lfx.custom import Component
+from lfx.io import DropdownInput, StrInput
class RegexRouter(Component):
display_name = "Regex Router"
@@ -518,11 +487,13 @@ class RegexRouter(Component):
]
```
-### Implement `update_build_config`
+### Show or hide fields based on user selections
-When a field with `real_time_refresh=True` is modified, Langflow calls the `update_build_config` method, passing the updated field name, value, and the component's configuration to dynamically adjust the visibility or properties of other fields based on user input.
+When a user changes a field with `real_time_refresh=True`, Langflow calls your `update_build_config` method.
-This example will show or hide the `regex_pattern` field when the user selects a different operator.
+This method lets you show, hide, or modify other fields based on what the user selected.
+
+This example shows the `regex_pattern` field only when the user selects "regex" from the operator dropdown.
```python
def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None) -> dict:
@@ -534,89 +505,84 @@ def update_build_config(self, build_config: dict, field_value: str, field_name:
return build_config
```
-### Additional Dynamic Field Controls
-
-You can also modify other properties within `update_build_config`, such as:
-* `required`: Set `build_config["some_field"]["required"] = True/False`
-
-* `advanced`: Set `build_config["some_field"]["advanced"] = True`
-
-* `options`: Modify dynamic dropdown options.
-
-### Tips for Managing Dynamic Fields
-
-When working with dynamic fields, consider the following best practices to ensure a smooth user experience:
-
-* **Minimize field changes**: Hide only fields that are truly irrelevant to avoid confusing users.
-* **Test behavior**: Ensure that adding or removing fields doesn't accidentally erase user input.
-* **Preserve data**: Use `build_config["some_field"]["show"] = False` to hide fields without losing their values.
-* **Clarify logic**: Add `info` notes to explain why fields appear or disappear based on conditions.
-* **Keep it manageable**: If the dynamic logic becomes too complex, consider breaking it into smaller components, unless it serves a clear purpose in a single node.
-
+Beyond toggling a field's visibility with `show`, you can modify other field properties in `update_build_config`:
+
+* **`required`**: Make fields required or optional dynamically
+ ```python
+ if field_value == "regex":
+ build_config["regex_pattern"]["required"] = True
+ else:
+ build_config["regex_pattern"]["required"] = False
+ ```
+
+* **`advanced`**: Move fields to the "Advanced" section
+ ```python
+ if field_value == "experimental":
+ build_config["regex_pattern"]["advanced"] = False # Show in main section
+ else:
+ build_config["regex_pattern"]["advanced"] = True # Hide in advanced
+ ```
+
+* **`options`**: Change dropdown options based on other selections
+ ```python
+ if field_value == "regex":
+ build_config["operator"]["options"] = ["regex", "contains", "starts_with"]
+ else:
+ build_config["operator"]["options"] = ["equals", "contains", "not_equals"]
+ ```
## Error handling and logging
-In Langflow, robust error handling ensures that your components behave predictably, even when unexpected situations occur, such as invalid inputs, external API failures, or internal logic errors.
+You can raise standard Python exceptions such as `ValueError` or specialized exceptions like `ToolException` when validation fails. Langflow automatically catches these and displays appropriate error messages in the visual editor, helping users quickly identify what went wrong.
-### Error handling techniques
+```python
+def compute_result(self) -> str:
+ if not self.user_input:
+ raise ValueError("No input provided.")
+ # ...
+```
-* **Raise Exceptions**: If a critical error occurs, you can raise standard Python exceptions such as `ValueError`, or specialized exceptions like `ToolException`. Langflow will automatically catch these and display appropriate error messages in the visual editor, helping users quickly identify what went wrong.
+Alternatively, instead of stopping a flow abruptly, you can return a `Data` object containing an `"error"` field. This approach allows the flow to continue operating and enables downstream components to detect and handle the error gracefully.
- ```python
- def compute_result(self) -> str:
- if not self.user_input:
- raise ValueError("No input provided.")
+```python
+def run_model(self) -> Data:
+ try:
# ...
- ```
-
-* **Return Structured Error Data**: Instead of stopping a flow abruptly, you can return a Data object containing an "error" field. This approach allows the flow to continue operating and enables downstream components to detect and handle the error gracefully.
-
- ```python
- def run_model(self) -> Data:
- try:
- # ...
- except Exception as e:
- return Data(data={"error": str(e)})
- ```
-
-### Improve debugging and flow management
+ except Exception as e:
+ return Data(data={"error": str(e)})
+```
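+
+A downstream component can then detect the error before processing. As a minimal sketch (the `result` input name and `handle_result` method are illustrative assumptions):
+
+```python
+def handle_result(self) -> Data:
+    # `self.result` is an assumed `Data` input connected to the upstream output.
+    if self.result.data and "error" in self.result.data:
+        self.status = f"Upstream error: {self.result.data['error']}"
+        return self.result
+    # ... normal processing ...
+    return self.result
+```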
-* **Use `self.status`**: Each component has a status field where you can store short messages about the execution result—such as success summaries, partial progress, or error notifications. These appear directly in the visual editor, making troubleshooting easier for users.
+Langflow provides several tools to help you debug and manage component execution. You can use `self.status` to display short messages about execution results directly in the visual editor, making troubleshooting easier for users.
- ```python
- def parse_data(self) -> Data:
- # ...
- self.status = f"Parsed {len(rows)} rows successfully."
- return Data(data={"rows": rows})
- ```
+```python
+def parse_data(self) -> Data:
+    # ...
+    self.status = f"Parsed {len(rows)} rows successfully."
+    return Data(data={"rows": rows})
+```
-* **Stop specific outputs with `self.stop(...)`**: You can halt individual output paths when certain conditions fail, without affecting the entire component. This is especially useful when working with components that have multiple output branches.
+You can use `self.stop()` to halt individual output paths when certain conditions fail, without stopping other outputs from the same component.
- ```python
- def some_output(self) -> Data:
- if :
- self.stop("some_output") # Tells Langflow no data flows
- return Data(data={"error": "Condition not met"})
- ```
+This example stops the output if the user input is empty, preventing the component from processing invalid data.
-* **Log events**: You can log key execution details inside components. Logs are displayed in the "Logs" or "Events" section of the component's detail view and can be accessed later through the flow's debug panel or exported files, providing a clear trace of the component's behavior for easier debugging.
+```python
+def some_output(self) -> Data:
+    if not self.user_input or len(self.user_input.strip()) == 0:
+        self.stop("some_output")
+        return Data(data={"error": "Empty input provided"})
+```
- ```python
- def process_file(self, file_path: str):
- self.log(f"Processing file {file_path}")
- # ...
- ```
+You can log key execution details inside components using `self.log()`. These logs are stored as structured data and displayed in the **Logs** or **Events** section of the component's detail view, and you can access them later through the **Logs** button in the visual editor or in exported files.
-### Tips for error handling and logging
+Component logs are distinct from Langflow's main application logging system. `self.log()` creates component-specific logs that appear in the UI, while Langflow's main logging system uses [structlog](https://www.structlog.org) for application-level logging that outputs to `langflow.log` files. For more information, see [Logs](/logging).
-To build more reliable components, consider the following best practices:
+This example logs a message when the component starts processing a file.
-* **Validate inputs early**: Catch missing or invalid inputs at the start to prevent broken logic.
-* **Summarize with `self.status`**: Use short success or error summaries to help users understand results quickly.
-* **Keep logs concise**: Focus on meaningful messages to avoid cluttering the visual editor.
-* **Return structured errors**: When appropriate, return `Data(data={"error": ...})` instead of raising exceptions to allow downstream handling.
-* **Stop outputs selectively**: Only halt specific outputs with `self.stop(...)` if necessary, to preserve correct flow behavior elsewhere.
+```python
+def process_file(self, file_path: str):
+    self.log(f"Processing file {file_path}")
+    # ...
+```
## Contribute custom components to Langflow
-See [How to Contribute](/contributing-components) to contribute your custom component to Langflow.
\ No newline at end of file
+To contribute your custom component to the Langflow project, see [Contribute components](/contributing-components).
\ No newline at end of file
diff --git a/docs/docs/Components/components-data.mdx b/docs/docs/Components/components-data.mdx
deleted file mode 100644
index b5b4682339c9..000000000000
--- a/docs/docs/Components/components-data.mdx
+++ /dev/null
@@ -1,605 +0,0 @@
----
-title: Data
-slug: /components-data
----
-
-import Icon from "@site/src/components/icon";
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import PartialParams from '@site/docs/_partial-hidden-params.mdx';
-import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
-
-Data components bring data into your flows from various sources like files, API endpoints, and URLs.
-For example:
-
-* **Load files**: Import data from a file or directory with the [**File** component](#file) and [**Directory** component](#directory).
-
-* **Search the web**: Fetch data from the web with components like the [**News Search** component](#news-search), [**RSS Reader** component](#rss-reader), [**Web Search** component](#web-search), and [**URL** component](#url).
-
-* **Make API calls**: Use APIs to trigger flows or perform actions with the [**API Request** component](#api-request) and [**Webhook** component](#webhook).
-
-* **Run SQL queries**: Query an SQL database with the [**SQL Database** component](#sql-database).
-
-Each component runs different commands for retrieval, processing, and type checking.
-Some components are a minimal wrapper for a command that you provide, and others include built-in scripts to fetch and process data based on variable inputs.
-Additionally, some components return raw data, whereas others can convert, restructure, or validate the data before outputting it.
-This means that some similar components might produce different results.
-
-:::tip
-Data components pair well with [Processing components](/components-processing) that can perform additional parsing, transformation, and validation after retrieving the data.
-
-This can include basic operations, like saving a file in a specific format, or more complex tasks, like using a **Text Splitter** component to break down a large document into smaller chunks before generating embeddings for vector search.
-:::
-
-## Use Data components in flows
-
-Data components are used often in flows because they offer a versatile way to perform common functions.
-
-You can use these components to perform their base functions as isolated steps in your flow, or you can connect them to an **Agent** component as tools.
-
-
-
-For example flows, see the following:
-
-* [Create a chatbot that can ingest files](/chat-with-files): Learn how to use a **File** component to load a file as context for a chatbot.
-The file and user input are both passed to the LLM so you can ask questions about the file you uploaded.
-
-* [Create a vector RAG chatbot](/chat-with-rag): Learn how to ingest files for use in Retrieval-Augmented Generation (RAG), and then set up a chatbot that can use the ingested files as context.
-The two flows in this tutorial prepare files for RAG, and then let your LLM use vector search to retrieve contextually relevant data during a chat session.
-
-* [Configure tools for agents](/agents-tools): Learn how to use any component as a tool for an agent.
-When used as tools, the agent autonomously decides when to call a component based on the user's query.
-
-* [Trigger flows with webhooks](/webhook): Learn how to use the **Webhook** component to trigger a flow run in response to an external event.
-
-## API Request
-
-The **API Request** component constructs and sends HTTP requests using URLs or curl commands:
-
-* **URL mode**: Enter one or more comma-separated URLs, and then select the method for the request to each URL.
-* **curl mode**: Enter the curl command to execute.
-
-You can enable additional request options and fields in the component's parameters.
-
-Returns a [`Data` object](/data-types#data) containing the response.
-
-For provider-specific API components, see [**Bundles**](/components-bundle-components).
-
-### API Request parameters
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| mode | Mode | Input parameter. Set the mode to either **URL** or **curl**. |
-| urls | URL | Input parameter. Enter one or more comma-separated URLs for the request. |
-| curl | curl | Input parameter. **curl mode** only. Enter a complete curl command. Other component parameters are populated from the command arguments. |
-| method | Method | Input parameter. The HTTP method to use. |
-| query_params | Query Parameters | Input parameter. The query parameters to append to the URL. |
-| body | Body | Input parameter. The body to send with POST, PATCH, and PUT requests as a dictionary. |
-| headers | Headers | Input parameter. The headers to send with the request as a dictionary. |
-| timeout | Timeout | Input parameter. The timeout to use for the request. |
-| follow_redirects | Follow Redirects | Input parameter. Whether to follow HTTP redirects. The default is enabled (`true`). If disabled (`false`), HTTP redirects aren't followed. |
-| save_to_file | Save to File | Input parameter. Whether to save the API response to a temporary file. Default: Disabled (`false`) |
-| include_httpx_metadata | Include HTTPx Metadata | Input parameter. Whether to include properties such as `headers`, `status_code`, `response_headers`, and `redirection_history` in the output. Default: Disabled (`false`) |
-
-## Directory
-
-The **Directory** component recursively loads files from a directory, with options for file types, depth, and concurrency.
-
-Files must be of a [supported type and size](#file-type-and-size-limits) to be loaded.
-
-Outputs either a [`Data`](/data-types#data) or [`DataFrame`](/data-types#dataframe) object, depending on the directory contents.
-
-### Directory parameters
-
-
-
-| Name | Type | Description |
-| ------------------ | ---------------- | -------------------------------------------------- |
-| path | MessageTextInput | Input parameter. The path to the directory to load files from. Default: Current directory (`.`) |
-| types | MessageTextInput | Input parameter. The file types to load. Select one or more, or leave empty to attempt to load all files. |
-| depth | IntInput | Input parameter. The depth to search for files. |
-| max_concurrency | IntInput | Input parameter. The maximum concurrency for loading multiple files. |
-| load_hidden | BoolInput | Input parameter. If `true`, hidden files are loaded. |
-| recursive | BoolInput | Input parameter. If `true`, the search is recursive. |
-| silent_errors | BoolInput | Input parameter. If `true`, errors don't raise an exception. |
-| use_multithreading | BoolInput | Input parameter. If `true`, multithreading is used. |
-
-## File
-
-The **File** component loads and parses files, converting the content into a `Data`, `DataFrame`, or `Message` object.
-It supports multiple file types, provides parameters for parallel processing and error handling, and supports advanced parsing with the Docling library.
-
-You can add files to the **File** component in the visual editor or at runtime, and you can upload multiple files at once.
-For more information about uploading files and working with files in flows, see [File management](/concepts-file-management) and [Create a chatbot that can ingest files](/chat-with-files).
-
-### File type and size limits
-
-By default, the maximum file size is 1024 MB.
-To modify this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables).
-
-
-Supported file types
-
-The following file types are supported by the **File** component.
-Use archive and compressed formats to bundle multiple files together, or use the [**Directory** component](#directory) to load all files in a directory.
-
-- `.bz2`
-- `.csv`
-- `.docx`
-- `.gz`
-- `.htm`
-- `.html`
-- `.json`
-- `.js`
-- `.md`
-- `.mdx`
-- `.pdf`
-- `.py`
-- `.sh`
-- `.sql`
-- `.tar`
-- `.tgz`
-- `.ts`
-- `.tsx`
-- `.txt`
-- `.xml`
-- `.yaml`
-- `.yml`
-- `.zip`
-
-
-
-If you need to load an unsupported file type, you must either use a different component that supports that file type, potentially parsing the file outside Langflow, or convert the file to a supported type before uploading it.
-
-For images, see [Upload images](/concepts-file-management#upload-images).
-
-For videos, see the **Twelve Labs** and **YouTube** [**Bundles**](/components-bundle-components).
-
-### File parameters
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| path | Files | Input parameter. The path to files to load. Can be local or in [Langflow file management](/concepts-file-management). Supports individual files and bundled archives. |
-| file_path | Server File Path | Input parameter. A `Data` object with a `file_path` property pointing to a file in [Langflow file management](/concepts-file-management) or a `Message` object with a path to the file. Supersedes **Files** (`path`) but supports the same file types. |
-| separator | Separator | Input parameter. The separator to use between multiple outputs in `Message` format. |
-| silent_errors | Silent Errors | Input parameter. If `true`, errors in the component don't raise an exception. Default: Disabled (`false`). |
-| delete_server_file_after_processing | Delete Server File After Processing | Input parameter. If `true` (default), the **Server File Path** (`file_path`) is deleted after processing. |
-| ignore_unsupported_extensions | Ignore Unsupported Extensions | Input parameter. If enabled (`true`), files with unsupported extensions are accepted but not processed. If disabled (`false`), the **File** component can throw an error if an unsupported file type is provided. The default is `true`. |
-| ignore_unspecified_files | Ignore Unspecified Files | Input parameter. If `true`, `Data` with no `file_path` property is ignored. If `false` (default), the component errors when a file isn't specified. |
-| concurrency_multithreading | Processing Concurrency | Input parameter. The number of files to process concurrently if multiple files are uploaded. Default is 1. Values greater than 1 enable parallel processing for 2 or more files. Ignored for single-file uploads and advanced parsing. |
-| advanced_parser | Advanced Parser | Input parameter. If `true`, enables [advanced parsing](#advanced-parsing). Only available for single-file uploads of compatible file types. Default: Disabled (`false`). |
-
-### Advanced parsing
-
-Starting in Langflow version 1.6, the **File** component supports advanced document parsing using the [Docling](https://docling-project.github.io/docling/) library for supported file types.
-
-To use advanced parsing, do the following:
-
-1. Complete the following prerequisites, if applicable:
-
- * **Install Langflow version 1.6 or later**: Earlier versions don't support advanced parsing with the **File** component. For upgrade guidance, see the [Release notes](/release-notes).
-
- * **Install Docling dependency on macOS Intel (x86_64)**: The Docling dependency isn't installed by default for macOS Intel (x86_64). Use the [Docling installation guide](https://docling-project.github.io/docling/installation/) to install the Docling dependency.
-
- For all other operating systems, the Docling dependency is installed by default.
-
- * **Enable Developer Mode for Windows**:
-
-
- Developer Mode isn't required for Langflow OSS on Windows.
-
-2. Add one valid file to the **File** component.
-
- :::info Advanced parsing limitations
- * Advanced parsing processes only one file.
- If you select multiple files, the **File** component processes the first file only, ignoring any additional files.
-  To process multiple files with advanced parsing, pass each file to a separate **File** component, or use the dedicated [**Docling** components](/bundles-docling).
-
- * Advanced parsing can process any of the **File** component's supported file types except `.csv`, `.xlsx`, and `.parquet` files because it is designed for document processing, such as extracting text from PDFs.
- For structured data analysis, use the [**Parser** component](/components-processing#parser).
- :::
-
-3. Enable **Advanced Parsing**.
-
-4. Click **Controls** in the [component's header menu](/concepts-components#component-menus) to configure advanced parsing parameters, which are hidden by default:
-
- | Name | Display Name | Info |
- |------|--------------|------|
- | pipeline | Pipeline | Input parameter, advanced parsing. The Docling pipeline to use, either `standard` (default, recommended) or `vlm` (may produce inconsistent results). |
- | ocr_engine | OCR Engine | Input parameter, advanced parsing. The OCR parser to use if `pipeline` is `standard`. Options are `None` (default) or [`EasyOCR`](https://pypi.org/project/easyocr/). `None` means that no OCR engine is used, and this can produce inconsistent or broken results for some documents. This setting has no effect with the `vlm` pipeline. |
- | md_image_placeholder | Markdown Image Placeholder | Input parameter, advanced parsing. Defines the placeholder for image files if the output type is **Markdown**. Default: ``. |
- | md_page_break_placeholder | Markdown Page Break Placeholder | Input parameter, advanced parsing. Defines the placeholder for page breaks if the output type is **Markdown**. Default: `""` (empty string). |
- | doc_key | Document Key | Input parameter, advanced parsing. The key to use for the `DoclingDocument` column, which holds the structured information extracted from the source document. See [Docling Document](https://docling-project.github.io/docling/concepts/docling_document/) for details. Default: `doc`. |
-
- :::tip
- For additional Docling features, including other components and OCR parsers, use the [**Docling** bundle](/bundles-docling).
- :::
-
-### File output
-
-The output of the **File** component depends on the number of files loaded and whether advanced parsing is enabled.
-If multiple options are available, you can set the output type near the component's output port.
-
-
-
-
-If you run the **File** component with no file selected, it throws an error, or, if **Silent Errors** is enabled, produces no output.
-
-
-
-
-If advanced parsing is disabled and you upload one file, the following output types are available:
-
-- **Structured Content**: Available only for `.csv`, `.xlsx`, `.parquet`, and `.json` files.
-
- - For `.csv` files, produces a [`DataFrame`](/data-types#dataframe) representing the table data.
- - For `.json` files, produces a [`Data`](/data-types#data) object with the parsed JSON data.
-
-- **Raw Content**: A [`Message`](/data-types#message) containing the file's raw text content.
-
-- **File Path**: A [`Message`](/data-types#message) containing the path to the file in [Langflow file management](/concepts-file-management).
-
-
-
-
-If advanced parsing is enabled and you upload one file, the following output types are available:
-
-- **Structured Output**: A [`DataFrame`](/data-types#dataframe) containing the Docling-processed document data with text elements, page numbers, and metadata.
-
-- **Markdown**: A [`Message`](/data-types#message) containing the uploaded document contents in Markdown format with image placeholders.
-
-- **File Path**: A [`Message`](/data-types#message) containing the path to the file in [Langflow file management](/concepts-file-management).
-
-
-
-
-If you upload multiple files, the component outputs **Files**, which is a [`DataFrame`](/data-types#dataframe) containing the content and metadata of all selected files.
-
-[Advanced parsing](#advanced-parsing) doesn't support multiple files; it processes only the first file.
-
-
-
-
-## News Search
-
-The **News Search** component searches Google News through RSS, and then returns clean article data as a [`DataFrame`](/data-types#dataframe) containing article titles, links, publication dates, and summaries.
-The component's `clean_html` method parses the HTML content with the BeautifulSoup library, removes HTML markup, and strips whitespace to output clean data.
-
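-The following is a minimal sketch of that kind of HTML cleanup, assuming the `beautifulsoup4` package; it approximates, rather than reproduces, the component's `clean_html` method:
-
-```python
-from bs4 import BeautifulSoup
-
-def clean_html(html: str) -> str:
-    # Strip all HTML markup and collapse the surrounding whitespace.
-    return BeautifulSoup(html, "html.parser").get_text(separator=" ").strip()
-```
-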
-For other RSS feeds, use the [**RSS Reader** component](#rss-reader), and for other searches use the [**Web Search** component](#web-search) or provider-specific [**Bundles**](/components-bundle-components).
-
-When used as a standard component in a flow, the **News Search** component must be connected to a component that accepts `DataFrame` input.
-You can connect the **News Search** component directly to a compatible component, or you can use a [Processing component](/components-processing) to convert or extract data of a different type between components.
-
-When used in **Tool Mode** with an **Agent** component, the **News Search** component can be connected directly to the **Agent** component's **Tools** port without converting the data.
-The agent decides whether to use the **News Search** component based on the user's query, and it can process the `DataFrame` output directly.
-
-### News Search parameters
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| query | Search Query | Input parameter. Search keywords for news articles. |
-| hl | Language (hl) | Input parameter. Language code, such as en-US, fr, de. Default: `en-US`. |
-| gl | Country (gl) | Input parameter. Country code, such as US, FR, DE. Default: `US`. |
-| ceid | Country:Language (ceid) | Input parameter. Language, such as US:en, FR:fr. Default: `US:en`. |
-| topic | Topic | Input parameter. One of: `WORLD`, `NATION`, `BUSINESS`, `TECHNOLOGY`, `ENTERTAINMENT`, `SCIENCE`, `SPORTS`, `HEALTH`. |
-| location | Location (Geo) | Input parameter. City, state, or country for location-based news. Leave blank for keyword search. |
-| timeout | Timeout | Input parameter. Timeout for the request in seconds. |
-| articles | News Articles | Output parameter. A `DataFrame` with the key columns `title`, `link`, `published`, and `summary`. |
-
-## RSS Reader
-
-The **RSS Reader** component fetches and parses RSS feeds from any valid RSS feed URL, and then returns the feed content as a [`DataFrame`](/data-types#dataframe) containing article titles, links, publication dates, and summaries.
-
-When used as a standard component in a flow, the **RSS Reader** component must be connected to a component that accepts `DataFrame` input.
-You can connect the **RSS Reader** component directly to a compatible component, or you can use a [Processing component](/components-processing) to convert or extract data of a different type between components.
-
-When used in **Tool Mode** with an **Agent** component, the **RSS Reader** component can be connected directly to the **Agent** component's **Tools** port without converting the data.
-The agent decides whether to use the **RSS Reader** component based on the user's query, and it can process the `DataFrame` output directly.
-
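-The following rough sketch shows how you might reproduce this behavior outside Langflow, assuming the `feedparser` and `pandas` packages; it isn't the component's actual implementation:
-
-```python
-import feedparser
-import pandas as pd
-
-def fetch_rss(rss_url: str) -> pd.DataFrame:
-    # Parse the feed and keep the same key columns the component outputs.
-    feed = feedparser.parse(rss_url)
-    rows = [
-        {
-            "title": entry.get("title", ""),
-            "link": entry.get("link", ""),
-            "published": entry.get("published", ""),
-            "summary": entry.get("summary", ""),
-        }
-        for entry in feed.entries
-    ]
-    return pd.DataFrame(rows, columns=["title", "link", "published", "summary"])
-```
-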
-### RSS Reader parameters
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| rss_url | RSS Feed URL | Input parameter. URL of the RSS feed to parse, such as `https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml`. |
-| timeout | Timeout | Input parameter. Timeout for the RSS feed request in seconds. Default: `5`. |
-| articles | Articles | Output parameter. A `DataFrame` containing the key columns `title`, `link`, `published`, and `summary`. |
-
-## SQL Database
-
-The **SQL Database** component executes SQL queries on any [SQLAlchemy-compatible database](https://docs.sqlalchemy.org/en/20/), such as PostgreSQL, MySQL, and SQLite.
-
-For CQL queries, see the [**DataStax** bundle](/bundles-datastax).
-
-### Query an SQL database with natural language prompts
-
-The following example demonstrates how to use the **SQL Database** component in a flow, and then modify the component to support natural language queries through an **Agent** component.
-
-This allows you to use the same **SQL Database** component for any query, rather than limiting it to a single manually entered query or requiring the user, application, or another component to provide valid SQL syntax as input.
-Users don't need to master SQL syntax because the **Agent** component translates the users' natural language prompts into SQL queries, passes the query to the **SQL Database** component, and then returns the results to the user.
-
-Additionally, input from applications and other components doesn't have to be extracted and transformed to exact SQL queries.
-Instead, you only need to provide enough context for the agent to understand that it should create and run an SQL query based on the incoming data.
-
-1. Use your own sample database or create a test database.
-
-
- Create a test SQL database
-
- 1. Create a database called `test.db`:
-
- ```shell
- sqlite3 test.db
- ```
-
- 2. Add some values to the database:
-
- ```shell
- sqlite3 test.db "
- CREATE TABLE users (
- id INTEGER PRIMARY KEY,
- name TEXT,
- email TEXT,
- age INTEGER
- );
-
- INSERT INTO users (name, email, age) VALUES
- ('John Doe', 'john@example.com', 30),
- ('Jane Smith', 'jane@example.com', 25),
- ('Bob Johnson', 'bob@example.com', 35);
- "
- ```
-
- 3. Verify that the database has been created and contains your data:
-
- ```shell
- sqlite3 test.db "SELECT * FROM users;"
- ```
-
- The result should list the text data you entered in the previous step:
-
- ```shell
-    1|John Doe|john@example.com|30
-    2|Jane Smith|jane@example.com|25
-    3|Bob Johnson|bob@example.com|35
- ```
-
-
-
-2. Add an **SQL Database** component to your flow.
-
-3. In the **Database URL** field, add the connection string for your database, such as `sqlite:///test.db`.
-
- At this point, you can enter an SQL query in the **SQL Query** field or use the [port](/concepts-components#component-ports) to pass a query from another component, such as a **Chat Input** component.
- If you need more space, click **Expand** to open a full-screen text field.
-
- However, to make this component more dynamic in an agentic context, use an **Agent** component to transform natural language input to SQL queries, as explained in the following steps.
-
-4. Click the **SQL Database** component to expose the [component's header menu](/concepts-components#component-menus), and then enable **Tool Mode**.
-
- You can now use this component as a tool for an agent.
- In **Tool Mode**, no query is set in the **SQL Database** component because the agent will generate and send one if it determines that the tool is required to complete the user's request.
- For more information, see [Configure tools for agents](/agents-tools).
-
-5. Add an **Agent** component to your flow, and then enter your OpenAI API key.
-
- The default model is an OpenAI model.
- If you want to use a different model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
-
- If you need to execute highly specialized queries, consider selecting a model that is trained for tasks like advanced SQL queries.
- If your preferred model isn't in the **Agent** component's built-in model list, set **Model Provider** to **Connect other models**, and then connect any [language model component](/components-models).
-
-6. Connect the **SQL Database** component's **Toolset** output to the **Agent** component's **Tools** input.
-
- 
-
-7. Click **Playground**, and then ask the agent a question about the data in your database, such as `Which users are in my database?`
-
- The agent determines that it needs to query the database to answer the question, uses the LLM to generate an SQL query, and then uses the **SQL Database** component's `RUN_SQL_QUERY` action to run the query on your database.
- Finally, it returns the results in a conversational format, unless you provide instructions to return raw results or a different format.
-
- The following example queried a test database with little data, but with a more robust dataset you could ask more detailed or complex questions.
-
- ```text
- Here are the users in your database:
-
-    1. **John Doe** - Email: john@example.com
-    2. **Jane Smith** - Email: jane@example.com
-    3. **Bob Johnson** - Email: bob@example.com
- ```
-
-### SQL Database parameters
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| database_url | Database URL | Input parameter. The SQLAlchemy-compatible database connection URL. |
-| query | SQL Query | Input parameter. The SQL query to execute, which can be entered directly, passed in from another component, or, in **Tool Mode**, automatically provided by an **Agent** component. |
-| include_columns | Include Columns | Input parameter. Whether to include column names in the result. The default is enabled (`true`). |
-| add_error | Add Error | Input parameter. If enabled, any error messages returned by the query are added to the result. The default is disabled (`false`). |
-| run_sql_query | Result Table | Output parameter. The query results as a [`DataFrame`](/data-types#dataframe). |
-
-## URL
-
-The **URL** component fetches content from one or more URLs, processes the content, and returns it in various formats.
-It follows links recursively to a given depth, and it supports output in plain text or raw HTML.
-
-### URL parameters
-
-
-
-Some of the available parameters include the following:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| urls | URLs | Input parameter. One or more URLs to crawl recursively. In the visual editor, click **Add URL** to add multiple URLs. |
-| max_depth | Depth | Input parameter. Controls link traversal: how many "clicks" away from the initial page the crawler will go. A depth of 1 limits the crawl to the first page at the given URL only. A depth of 2 means the crawler crawls the first page plus each page directly linked from the first page, then stops. This setting exclusively controls link traversal; it doesn't limit the number of URL path segments or the domain. |
-| prevent_outside | Prevent Outside | Input parameter. If enabled, only crawls URLs within the same domain as the root URL. This prevents the crawler from accessing sites outside the given URL's domain, even if they are linked from one of the crawled pages. |
-| use_async | Use Async | Input parameter. If enabled, uses asynchronous loading which can be significantly faster but might use more system resources. |
-| format | Output Format | Input parameter. Sets the desired output format as **Text** or **HTML**. The default is **Text**. For more information, see [URL output](#url-output).|
-| timeout | Timeout | Input parameter. Timeout for the request in seconds. |
-| headers | Headers | Input parameter. The headers to send with the request if needed for authentication or otherwise. |
-
-Additional input parameters are available for error handling and encoding.
-
-### URL output
-
-There are two settings that control the output of the **URL** component at different stages:
-
-* **Output Format**: This optional parameter controls the content extracted from the crawled pages:
-
- * **Text (default)**: The component extracts only the text from the HTML of the crawled pages.
- * **HTML**: The component extracts the entire raw HTML content of the crawled pages.
-
-* **Output data type**: In the component's output field (near the output port) you can select the structure of the outgoing data when it is passed to other components:
-
- * **Extracted Pages**: Outputs a [`DataFrame`](/data-types#dataframe) that breaks the crawled pages into columns for the entire page content (`text`) and metadata like `url` and `title`.
- * **Raw Content**: Outputs a [`Message`](/data-types#message) containing the entire text or HTML from the crawled pages, including metadata, in a single block of text.
-
-When used as a standard component in a flow, the **URL** component must be connected to a component that accepts the selected output data type (`DataFrame` or `Message`).
-You can connect the **URL** component directly to a compatible component, or you can use a [**Type Convert** component](/components-processing#type-convert) to convert the output to another type before passing the data to other components if the data types aren't directly compatible.
-
-Processing components like the **Type Convert** component are useful with the **URL** component because the **URL** component can extract a large amount of data from the crawled pages.
-For example, if you only want to pass specific fields to other components, you can use a [**Parser** component](/components-processing#parser) to extract only that data from the crawled pages before passing the data to other components.
-
-When used in **Tool Mode** with an **Agent** component, the **URL** component can be connected directly to the **Agent** component's **Tools** port without converting the data.
-The agent decides whether to use the **URL** component based on the user's query, and it can process the `DataFrame` or `Message` output directly.
-
-## Web Search
-
-The **Web Search** component performs a basic web search using DuckDuckGo's HTML scraping interface.
-For other search APIs, see [**Bundles**](/components-bundle-components).
-
-:::info
-The **Web Search** component uses web scraping that can be subject to rate limits.
-
-For production use, consider using another search component with more robust API support, such as provider-specific bundles.
-:::
-
-### Use the Web Search component in a flow
-
-The following steps demonstrate one way that you can use a **Web Search** component in a flow:
-
-1. Create a flow based on the **Basic Prompting** template.
-
-2. Add a **Web Search** component, and then enter a search query, such as `environmental news`.
-
-3. Add a [**Type Convert** component](/components-processing#type-convert), set the **Output Type** to **Message**, and then connect the **Web Search** component's output to the **Type Convert** component's input.
-
- By default, the **Web Search** component outputs a `DataFrame`.
- Because the **Prompt Template** component only accepts `Message` data, this conversion is required so that the flow can pass the search results to the **Prompt Template** component.
- For more information, see [Web Search output](#web-search-output).
-
-4. In the **Prompt Template** component's **Template** field, add a variable like `{searchresults}` or `{context}`.
-
- This adds a field to the **Prompt Template** component that you can use to pass the converted search results to the prompt.
- For more information, see [Define variables in prompts](/components-prompts#define-variables-in-prompts).
-
-5. Connect the **Type Convert** component's output to the new variable field on the **Prompt Template** component.
-
- 
-
-6. In the **Language Model** component, add your OpenAI API key, or select a different provider and model.
-
-7. Click **Playground**, and then enter `latest news`.
-
- The LLM processes the request, including the context passed through the **Prompt Template** component, and then prints the response in the **Playground** chat interface.
-
-
- Result
-
- The following is an example of a possible response.
- Your response may vary based on the current state of the web, your specific query, the model, and other factors.
-
- ```text
- Here are some of the latest news articles related to the environment:
- Ozone Pollution and Global Warming: A recent study highlights that ozone pollution is a significant global environmental concern, threatening human health and crop production while exacerbating global warming. Read more
- ...
- ```
-
-
-
-### Web Search parameters
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| query | Search Query | Input parameter. Keywords to search for. |
-| timeout | Timeout | Input parameter. Timeout for the web search request in seconds. Default: `5`. |
-| results | Search Results | Output parameter. Returns a `DataFrame` containing `title`, `links`, and `snippets`. For more information, see [Web Search output](#web-search-output). |
-
-### Web Search output
-
-The **Web Search** component outputs a [`DataFrame`](/data-types#dataframe) containing the key columns `title`, `links`, and `snippets`.
-
-When used as a standard component in a flow, the **Web Search** component must be connected to a component that accepts `DataFrame` input, or you must use a [**Type Convert** component](/components-processing#type-convert) to convert the output to `Data` or `Message` type before passing the data to other components.
-
-When used in **Tool Mode** with an **Agent** component, the **Web Search** component can be connected directly to the **Agent** component's **Tools** port without converting the data.
-The agent decides whether to use the **Web Search** component based on the user's query, and it can process the `DataFrame` output directly.
-
-## Webhook
-
-The **Webhook** component defines a webhook trigger that runs a flow when it receives an HTTP POST request.
-
-### Trigger the webhook
-
-When you add a **Webhook** component to your flow, a **Webhook curl** tab is added to the flow's [**API Access** pane](/concepts-publish#api-access).
-This tab automatically generates an HTTP POST request code snippet that you can use to trigger your flow through the **Webhook** component.
-For example:
-
-```bash
-curl -X POST \
- "http://$LANGFLOW_SERVER_ADDRESS/api/v1/webhook/$FLOW_ID" \
- -H 'Content-Type: application/json' \
- -H 'x-api-key: $LANGFLOW_API_KEY' \
- -d '{"any": "data"}'
-```
-
-For more information, see [Trigger flows with webhooks](/webhook).
-
-### Webhook parameters
-
-| Name | Display Name | Description |
-|------|--------------|-------------|
-| data | Payload | Input parameter. Receives a payload from external systems through HTTP POST requests. |
-| curl | curl | Input parameter. The curl command template for making requests to this webhook. |
-| endpoint | Endpoint | Input parameter. The endpoint URL where this webhook receives requests. |
-| output_data | Data | Output parameter. The processed data from the webhook input. Returns an empty [`Data`](/data-types#data) object if no input is provided. If the input isn't valid JSON, the **Webhook** component wraps it in a `payload` object so that it can be accepted as input to trigger the flow. |
-
-## Additional Data components
-
-Langflow's core components are meant to be generic and support a range of use cases.
-Core components typically aren't limited to a single provider.
-
-If the core components don't meet your needs, you can find provider-specific components in [**Bundles**](/components-bundle-components).
-
-For example, the [**DataStax** bundle](/bundles-datastax) includes components for CQL queries, and the [**Google** bundle](/bundles-google) includes components for Google Search APIs.
-
-## Legacy Data components
-
-import PartialLegacy from '@site/docs/_partial-legacy.mdx';
-
-
-
-The following Data components are in legacy status:
-
-* **Load CSV**
-* **Load JSON**
-
-Replace these components with the **File** component, which supports loading CSV and JSON files, as well as many other file types.
-
-## See also
-
-- [**Google** bundle](/bundles-google)
-- [**Composio** bundle](/bundles-composio)
-- [File management](/concepts-file-management)
\ No newline at end of file
diff --git a/docs/docs/Components/components-embedding-models.mdx b/docs/docs/Components/components-embedding-models.mdx
index 844f34995061..873029cf8dac 100644
--- a/docs/docs/Components/components-embedding-models.mdx
+++ b/docs/docs/Components/components-embedding-models.mdx
@@ -19,7 +19,7 @@ This flow loads a text file, splits the text into chunks, generates embeddings f

-1. Create a flow, add a **File** component, and then select a file containing text data, such as a PDF, that you can use to test the flow.
+1. Create a flow, add a **Read File** component, and then select a file containing text data, such as a PDF, that you can use to test the flow.
2. Add the **Embedding Model** core component, and then provide a valid OpenAI API key.
You can enter the API key directly or use a [global variable](/configuration-global-variables).
@@ -30,7 +30,7 @@ You can enter the API key directly or use a [**Bundles**](/components-bundle-components) or **Search** for your preferred provider to find additional embedding models, such as the [**Hugging Face Embeddings Inference** component](/bundles-huggingface#hugging-face-embeddings-inference).
:::
-3. Add a [**Split Text** component](/components-processing#split-text) to your flow.
+3. Add a [**Split Text** component](/split-text) to your flow.
This component splits text input into smaller chunks to be processed into embeddings.
4. Add a vector store component, such as the **Chroma DB** component, to your flow, and then configure the component to connect to your vector database.
@@ -38,11 +38,11 @@ This component stores the generated embeddings so they can be used for similarit
5. Connect the components:
- * Connect the **File** component's **Loaded Files** output to the **Split Text** component's **Data or DataFrame** input.
+ * Connect the **Read File** component's **Loaded Files** output to the **Split Text** component's **Data or DataFrame** input.
* Connect the **Split Text** component's **Chunks** output to the vector store component's **Ingest Data** input.
* Connect the **Embedding Model** component's **Embeddings** output to the vector store component's **Embedding** input.
-6. To query the vector store, add [**Chat Input and Output** components](/components-io#chat-io):
+6. To query the vector store, add [**Chat Input and Output** components](/chat-input-and-output):
* Connect the **Chat Input** component to the vector store component's **Search Query** input.
* Connect the vector store component's **Search Results** output to the **Chat Output** component.
diff --git a/docs/docs/Components/components-logic.mdx b/docs/docs/Components/components-logic.mdx
deleted file mode 100644
index ce5f9f1da2f9..000000000000
--- a/docs/docs/Components/components-logic.mdx
+++ /dev/null
@@ -1,278 +0,0 @@
----
-title: Logic
-slug: /components-logic
----
-
-import Icon from "@site/src/components/icon";
-import PartialParams from '@site/docs/_partial-hidden-params.mdx';
-
-Logic components provide functionality for routing, conditional processing, and flow management.
-
-## If-Else (conditional router) {#if-else}
-
-The **If-Else** component is a conditional router that routes messages by comparing two strings.
-It evaluates a condition by comparing two text inputs using the specified operator, and then routes the message to `true_result` or `false_result` depending on the evaluation.
-
-The component looks for single strings in the input (`input_text`) based on the selected operator and match text (`match_text`), but it can also search for multiple words by matching a regex.
-Available operators include:
-
-- **equals**: Exact match comparison
-- **not equals**: Inverse of exact match
-- **contains**: Checks if the `match_text` is found within `input_text`
-- **starts with**: Checks if `input_text` begins with `match_text`
-- **ends with**: Checks if `input_text` ends with `match_text`
-- **regex**: Matches on a case-sensitive pattern
-
-By default, all operators are case insensitive except **regex**.
-**regex** is always case sensitive, and you can enable case sensitivity for all other operators in the [If-Else parameters](#if-else-parameters).
-
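-The following sketch approximates the comparison logic; it isn't the component's actual code, and the operator names are the visual editor labels:
-
-```python
-import re
-
-def evaluate(input_text: str, match_text: str, operator: str, case_sensitive: bool = False) -> bool:
-    if operator == "regex":
-        # Regex matching is always case sensitive.
-        return re.search(match_text, input_text) is not None
-    if not case_sensitive:
-        input_text, match_text = input_text.lower(), match_text.lower()
-    checks = {
-        "equals": input_text == match_text,
-        "not equals": input_text != match_text,
-        "contains": match_text in input_text,
-        "starts with": input_text.startswith(match_text),
-        "ends with": input_text.endswith(match_text),
-    }
-    return checks[operator]
-```
-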
-### Use the If-Else component in a flow
-
-The following example uses the **If-Else** component to check incoming chat messages with regex matching, and then output a different response depending on whether the match evaluated to true or false.
-
-
-
-1. Add an **If-Else** component to your flow, and then configure it as follows:
-
- * **Text Input**: Connect the **Text Input** port to a **Chat Input** component or another `Message` input.
-
- If your input isn't in `Message` format, you can use another component to transform it, such as the [**Type Convert** component](/components-processing#type-convert) or [**Parser** component](/components-processing#parser).
- If your input isn't appropriate for `Message` format, consider using another component for conditional routing, such as the [**Data Operations** component](/components-processing#data-operations).
-
- * **Match Text**: Enter `.*(urgent|warning|caution).*` so the component looks for these values in incoming input. The regex match is case sensitive, so if you need to look for all permutations of `warning`, enter `warning|Warning|WARNING`.
-
- * **Operator**: Select **regex**.
-
- * **Case True**: In the [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **Case True** parameter, click **Close**, and then enter `New Message Detected` in the field.
-
- The **Case True** message is sent from the **True** output port when the match condition evaluates to true.
-
- No message is set for **Case False** so the component doesn't emit a message when the condition evaluates to false.
-
-2. Depending on what you want to happen when the outcome is **True**, add components to your flow to execute that logic:
-
- 1. Add a **Language Model**, **Prompt Template**, and **Chat Output** component to your flow.
-
- 2. In the **Language Model** component, enter your OpenAI API key or select a different provider and model.
-
- 3. Connect the **If-Else** component's **True** output port to the **Language Model** component's **Input** port.
-
- 4. In the **Prompt Template** component, enter instructions for the model when the evaluation is true, such as `Send a message that a new warning, caution, or urgent message was received`.
-
- 5. Connect the **Prompt Template** component to the **Language Model** component's **System Message** port.
-
- 6. Connect the **Language Model** component's output to the **Chat Output** component.
-
-3. Repeat the same process with another set of **Language Model**, **Prompt Template**, and **Chat Output** components for the **False** outcome.
-
- Connect the **If-Else** component's **False** output port to the second **Language Model** component's **Input** port.
- In the second **Prompt Template**, enter instructions for the model when the evaluation is false, such as `Send a message that a new low-priority message was received`.
-
-4. To test the flow, open the **Playground**, and then send the flow some messages with and without your regex strings.
-The chat output should reflect the instructions in your prompts based on the regex evaluation.
-
- ```text
- User: A new user was created.
-
- AI: A new low-priority message was received.
-
- User: Sign-in warning: new user locked out.
-
- AI: A new warning, caution, or urgent message was received. Please review it at your earliest convenience.
- ```
-
-### If-Else parameters
-
-
-
-| Name | Type | Description |
-|----------------|----------|-------------------------------------------------------------------|
-| input_text | String | Input parameter. The primary text input for the operation. |
-| match_text | String | Input parameter. The text to compare against. |
-| operator | Dropdown | Input parameter. The operator used to compare texts. Options include `equals`, `not equals`, `contains`, `starts with`, `ends with`, and `regex`. The default is `equals`. |
-| case_sensitive | Boolean | Input parameter. When `true`, the comparison is case sensitive. The default is `false`. This setting doesn't apply to regex comparisons. |
-| max_iterations | Integer | Input parameter. The maximum number of iterations allowed for the conditional router. The default is 10. |
-| default_route | Dropdown | Input parameter. The route to take when max iterations are reached. Options include `true_result` or `false_result`. The default is `false_result`. |
-| true_result | Message | Output parameter. The output produced when the condition is true. |
-| false_result | Message | Output parameter. The output produced when the condition is false. |
-
-## Loop
-
-The **Loop** component iterates over a list of inputs, passing individual items to other components attached at the **Item** output port until there are no items left to process.
-Then, the **Loop** component passes the aggregated result of all looping to the component connected to the **Done** port.
-
-### The looping process
-
-The **Loop** component is like a miniature flow within your flow.
-Here's a breakdown of the looping process:
-
-1. Accepts a list of [`Data`](/data-types#data) objects or a [`DataFrame`](/data-types#dataframe), such as a table loaded from a CSV file, through the **Loop** component's **Inputs** port.
-
-2. Splits the input into individual items. For example, a CSV file is broken down by rows.
-
-   Specifically, the **Loop** component repeatedly extracts items by the `text` key in the `Data` or `DataFrame` objects until there are no more items to extract.
-   Each `item` output is a `Data` object.
-
-3. Iterates over each `item` by passing them to the **Item** output port.
-
- This port connects to one or more components that perform actions on each item.
- The final component in the **Item** loop connects back to the **Loop** component's **Looping** port to process the next item.
-
- Only one component connects to the **Item** port, but you can pass the data through as many components as you need, as long as the last component in the chain connects back to the **Looping** port.
-
- The **If-Else** component isn't compatible with the **Loop** component.
- For more information, see [Conditional looping](#conditional-looping).
-
-4. After processing all items, the results are aggregated into a single `Data` object that is passed from the **Loop** component's **Done** port to the next component in the flow.
-
-The following simplified Python code summarizes how the **Loop** component works.
-This _isn't_ the actual component code; it is only meant to help you understand the general process.
-
-```python
-for i in input: # Receive input data as a list
- process_item(i) # Process each item through components connected at the Item port
- if has_more_items():
- continue # Loop back to Looping port to process the next item
- else:
- break # Exit the loop when no more items are left
-
-done = aggregate_results() # Compile all returned items
-
-print(done) # Send the aggregated results from the Done port to another component
-```
-
-### Loop example
-
-In the following example, the **Loop** component iterates over a CSV file until there are no rows left to process.
-In this case, the **Item** port passes each row to a **Type Convert** component to convert the row into a `Message` object, and then passes the `Message` to a **Structured Output** component to be processed into structured data that is passed back to the **Loop** component's **Looping** port.
-After processing all rows, the **Loop** component loads the aggregated list of structured data into a Chroma DB database through the **Chroma DB** component connected to the **Done** port.
-
-
-
-:::tip
-For more examples of the **Loop** component, try the **Research Translation Loop** template in Langflow, or see the video tutorial [Mastering the Loop Component & Agentic RAG in Langflow](https://www.youtube.com/watch?v=9Wx7WODSKTo).
-:::
-
-### Conditional looping
-
-The **If-Else** component isn't compatible with the **Loop** component.
-If you need conditional loop events, redesign your flow to process conditions before the loop.
-For example, if you are looping over a `DataFrame`, you could use multiple [**DataFrame Operations** components](/components-processing#dataframe-operations) to conditionally filter data, and then run separate loops on each set of filtered data.
-
-
-
-## Notify and Listen
-
-The **Notify** and **Listen** components are used together.
-
-The **Notify** component builds a notification from the current flow's context, including specific data content and a status identifier.
-
-The resulting notification is sent to the **Listen** component.
-The notification data can then be passed to other components in the flow, such as the **If-Else** component.
-
-## Run flow
-
-The **Run Flow** component runs another Langflow flow as a subprocess of the current flow.
-
-You can use this component to chain flows together, run flows conditionally, and attach flows to [**Agent** components](/components-agents) as [tools for agents](/agents-tools) to run as needed.
-
-When used with an agent, the `name` and `description` metadata that the agent uses to register the tool are created automatically.
-
-When you select a flow for the **Run Flow** component, it uses the target flow's graph structure to dynamically generate input and output fields on the **Run Flow** component.
-
-### Run Flow parameters
-
-
-
-| Name | Type | Description |
-|-------------------|----------|----------------------------------------------------------------|
-| flow_name_selected| Dropdown | Input parameter. The name of the flow to run. |
-| session_id | String | Input parameter. The session ID for the flow run, if you want to pass a custom session ID for the subflow. |
-| flow_tweak_data | Dict | Input parameter. Dictionary of tweaks to customize the flow's behavior. Available tweaks depend on the selected flow. |
-| dynamic inputs | Various | Input parameter. Additional inputs are generated based on the selected flow. |
-| run_outputs | A `List` of types (`Data`, `Message`, or `DataFrame`) | Output parameter. All outputs are generated from running the flow. |
-
-## Legacy Logic components
-
-import PartialLegacy from '@site/docs/_partial-legacy.mdx';
-
-
-
-The following Logic components are in legacy status:
-
-
-Condition
-
-As an alternative to this legacy component, see the [**If-Else** component](#if-else).
-
-The **Condition** component routes `Data` objects based on a condition applied to a specified key, including Boolean validation.
-It supports `true_output` and `false_output` for routing the results based on the condition evaluation.
-
-This component is useful in workflows that require conditional routing of complex data structures, enabling dynamic decision-making based on data content.
-
-It can process either a single `Data` object or a list of `Data` objects.
-The following actions occur when processing a list of `Data` objects:
-
-- Each object in the list is evaluated individually.
-- Objects meeting the condition go to `true_output`.
-- Objects not meeting the condition go to `false_output`.
-- If all objects go to one output, the other output is empty.
-
-The **Condition** component accepts the following parameters:
-
-| Name | Type | Description |
-|---------------|----------|---------------------------------------------|
-| data_input | Data | Input parameter. The Data object or list of Data objects to process. This input can handle both single items and lists. |
-| key_name | String | Input parameter. The name of the key in the Data object to check. |
-| operator | Dropdown | Input parameter. The operator to apply. Options: `equals`, `not equals`, `contains`, `starts with`, `ends with`, `boolean validator`. Default: `equals`. |
-| compare_value | String | Input parameter. The value to compare against. Not shown/used when operator is `boolean validator`. |
-
-The `operator` options have the following behaviors:
-
-- `equals`: Exact match comparison between the key's value and compare_value.
-- `not equals`: Inverse of exact match.
-- `contains`: Checks if compare_value is found within the key's value.
-- `starts with`: Checks if the key's value begins with compare_value.
-- `ends with`: Checks if the key's value ends with compare_value.
-- `boolean validator`: Treats the key's value as a Boolean. The following values are considered true:
- - Boolean `true`.
- - Strings: `true`, `1`, `yes`, `y`, `on` (case-insensitive)
- - Any other value is converted using Python's `bool()` function
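-
-A minimal sketch of this truthiness logic, not the component's actual code:
-
-```python
-def as_bool(value) -> bool:
-    if isinstance(value, bool):
-        return value
-    # The listed strings are treated as true, case-insensitively.
-    if isinstance(value, str) and value.strip().lower() in {"true", "1", "yes", "y", "on"}:
-        return True
-    # Anything else falls back to Python's built-in truthiness.
-    return bool(value)
-```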
-
-
-
-
-Pass
-
-As an alternative to this legacy component, use the [**If-Else** component](#if-else) to pass a message without modification.
-
-The **Pass** component forwards the input message without modification.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| input_message | Input Message | Input parameter. The message to forward. |
-| ignored_message | Ignored Message | Input parameter. A second message that is ignored. Used as a workaround for continuity. |
-| output_message | Output Message | Output parameter. The forwarded message from the input. |
-
-
-
-
-Flow As Tool
-
-This component constructed a tool from a function that ran a loaded flow.
-
-It was deprecated in Langflow version 1.1.2 and replaced by the [**Run Flow** component](#run-flow).
-
-
-
-
-Sub Flow
-
-This component integrated entire flows as components within a larger workflow.
-It dynamically generated inputs based on the selected flow and executed the flow with provided parameters.
-
-It was deprecated in Langflow version 1.1.2 and replaced by the [**Run Flow** component](#run-flow).
-
-
\ No newline at end of file
diff --git a/docs/docs/Components/components-models.mdx b/docs/docs/Components/components-models.mdx
index 68bd501c4cd0..7128ef3e44f1 100644
--- a/docs/docs/Components/components-models.mdx
+++ b/docs/docs/Components/components-models.mdx
@@ -43,7 +43,7 @@ The following example uses a language model component in a chatbot flow similar
6. Connect the **Prompt Template** component's output to the **Language Model** component's **System Message** input.
-7. Add [**Chat Input** and **Chat Output** components](/components-io#chat-io) to your flow.
+7. Add [**Chat Input** and **Chat Output** components](/chat-input-and-output) to your flow.
These components are required for direct chat interaction with an LLM.
8. Connect the **Chat Input** component to the **Language Model** component's **Input**, and then connect the **Language Model** component's **Message** output to the **Chat Output** component.
@@ -95,7 +95,7 @@ For example, if you are using the **Language Model** core component, you could t
Some components use a language model component to perform LLM-driven actions.
Typically, these components prepare data for further processing by downstream components, rather than emitting direct chat output.
-For an example, see the [**Smart Function** component](/components-processing#smart-transform).
+For an example, see the [**Smart Transform** component](/smart-transform).
A component must accept a `LanguageModel` input to use a language model component as a driver, and you must set the language model component's output type to `LanguageModel`.
For more information, see [Language Model output types](#language-model-output-types).
@@ -155,10 +155,10 @@ Language model components, including the core component and bundled components,
* **Model Response**: The default output type emits the model's generated response as [`Message` data](/data-types#message).
Use this output type when you want the typical LLM interaction where the LLM produces a text response based on given input.
-* **Language Model**: Change the language model component's output type to [`LanguageModel`](/data-types#languagemodel) when you need to attach an LLM to another component in your flow, such as an **Agent** or **Smart Function** component.
+* **Language Model**: Change the language model component's output type to [`LanguageModel`](/data-types#languagemodel) when you need to attach an LLM to another component in your flow, such as an **Agent** or **Smart Transform** component.
With this configuration, the language model component supports an action completed by another component, rather than a direct chat interaction.
- For an example, the **Smart Function** component uses an LLM to create a function from natural language input.
+ For an example, the **Smart Transform** component uses an LLM to create a function from natural language input.
## Additional language models
diff --git a/docs/docs/Components/components-processing.mdx b/docs/docs/Components/components-processing.mdx
deleted file mode 100644
index 9b2a0330e337..000000000000
--- a/docs/docs/Components/components-processing.mdx
+++ /dev/null
@@ -1,1228 +0,0 @@
----
-title: Processing components
-slug: /components-processing
----
-
-import Icon from "@site/src/components/icon";
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import PartialParams from '@site/docs/_partial-hidden-params.mdx';
-import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
-
-Processing components process and transform data within a flow.
-They have many uses, including:
-
-* Feed instructions and context to your LLMs and agents with the [**Prompt Template** component](#prompt-template).
-* Extract content from larger chunks of data with a [**Parser** component](#parser).
-* Filter or transform data with natural language instructions using the [**Smart Function** component](#smart-transform).
-* Save data to your local machine with the [**Save File** component](#save-file).
-* Transform data into a different data type with the [**Type Convert** component](#type-convert) to pass it between incompatible components.
-
-## Prompt Template
-
-See [**Prompt Template** component](/components-prompts).
-
-## Batch Run
-
-The **Batch Run** component runs a language model over _each row of one text column_ in a [`DataFrame`](/data-types#dataframe), and then returns a new `DataFrame` with the original text and an LLM response.
-The output contains the following columns:
-
-* `text_input`: The original text from the input `DataFrame`
-* `model_response`: The model's response for each input
-* `batch_index`: The 0-indexed processing order for all rows in the `DataFrame`
-* `metadata` (optional): Additional information about the processing
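-
-For example, a single row of the output `DataFrame` might look like the following (illustrative values only):
-
-```python
-# One hypothetical output row from a Batch Run over a "name" column
-row = {
-    "text_input": "Leanne Graham",  # Original text from the input DataFrame
-    "model_response": "Business card: Leanne Graham, ...",  # LLM response
-    "batch_index": 0,  # 0-indexed processing order
-}
-```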
-
-### Use the Batch Run component in a flow
-
-If you pass the **Batch Run** output to a [**Parser** component](/components-processing#parser), you can use variables in the parsing template to reference these keys, such as `{text_input}` and `{model_response}`.
-This is demonstrated in the following example.
-
-
-
-1. Connect any language model component to a **Batch Run** component's **Language model** port.
-
-2. Connect `DataFrame` output from another component to the **Batch Run** component's **DataFrame** input.
-For example, you could connect a **File** component with a CSV file.
-
-3. In the **Batch Run** component's **Column Name** field, enter the name of the column in the incoming `DataFrame` that contains the text to process.
-For example, if you want to extract text from a `name` column in a CSV file, enter `name` in the **Column Name** field.
-
-4. Connect the **Batch Run** component's **Batch Results** output to a **Parser** component's **DataFrame** input.
-
-5. Optional: In the **Batch Run** [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **System Message** parameter, click **Close**, and then enter an instruction for how you want the LLM to process each cell extracted from the file.
-For example, `Create a business card for each name.`
-
-6. In the **Parser** component's **Template** field, enter a template for processing the **Batch Run** component's new `DataFrame` columns (`text_input`, `model_response`, and `batch_index`):
-
-    For example, this template uses three columns from the resulting post-batch `DataFrame`:
-
- ```text
- record_number: {batch_index}, name: {text_input}, summary: {model_response}
- ```
-
-7. To test the processing, click the **Parser** component, click **Run component**, and then click **Inspect output** to view the final `DataFrame`.
-
- You can also connect a **Chat Output** component to the **Parser** component if you want to see the output in the **Playground**.
-
-### Batch Run parameters
-
-
-
-| Name | Type | Description |
-|------|------|-------------|
-| model | HandleInput | Input parameter. Connect the 'Language Model' output from a language model component. Required. |
-| system_message | MultilineInput | Input parameter. A multi-line system instruction for all rows in the DataFrame. |
-| df | DataFrameInput | Input parameter. The DataFrame whose column is treated as text messages, as specified by 'column_name'. Required. |
-| column_name | MessageTextInput | Input parameter. The name of the DataFrame column to treat as text messages. If empty, all columns are formatted in TOML. |
-| output_column_name | MessageTextInput | Input parameter. Name of the column where the model's response is stored. Default=`model_response`. |
-| enable_metadata | BoolInput | Input parameter. If `True`, add metadata to the output DataFrame. |
-| batch_results | DataFrame | Output parameter. A DataFrame with all original columns plus the model's response column. |
-
-## Data Operations
-
-The **Data Operations** component performs operations on [`Data`](/data-types#data) objects, including extracting, filtering, and editing keys and values in the `Data`.
-For all options, see [Available data operations](#available-data-operations).
-The output is a new `Data` object containing the modified data after running the selected operation.
-
-### Use the Data Operations component in a flow
-
-The following example demonstrates how to use a **Data Operations** component in a flow using data from a webhook payload:
-
-1. Create a flow with a **Webhook** component and a **Data Operations** component, and then connect the **Webhook** component's output to the **Data Operations** component's **Data** input.
-
- All operations in the **Data Operations** component require at least one `Data` input from another component.
- If the preceding component doesn't produce `Data` output, you can use another component, such as the **Type Convert** component, to reformat the data before passing it to the **Data Operations** component.
- Alternatively, you could consider using a component that is designed to process the original data type, such as the **Parser** or **DataFrame Operations** components.
-
-2. In the **Operations** field, select the operation you want to perform on the incoming `Data`.
-For this example, select the **Select Keys** operation.
-
- :::tip
- You can select only one operation.
- If you need to perform multiple operations on the data, you can chain multiple **Data Operations** components together to execute each operation in sequence.
- For more complex multi-step operations, consider using a component like the **Smart Function** component.
- :::
-
-3. Under **Select Keys**, add keys for `name`, `username`, and `email`.
-Click **Add more** to add a field for each key.
-
- For this example, assume that the webhook will receive consistent payloads that always contain `name`, `username`, and `email` keys.
- The **Select Keys** operation extracts the value of these keys from each incoming payload.
-
-4. Optional: If you want to view the output in the **Playground**, connect the **Data Operations** component's output to a **Chat Output** component.
-
- 
-
-5. To test the flow, send the following request to your flow's webhook endpoint.
-For more information about the webhook endpoint, see [Trigger flows with webhooks](/webhook).
-
- ```bash
- curl -X POST "http://$LANGFLOW_SERVER_URL/api/v1/webhook/$FLOW_ID" \
- -H "Content-Type: application/json" \
- -H "x-api-key: $LANGFLOW_API_KEY" \
- -d '{
- "id": 1,
- "name": "Leanne Graham",
- "username": "Bret",
- "email": "Sincere@april.biz",
- "address": {
- "street": "Main Street",
- "suite": "Apt. 556",
- "city": "Springfield",
- "zipcode": "92998-3874",
- "geo": {
- "lat": "-37.3159",
- "lng": "81.1496"
- }
- },
- "phone": "1-770-736-8031 x56442",
- "website": "hildegard.org",
- "company": {
- "name": "Acme-Corp",
- "catchPhrase": "Multi-layered client-server neural-net",
- "bs": "harness real-time e-markets"
- }
- }'
- ```
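-
-    If you prefer Python over curl, the following is an equivalent request using the `requests` library (assumed to be installed; the payload is truncated here for brevity):
-
-    ```python
-    import os
-    import requests
-
-    url = f"http://{os.environ['LANGFLOW_SERVER_URL']}/api/v1/webhook/{os.environ['FLOW_ID']}"
-    payload = {"id": 1, "name": "Leanne Graham", "username": "Bret", "email": "Sincere@april.biz"}
-    resp = requests.post(url, headers={"x-api-key": os.environ["LANGFLOW_API_KEY"]}, json=payload)
-    resp.raise_for_status()
-    ```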
-
-6. To view the `Data` resulting from the **Select Keys** operation, do one of the following:
-
- * If you attached a **Chat Output** component, open the **Playground** to see the result as a chat message.
- * Click **Inspect output** on the **Data Operations** component.
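-
-    With the preceding example payload, the **Select Keys** operation produces a `Data` object similar to the following (illustrative):
-
-    ```python
-    {"name": "Leanne Graham", "username": "Bret", "email": "Sincere@april.biz"}
-    ```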
-
-### Data Operations parameters
-
-Many parameters are conditional based on the selected **Operation** (`operation`).
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| data | Data | Input parameter. The `Data` object to operate on. |
-| operation | Operation | Input parameter. The operation to perform on the data. See [Available data operations](#available-data-operations). |
-| select_keys_input | Select Keys | Input parameter. A list of keys to select from the data. |
-| filter_key | Filter Key | Input parameter. The key to filter by. |
-| operator | Comparison Operator | Input parameter. The operator to apply for comparing values. |
-| filter_values | Filter Values | Input parameter. A list of values to filter by. |
-| append_update_data | Append or Update | Input parameter. The data to append or update the existing data with. |
-| remove_keys_input | Remove Keys | Input parameter. A list of keys to remove from the data. |
-| rename_keys_input | Rename Keys | Input parameter. A list of keys to rename in the data. |
-
-#### Available data operations
-
-Options for the `operation` input parameter are as follows.
-All operations act on an incoming `Data` object.
-
-| Name | Required Inputs | Process |
-|-----------|----------------|-------------|
-| Select Keys | `select_keys_input` | Selects specific keys from the data. |
-| Literal Eval | None | Evaluates string values as Python literals. |
-| Combine | None | Combines multiple data objects into one. |
-| Filter Values | `filter_key`, `filter_values`, `operator` | Filters data based on key-value pair. |
-| Append or Update | `append_update_data` | Adds or updates key-value pairs. |
-| Remove Keys | `remove_keys_input` | Removes specified keys from the data. |
-| Rename Keys | `rename_keys_input` | Renames keys in the data. |
-
-## DataFrame Operations
-
-The **DataFrame Operations** component performs operations on [`DataFrame`](/data-types#dataframe) (table) rows and columns, including schema changes, record changes, sorting, and filtering.
-For all options, see [DataFrame Operations parameters](#dataframe-operations-parameters).
-
-The output is a new `DataFrame` containing the modified data after running the selected operation.
-
-### Use the DataFrame Operations component in a flow
-
-The following steps explain how to configure a **DataFrame Operations** component in a flow.
-You can follow along with an example or use your own flow.
-The only requirement is that the preceding component must create `DataFrame` output that you can pass to the **DataFrame Operations** component.
-
-1. Create a new flow or use an existing flow.
-
-
- Example: API response extraction flow
-
- The following example flow uses five components to extract `Data` from an API response, transform it to a `DataFrame`, and then perform further processing on the tabular data using a **DataFrame Operations** component.
- The sixth component, **Chat Output**, is optional in this example.
- It only serves as a convenient way for you to view the final output in the **Playground**, rather than inspecting the component logs.
-
- 
-
- If you want to use this example to test the **DataFrame Operations** component, do the following:
-
- 1. Create a flow with the following components:
-
- * **API Request**
- * **Language Model**
- * **Smart Function**
- * **Type Convert**
-
- 2. Configure the [**Smart Function** component](#smart-transform) and its dependencies:
-
- * **API Request**: Configure the [**API Request** component](/components-data#api-request) to get JSON data from an endpoint of your choice, and then connect the **API Response** output to the **Smart Function** component's **Data** input.
- * **Language Model**: Select your preferred provider and model, and then enter a valid API key.
- Change the output to **Language Model**, and then connect the `LanguageModel` output to the **Smart Function** component's **Language Model** input.
- * **Smart Function**: In the **Instructions** field, enter natural language instructions to extract data from the API response.
- Your instructions depend on the response content and desired outcome.
- For example, if the response contains a large `result` field, you might provide instructions like `explode the result field out into a Data object`.
-
- 3. Convert the **Smart Function** component's `Data` output to `DataFrame`:
-
- 1. Connect the **Filtered Data** output to the **Type Convert** component's **Data** input.
- 2. Set the **Type Convert** component's **Output Type** to **DataFrame**.
-
- Now the flow is ready for you to add the **DataFrame Operations** component.
-
-
-
-2. Add a **DataFrame Operations** component to the flow, and then connect `DataFrame` output from another component to the **DataFrame** input.
-
- All operations in the **DataFrame Operations** component require at least one `DataFrame` input from another component.
- If a component doesn't produce `DataFrame` output, you can use another component, such as the **Type Convert** component, to reformat the data before passing it to the **DataFrame Operations** component.
- Alternatively, you could consider using a component that is designed to process the original data type, such as the **Parser** or **Data Operations** components.
-
- If you are following along with the example flow, connect the **Type Convert** component's **DataFrame Output** port to the **DataFrame** input.
-
-3. In the **Operations** field, select the operation you want to perform on the incoming `DataFrame`.
-For example, the **Filter** operation filters the rows based on a specified column and value.
-
- :::tip
- You can select only one operation.
- If you need to perform multiple operations on the data, you can chain multiple **DataFrame Operations** components together to execute each operation in sequence.
- For more complex multi-step operations, like dramatic schema changes or pivots, consider using an LLM-powered component, like the **Structured Output** or **Smart Function** component, as a replacement or preparation for the **DataFrame Operations** component.
- :::
-
- If you're following along with the example flow, select any operation that you want to apply to the data that was extracted by the **Smart Function** component.
-    To view the contents of the incoming `DataFrame`, click **Run component** on the **Type Convert** component, and then click **Inspect output**.
- If the `DataFrame` seems malformed, click **Inspect output** on each upstream component to determine where the error occurs, and then modify your flow's configuration as needed.
- For example, if the **Smart Function** component didn't extract the expected fields, modify your instructions or verify that the given fields are present in the **API Response** output.
-
-4. Configure the operation's parameters.
-The specific parameters depend on the selected operation.
-For example, if you select the **Filter** operation, you must define a filter condition using the **Column Name**, **Filter Value**, and **Filter Operator** parameters.
-For more information, see [DataFrame Operations parameters](#dataframe-operations-parameters).
-
-5. To test the flow, click **Run component** on the **DataFrame Operations** component, and then click **Inspect output** to view the new `DataFrame` created from the **Filter** operation.
-
- If you want to view the output in the **Playground**, connect the **DataFrame Operations** component's output to a **Chat Output** component, rerun the **DataFrame Operations** component, and then click **Playground**.
-
-For another example, see [Conditional looping](/components-logic#conditional-looping).
-
-### DataFrame Operations parameters
-
-Most **DataFrame Operations** parameters are conditional because they only apply to specific operations.
-
-The only permanent parameters are **DataFrame** (`df`), which is the `DataFrame` input, and **Operation** (`operation`), which is the operation to perform on the `DataFrame`.
-Once you select an operation, the conditional parameters for that operation appear on the **DataFrame Operations** component.
-
-
-
-
-The **Add Column** operation allows you to add a new column to the `DataFrame` with a constant value.
-
-The parameters are **New Column Name** (`new_column_name`) and **New Column Value** (`new_column_value`).
-
-
-
-
-The **Drop Column** operation allows you to remove a column from the `DataFrame`, specified by **Column Name** (`column_name`).
-
-
-
-
-The **Filter** operation allows you to filter the `DataFrame` based on a specified condition.
-The output is a `DataFrame` containing only the rows that matched the filter condition.
-
-Provide the following parameters:
-
-* **Column Name** (`column_name`): The name of the column to filter on.
-* **Filter Value** (`filter_value`): The value to filter on.
-* **Filter Operator** (`filter_operator`): The operator to use for filtering, one of `equals` (default), `not equals`, `contains`, `starts with`, `ends with`, `greater than`, or `less than`.
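-
-As a loose analogy (an assumption for illustration, not the component's source), the **Filter** operation behaves like a pandas boolean mask:
-
-```python
-import pandas as pd
-
-df = pd.DataFrame([{"status": "active"}, {"status": "inactive"}])
-filtered = df[df["status"] == "active"]  # "equals" operator on the "status" column
-```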
-
-
-
-
-The **Head** operation allows you to retrieve the first `n` rows of the `DataFrame`, where `n` is set in **Number of Rows** (`num_rows`).
-The default is `5`.
-
-The output is a `DataFrame` containing only the selected rows.
-
-
-
-
-The **Rename Column** operation allows you to rename an existing column in the `DataFrame`.
-
-The parameters are **Column Name** (`column_name`), which is the current name, and **New Column Name** (`new_column_name`).
-
-
-
-
-The **Replace Value** operation allows you to replace values in a specific column of the `DataFrame`.
-This operation replaces a target value with a new value.
-All cells matching the target value are replaced with the new value in the new `DataFrame` output.
-
-Provide the following parameters:
-
-* **Column Name** (`column_name`): The name of the column to modify.
-* **Value to Replace** (`replace_value`): The value that you want to replace.
-* **Replacement Value** (`replacement_value`): The new value to use.
-
-
-
-
-The **Select Columns** operation allows you to select one or more specific columns from the `DataFrame`.
-
-Provide a list of column names in **Columns to Select** (`columns_to_select`).
-In the visual editor, click **Add More** to add multiple fields, and then enter one column name in each field.
-
-The output is a `DataFrame` containing only the specified columns.
-
-
-
-
-The **Sort** operation allows you to sort the `DataFrame` on a specific column in ascending or descending order.
-
-Provide the following parameters:
-
-* **Column Name** (`column_name`): The name of the column to sort on.
-* **Sort Ascending** (`ascending`): Whether to sort in ascending or descending order. If enabled (`true`), sorts in ascending order; if disabled (`false`), sorts in descending order. Default: Enabled (`true`)
-
-
-
-
-The **Tail** operation allows you to retrieve the last `n` rows of the `DataFrame`, where `n` is set in **Number of Rows** (`num_rows`).
-The default is `5`.
-
-The output is a `DataFrame` containing only the selected rows.
-
-
-
-
-The **Drop Duplicates** operation removes rows from the `DataFrame` by identifying all duplicate values within a single column.
-
-The only parameter is the **Column Name** (`column_name`).
-
-When the flow runs, all rows with duplicate values in the given column are removed.
-The output is a `DataFrame` containing all columns from the original `DataFrame`, but only rows with non-duplicate values.
-
-
-
-
-## LLM Router
-
-The **LLM Router** component routes requests to the most appropriate LLM based on [OpenRouter](https://openrouter.ai/docs/quickstart) model specifications.
-
-To use the component in a flow, you connect multiple language model components to the **LLM Router** component.
-One model is the judge LLM that analyzes input messages to understand the evaluation context, selects the most appropriate model from the other attached LLMs, and then routes the input to the selected model.
-The selected model processes the input, and then returns the generated response.
-
-The following example flow has three language model components.
-One is the judge LLM, and the other two are in the LLM pool for request routing.
-The input and output components create a seamless chat interaction where you send a message and receive a response without any user awareness of the underlying routing.
-
-
-
-### LLM Router parameters
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| `models` | **Language Models** | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from multiple [language model components](/components-models) to create a pool of models. The `judge_llm` selects models from this pool when routing requests. The first model you connect is the default model if there is a problem with model selection or routing. |
-| `input_value` | **Input** | Input parameter. The incoming query to be routed to the model selected by the judge LLM. |
-| `judge_llm` | **Judge LLM** | Input parameter. Connect `LanguageModel` output from _one_ **Language Model** component to serve as the judge LLM for request routing. |
-| `optimization` | **Optimization** | Input parameter. Set a preferred characteristic for model selection by the judge LLM. The options are `quality` (highest response quality), `speed` (fastest response time), `cost` (most cost-effective model), or `balanced` (equal weight for quality, speed, and cost). Default: `balanced` |
-| `use_openrouter_specs` | **Use OpenRouter Specs** | Input parameter. Whether to fetch model specifications from the OpenRouter API. If `false`, only the model name is provided to the judge LLM. Default: Enabled (`true`) |
-| `timeout` | **API Timeout** | Input parameter. Set a timeout duration in seconds for API requests made by the router. Default: `10` |
-| `fallback_to_first` | **Fallback to First Model** | Input parameter. Whether to use the first LLM in `models` as a backup if routing fails to reach the selected model. Default: Enabled (`true`) |
-
-### LLM Router outputs
-
-The **LLM Router** component provides three output options.
-You can set the desired output type near the component's output port.
-
-* **Output**: A `Message` containing the response to the original query as generated by the selected LLM.
-Use this output for regular chat interactions.
-
-* **Selected Model Info**: A `Data` object containing information about the selected model, such as its name and version.
-
-* **Routing Decision**: A `Message` containing the judge model's reasoning for selecting a particular model, including input query length and number of models considered.
-For example:
-
- ```text
- Model Selection Decision:
- - Selected Model Index: 0
- - Selected Langflow Model Name: gpt-4o-mini
- - Selected API Model ID (if resolved): openai/gpt-4o-mini
- - Optimization Preference: cost
- - Input Query Length: 27 characters (~5 tokens)
- - Number of Models Considered: 2
- - Specifications Source: OpenRouter API
- ```
-
- This is useful for debugging if you feel the judge model isn't selecting the best model.
-
-## Parser {#parser}
-
-The **Parser** component extracts text from structured data (`DataFrame` or `Data`) using a template or direct stringification.
-The output is a `Message` containing the parsed text.
-
-This is a versatile component for data extraction and manipulation in your flows.
-For examples of **Parser** components in flows, see the following:
-
-* [**Batch Run** component example](#batch-run)
-* [**Structured Output** component example](#structured-output)
-* **Financial Report Parser** template
-* [Trigger flows with webhooks](/webhook)
-* [Create a vector RAG chatbot](/chat-with-rag)
-
-
-
-### Parsing modes
-
-The **Parser** component has two modes: **Parser** and **Stringify**.
-
-
-
-
-In **Parser** mode, you create a template for text output that can include literal strings and variables for extracted keys.
-
-Use curly braces to define variables anywhere in the template.
-Variables must match keys in the `DataFrame` or `Data` input, such as column names.
-For example, `{name}` extracts the value of a `name` key.
-For more information about the content and structure of `DataFrame` and `Data` objects, see [Langflow data types](/data-types).
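-
-The substitution is roughly analogous to Python's built-in `str.format` applied to each row or item (an analogy only, not the component's implementation):
-
-```python
-# Rough analogy: the template is filled from each row's keys
-template = "{name} can be reached at {email}."
-row = {"name": "Charlie Lastname", "email": "charlie.lastname@example.com"}
-print(template.format(**row))
-# Charlie Lastname can be reached at charlie.lastname@example.com
-```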
-
-
-
-When the flow runs, the **Parser** component iterates over the input, producing a `Message` for each parsed item.
-For example, parsing a `DataFrame` creates a `Message` for each row, populated with the unique values from that row.
-
-
-Employee summary template
-
-This example template extracts employee data into a natural language summary about an employee's hire date and current role:
-
-```text
-{employee_first_name} {employee_last_name} was hired on {start_date}.
-Their current position is {job_title} ({grade}).
-```
-
-The resulting `Message` output replaces the variables with the corresponding extracted values.
-For example:
-
-```text
-Renlo Kai was hired on 11-July-2017.
-Their current position is Software Engineer (Principal).
-```
-
-
-
-
-Employee profile template
-
-This example template uses Markdown syntax and extracted employee data to create an employee profile:
-
-```text
-# Employee Profile
-## Personal Information
-- **Name:** {name}
-- **ID:** {id}
-- **Email:** {email}
-```
-
-When the flow runs, the **Parser** component iterates over each row of the `DataFrame`, populating the template's variables with the appropriate extracted values.
-The resulting text for each row is output as a [`Message`](/data-types#message).
-
-
-
-The following parameters are available in **Parser** mode.
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| input_data | Data or DataFrame | Input parameter. The `Data` or `DataFrame` input to parse. |
-| pattern | Template | Input parameter. The formatting template using plaintext and variables for keys (`{KEY_NAME}`). See the preceding examples for more information. |
-| sep | Separator | Input parameter. A string defining the separator for rows or lines. Default: `\n` (new line). |
-| clean_data | Clean Data | Input parameter. Whether to remove empty rows and lines in each cell or key of the `DataFrame` or `Data` input. Default: Enabled (`true`) |
-
-
-
-
-Use **Stringify** mode to convert the entire input directly to text.
-This mode doesn't support templates or key selection.
-
-The following parameters are available in **Stringify** mode.
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| input_data | Data or DataFrame | Input parameter. The `Data` or `DataFrame` input to parse. |
-| sep | Separator | Input parameter. A string defining the separator for rows or lines. Default: `\n` (new line). |
-| clean_data | Clean Data | Input parameter. Whether to remove empty rows and lines in each cell or key of the `DataFrame` or `Data` input. Default: Enabled (`true`) |
-
-
-
-
-### Test and troubleshoot parsed text
-
-To test the **Parser** component, click **Run component**, and then click **Inspect output** to see the `Message` output with the parsed text.
-You can also connect a **Chat Output** component if you want to view the output in the **Playground**.
-
-If the `Message` output from the **Parser** component has empty or unexpected values, there might be a mapping error between the input and the parsing mode, the input might contain empty values, or the input might not be suitable for plaintext extraction.
-
-For example, assume you use the following template to parse a `DataFrame`:
-
-```text
-{employee_first_name} {employee_last_name} is a {job_title} ({grade}).
-```
-
-The following `Message` could result from parsing a row where `employee_first_name` was empty and `grade` was `null`:
-
-```text
- Smith is a Software Engineer (null).
-```
-
-To troubleshoot missing or unexpected values, you can do the following:
-
-* Make sure the variables in your template map to keys in the incoming `Data` or `DataFrame`.
-To see the data being passed directly to the **Parser** component, click **Inspect output** on the component that is sending data to the **Parser** component.
-
-* Check the source data for missing or incorrect values.
-There are several ways you can address these inconsistencies:
-
- * Rectify the source data directly.
- * Use other components to amend or filter anomalies before passing the data to the **Parser** component.
- There are many components you can use for this depending on your goal, such as the **Data Operations**, **Structured Output**, and **Smart Function** components.
- * Enable the **Parser** component's **Clean Data** parameter to skip empty rows or lines.
-
-## Python Interpreter
-
-This component allows you to execute Python code with imported packages.
-
-The **Python Interpreter** component can only import packages that are already installed in your Langflow environment.
-If you encounter an `ImportError` when trying to use a package, you need to install it first.
-
-To install custom packages, see [Install custom dependencies](/install-custom-dependencies).
-
-### Use the Python Interpreter in a flow
-
-1. In the **Global Imports** field, add the packages you want to import as a comma-separated list, such as `math,pandas`.
-At least one import is required.
-2. In the **Python Code** field, enter the Python code you want to execute. Use `print()` to see the output.
-3. Optional: Enable **Tool Mode**, and then connect the **Python Interpreter** component to an **Agent** component as a tool.
-For example, connect a **Python Interpreter** component and a [**Calculator** component](/components-helpers#calculator) as tools for an **Agent** component, and then test how it chooses different tools to solve math problems.
-
-4. Ask the agent a simple math question, such as `what is 2+5?`.
-The **Calculator** tool can add, subtract, multiply, divide, or perform exponentiation.
-The agent executes the `evaluate_expression` tool to correctly answer the question.
-
-Result:
-```text
-Executed evaluate_expression
-Input:
-{
- "expression": "2+5"
-}
-Output:
-{
- "result": "7"
-}
-```
-
-5. Give the agent complete Python code.
-This example creates a pandas DataFrame with the imported `pandas` package, and then returns the square root of the mean of the squares.
-
-```python
-import pandas as pd
-import math
-
-# Create a simple DataFrame
-df = pd.DataFrame({
- 'numbers': [1, 2, 3, 4, 5],
- 'squares': [x**2 for x in range(1, 6)]
-})
-
-# Calculate the square root of the mean
-result = math.sqrt(df['squares'].mean())
-print(f"Square root of mean squares: {result}")
-```
-
-The agent correctly chooses the `run_python_repl` tool to solve the problem.
-
-Result:
-```text
-Executed run_python_repl
-
-Input:
-
-{
- "python_code": "import pandas as pd\nimport math\n\n# Create a simple DataFrame\ndf = pd.DataFrame({\n 'numbers': [1, 2, 3, 4, 5],\n 'squares': [x**2 for x in range(1, 6)]\n})\n\n# Calculate the square root of the mean\nresult = math.sqrt(df['squares'].mean())\nprint(f\"Square root of mean squares: {result}\")"
-}
-Output:
-
-{
- "result": "Square root of mean squares: 3.3166247903554"
-}
-```
-
-If you don't include the package imports in the chat, the agent can still create the table using `pd.DataFrame`, because the `pandas` package is imported globally by the **Python Interpreter** component in the **Global Imports** field.
-
-### Python Interpreter parameters
-
-| Name | Type | Description |
-|------|------|-------------|
-| global_imports | String | Input parameter. A comma-separated list of modules to import globally, such as `math,pandas,numpy`. |
-| python_code | Code | Input parameter. The Python code to execute. Only modules specified in Global Imports can be used. |
-| results | Data | Output parameter. The output of the executed Python code, including any printed results or errors. |
-
-## Save File
-
-The **Save File** component creates a file containing data produced by another component.
-Several file formats are supported, and you can store files in [Langflow storage](/memory) or the local file system.
-
-To configure the **Save File** component and use it in a flow, do the following:
-
-1. Connect [`DataFrame`](/data-types#dataframe), [`Data`](/data-types#data), or [`Message`](/data-types#message) output from another component to the **Save File** component's **Input** port.
-
- You can connect the same output to multiple **Save File** components if you want to create multiple files, save the data in different file formats, or save files to multiple locations.
-
-2. In **File Name**, enter a file name and an optional path.
-
- The **File Name** parameter controls where the file is saved.
- It can contain a file name or an entire file path:
-
-    * **Default location**: If you only provide a file name, then the file is stored in the Langflow data directory. For example, `~/Library/Caches/langflow/data` on macOS.
-
- * **Subdirectory**: To store files in subdirectories, add the path to the **File Name** parameter.
- If a given subdirectory doesn't already exist, Langflow automatically creates it.
- For example, `files/my_file` creates `my_file` in `/data/files`, and it creates the `files` subdirectory if it doesn't already exist.
-
- * **Absolute or relative path**: To store files elsewhere in your environment or local file storage, provide the absolute or relative path to the desired location.
- For example, `~/Desktop/my_file` saves `my_file` to the desktop.
-
- Don't include an extension in the file name.
- If you do, the extension is treated as part of the file name; it has no impact on the **File Format** parameter.
-
-3. In the [component's header menu](/concepts-components#component-menus), click **Controls**, select the desired file format, and then click **Close**.
-
- The available **File Format** options depend on the input data type:
-
- * `DataFrame` can be saved to CSV (default), Excel (requires `openpyxl` [custom dependency](/install-custom-dependencies)), JSON (fallback default), or Markdown.
-
- * `Data` can be saved to CSV, Excel (requires `openpyxl` [custom dependency](/install-custom-dependencies)), JSON (default), or Markdown.
-
- * `Message` can be saved to TXT, JSON (default), or Markdown.
-
- :::warning Overwrites allowed
- If you have multiple **Save File** components, in one or more flows, with the same file name, path, and extension, the file contains the data from the most recent run only.
- Langflow doesn't block overwrites if a matching file already exists.
- To avoid unintended overwrites, use unique file names and paths.
- :::
-
-4. To test the **Save File** component, click **Run component**, and then click **Inspect output** to get the filepath where the file was saved.
-
- The component's literal output is a `Message` containing the original data type, the file name and extension, and the absolute filepath to the file based on the **File Name** parameter.
- For example:
-
- ```text
- DataFrame saved successfully as 'my_file.csv' at /Users/user.name/Library/Caches/langflow/data/my_file.csv
- ```
-
- If the **File Name** contains a subdirectory or other non-default path, this is reflected in the `Message` output.
- For example, a CSV file with the file name `~/Desktop/my_file` could produce the following output:
-
- ```text
- DataFrame saved successfully as '/Users/user.name/Desktop/my_file.csv' at /Users/user.name/Desktop/my_file.csv
- ```
-
-
-5. Optional: If you want to use the saved file in a flow, you must use an API call or another component to retrieve the file from the given filepath.
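-
-    For example, a downstream script could read a saved CSV back with pandas (illustrative; use the filepath reported in the component's `Message` output):
-
-    ```python
-    import pandas as pd
-
-    # Example path from the Save File component's output message
-    df = pd.read_csv("/Users/user.name/Library/Caches/langflow/data/my_file.csv")
-    print(df.head())
-    ```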
-
-## Smart Function {#smart-transform}
-
-In Langflow version 1.5, this component was renamed from **Lambda Filter** to **Smart Function**.
-
-The **Smart Function** component uses an LLM to generate a Lambda function to filter or transform structured data based on natural language instructions.
-You must connect this component to a [language model component](/components-models), which is used to generate a function based on the natural language instructions you provide in the **Instructions** parameter.
-The LLM runs the function against the data input, and then outputs the results as [`Data`](/data-types#data).
-
-:::tip
-Provide brief, clear instructions, focusing on the desired outcome or specific actions, such as `Filter the data to only include items where the 'status' is 'active'`.
-One sentence or less is preferred because end punctuation, like periods, can cause errors or unexpected behavior.
-
-If you need to provide more detailed instructions that aren't directly relevant to the Lambda function, you can enter them in the **Language Model** component's **Input** field or through a **Prompt Template** component.
-:::
-
-The following example uses the **API Request** endpoint to pass JSON data from the `https://jsonplaceholder.typicode.com/users` endpoint to the **Smart Function** component.
-Then, the **Smart Function** component passes the data and the instruction `extract emails` to the attached **Language Model** component.
-From there, the LLM generates a filter function that extracts email addresses from the JSON data, returning the filtered data as chat output.
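-
-For the `extract emails` instruction, the generated function might resemble the following (hypothetical; the LLM writes the actual function at runtime):
-
-```python
-# Hypothetical LLM-generated Lambda function for "extract emails"
-lambda data: [item["email"] for item in data if "email" in item]
-```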
-
-
-
-### Smart Function parameters
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| data | Data | Input parameter. The structured data to filter or transform using a Lambda function. |
-| llm | Language Model | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component. |
-| filter_instruction | Instructions | Input parameter. The natural language instructions for how to filter or transform the data. The LLM uses these instructions to create a Lambda function. |
-| sample_size | Sample Size | Input parameter. For large datasets, the number of characters to sample from the dataset head and tail. Only applied if the dataset meets or exceeds `max_size`. Default: `1000`. |
-| max_size | Max Size | Input parameter. The number of characters for the dataset to be considered large, which triggers sampling by the `sample_size` value. Default: `30000`. |
-
-## Split Text
-
-The **Split Text** component splits data into chunks based on parameters like chunk size and separator.
-It is often used to chunk data to be tokenized and embedded into vector databases.
-For examples, see [Use embedding model components in a flow](/components-embedding-models#use-embedding-model-components-in-a-flow) and [Create a Vector RAG chatbot](/chat-with-rag).
-
-
-
-The component accepts `Message`, `Data`, or `DataFrame`, and then outputs either **Chunks** or **DataFrame**.
-The **Chunks** output returns a list of [`Data`](/data-types#data) objects containing individual text chunks.
-The **DataFrame** output returns the list of chunks as a structured [`DataFrame`](/data-types#dataframe) with additional `text` and `metadata` columns.
-
-### Split Text parameters
-
-The **Split Text** component's parameters control how the text is split into chunks, specifically the `chunk_size`, `chunk_overlap`, and `separator` parameters.
-
-To test the chunking behavior, add a **Text Input** or **File** component with some sample data to chunk, click **Run component** on the **Split Text** component, and then click **Inspect output** to view the list of chunks and their metadata.
-The `text` column contains the actual text chunks created from your chunking settings.
-If the chunks aren't split as you expect, adjust the parameters, rerun the component, and then inspect the new output.
-
-
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| data_inputs | Input | Input parameter. The data to split. Input must be in `Message`, `Data`, or `DataFrame` format. |
-| chunk_overlap | Chunk Overlap | Input parameter. The number of characters to overlap between chunks. This helps maintain context across chunks. When a separator is encountered, the overlap is applied at the point of the separator so that the subsequent chunk contains the last _n_ characters of the preceding chunk. Default: `200`. |
-| chunk_size | Chunk Size | Input parameter. The target length for each chunk after splitting. The data is first split by separator, and then chunks smaller than the `chunk_size` are merged up to this limit. However, if the initial separator split produces any chunks larger than the `chunk_size`, those chunks are neither further subdivided nor combined with any smaller chunks; these chunks will be output as-is even though they exceed the `chunk_size`. Default: `1000`. See [Tokenization errors due to chunk size](#chunk-size) for important considerations. |
-| separator | Separator | Input parameter. A string defining a character to split on, such as `\n` to split on new line characters, `\n\n` to split at paragraph breaks, or `},` to split at the end of JSON objects. You can directly provide the separator string, or pass a separator string from another component as `Message` input. |
-| text_key | Text Key | Input parameter. The key to use for the text column that is extracted from the input and then split. Default: `text`. |
-| keep_separator | Keep Separator | Input parameter. Select how to handle separators in output chunks. If `False`, separators are omitted from output chunks. Options include `False` (remove separators), `True` (keep separators in chunks without preference for placement), `Start` (place separators at the beginning of chunks), or `End` (place separators at the end of chunks). Default: `False`. |
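-
-As a simplified sketch of the split-then-merge behavior described for **Chunk Size** (an illustration only, not Langflow's implementation, and overlap is omitted):
-
-```python
-def naive_split(text, separator="\n", chunk_size=1000):
-    """Split on the separator, then greedily merge small pieces up to chunk_size."""
-    chunks, current = [], ""
-    for part in text.split(separator):
-        candidate = part if not current else current + separator + part
-        if len(candidate) <= chunk_size:
-            current = candidate
-        else:
-            if current:
-                chunks.append(current)
-            current = part  # An oversized piece passes through and exceeds chunk_size
-    if current:
-        chunks.append(current)
-    return chunks
-```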
-
-### Tokenization errors due to chunk size {#chunk-size}
-
-When using **Split Text** with embedding models (especially NVIDIA models like `nvidia/nv-embed-v1`), you may need to use smaller chunk sizes (`500` or less) even though the model supports larger token limits.
-The **Split Text** component doesn't always enforce the exact chunk size you set, and individual chunks may exceed your specified limit.
-If you encounter tokenization errors, modify your text splitting strategy by reducing the chunk size, changing the overlap length, or using a more common separator.
-Then, test your configuration by running the flow and inspecting the component's output.
-
-### Other text splitters
-
-See [LangChain text splitter components](/bundles-langchain#text-splitters).
-
-## Structured Output
-
-The **Structured Output** component uses an LLM to transform any input into structured data (`Data` or `DataFrame`) using natural language formatting instructions and an output schema definition.
-For example, you can extract specific details from documents, like email messages or scientific papers.
-
-### Use the Structured Output component in a flow
-
-To use the **Structured Output** component in a flow, do the following:
-
-1. Provide an **Input Message**, which is the source material from which you want to extract structured data.
-This can come from practically any component, but it is typically a **Chat Input**, **File**, or other component that provides some unstructured or semi-structured input.
-
- :::tip
- Not all source material has to become structured output.
- The power of the **Structured Output** component is that you can specify the information you want to extract, even if that data isn't explicitly labeled or an exact keyword match.
- Then, the LLM can use your instructions to analyze the source material, extract the relevant data, and format it according to your specifications.
- Any irrelevant source material isn't included in the structured output.
- :::
-
-2. Define **Format Instructions** and an **Output Schema** to specify the data to extract from the source material and how to structure it in the final `Data` or `DataFrame` output.
-
-    The instructions are a prompt that tells the LLM what data to extract, how to format it, how to handle exceptions, and any other instructions relevant to preparing the structured data.
-
- The schema is a table that defines the fields (keys) and data types to organize the data extracted by the LLM into a structured `Data` or `DataFrame` object.
-    For more information, see [Output Schema options](#output-schema-options).
-
-3. Attach a [language model component](/components-models) that is set to emit [`LanguageModel`](/data-types#languagemodel) output.
-
- The LLM uses the **Input Message** and **Format Instructions** from the **Structured Output** component to extract specific pieces of data from the input text.
- The output schema is applied to the model's response to produce the final `Data` or `DataFrame` structured object.
-
-4. Optional: Typically, the structured output is passed to downstream components that use the extracted data for other processes, such as the **Parser** or **Data Operations** components.
-
-
-
-
-Structured Output example: Financial Report Parser template
-
-The **Financial Report Parser** template provides an example of how the **Structured Output** component can be used to extract structured data from unstructured text.
-
-The template's **Structured Output** component has the following configuration:
-
-* The **Input Message** comes from a **Chat Input** component that is preloaded with quotes from sample financial reports
-
-* The **Format Instructions** are as follows:
-
- ```text
- You are an AI that extracts structured JSON objects from unstructured text.
- Use a predefined schema with expected types (str, int, float, bool, dict).
- Extract ALL relevant instances that match the schema - if multiple patterns exist, capture them all.
- Fill missing or ambiguous values with defaults: null for missing values.
- Remove exact duplicates but keep variations that have different field values.
- Always return valid JSON in the expected format, never throw errors.
- If multiple objects can be extracted, return them all in the structured format.
- ```
-
-* The **Output Schema** includes keys for `EBITDA`, `NET_INCOME`, and `GROSS_PROFIT`.
-
-The structured `Data` object is passed to a **Parser** component that produces a text string by mapping the schema keys to variables in the parsing template:
-
-```text
-EBITDA: {EBITDA} , Net Income: {NET_INCOME} , GROSS_PROFIT: {GROSS_PROFIT}
-```
-
-When printed to the **Playground**, the resulting `Message` replaces the variables with the actual values extracted by the **Structured Output** component. For example:
-
-```text
-EBITDA: 900 million , Net Income: 500 million , GROSS_PROFIT: 1.2 billion
-```
-
-
-
-### Structured Output parameters
-
-
-
-| Name | Type | Description |
-|------|------|-------------|
-| Language Model (`llm`) | `LanguageModel` | Input parameter. The [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component that defines the LLM to use to analyze, extract, and prepare the structured output. |
-| Input Message (`input_value`) | String | Input parameter. The input message containing source material for extraction. |
-| Format Instructions (`system_prompt`) | String | Input parameter. The instructions to the language model for extracting and formatting the output. |
-| Schema Name (`schema_name`) | String | Input parameter. An optional title for the **Output Schema**. |
-| Output Schema (`output_schema`)| Table | Input parameter. A table describing the schema of the desired structured output, ultimately determining the content of the `Data` or `DataFrame` output. See [Output Schema options](#output-schema-options). |
-| Structured Output (`structured_output`) | `Data` or `DataFrame` | Output parameter. The final structured output produced by the component. Near the component's output port, you can select the output data type as either **Structured Output Data** or **Structured Output DataFrame**. The specific content and structure of the output depends on the input parameters. |
-
-#### Output Schema options {#output-schema-options}
-
-After the LLM extracts the relevant data from the **Input Message** and **Format Instructions**, the data is organized according to the **Output Schema**.
-
-The schema is a table that defines the fields (keys) and data types for the final `Data` or `DataFrame` output from the **Structured Output** component.
-
-The default schema is a single `field` string.
-
-To add a key to the schema, click **Add a new row**, and then edit each column to define the schema:
-
-* **Name**: The name of the output field. Typically a specific key for which you want to extract a value.
-
- You can reference these keys as variables in downstream components, such as a **Parser** component's template.
- For example, the schema key `NET_INCOME` could be referenced by the variable `{NET_INCOME}`.
-
-* **Description**: An optional metadata description of the field's contents and purpose.
-
-* **Type**: The data type of the value stored in the field.
-Supported types are `str` (default), `int`, `float`, `bool`, and `dict`.
-
-* **As List**: Enable this setting if you want the field to contain a list of values rather than a single value.
-
-For simple schemas, you might only extract a few `str` or `int` fields.
-For more complex schemas with lists and dictionaries, it might help to refer to the `Data` and `DataFrame` structures and attributes, as described in [Langflow data types](/data-types).
-You can also emit a rough `Data` or `DataFrame`, and then use downstream components for further refinement, such as a **Data Operations** component.
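-
-As a loose analogy (an assumption, not the component's internals), the schema plays the role of a typed model. The **Financial Report Parser** schema described earlier resembles:
-
-```python
-from pydantic import BaseModel
-
-class FinancialFigures(BaseModel):
-    """Each attribute corresponds to a Name/Type row in the Output Schema."""
-    EBITDA: str
-    NET_INCOME: str
-    GROSS_PROFIT: str
-```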
-
-## Type Convert
-
-The **Type Convert** component converts data from one type to another.
-It supports `Data`, `DataFrame`, and `Message` data types.
-
-
-
-
-A `Data` object is a structured object that contains a primary `text` key and other key-value pairs:
-
-```json
-"data": {
- "text": "User Profile",
- "name": "Charlie Lastname",
- "age": 28,
- "email": "charlie.lastname@example.com"
-},
-```
-
-The larger context associated with a component's `data` dictionary also identifies which key is the primary `text_key`, and it can provide an optional default value if the primary key isn't specified.
-For example:
-
-```json
-{
- "text_key": "text",
- "data": {
- "text": "User Profile",
- "name": "Charlie Lastname",
- "age": 28,
- "email": "charlie.lastname@example.com"
- },
- "default_value": ""
-}
-```
-
-
-
-
-A `DataFrame` is an array that represents a tabular data structure with rows and columns.
-
-It consists of a list (array) of dictionary objects, where each dictionary represents a row.
-Each key in the dictionaries corresponds to a column name.
-For example, the following `DataFrame` contains two rows with columns for `name`, `age`, and `email`:
-
-```json
-[
- {
- "name": "Charlie Lastname",
- "age": 28,
- "email": "charlie.lastname@example.com"
- },
- {
- "name": "Bobby Othername",
- "age": 25,
- "email": "bobby.othername@example.com"
- }
-]
-```
-
-
-
-
-A `Message` is primarily for passing a `text` string, such as `"Name: Charlie Lastname, Age: 28, Email: charlie.lastname@example.com"`.
-However, the entire `Message` object can include metadata about the message, particularly when used as chat input or output.
-
-
-
-
-For more information, see [Langflow data types](/data-types).
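-
-The following sketch shows roughly how the three types relate (illustrative only, not the component's source):
-
-```python
-rows = [{"name": "Charlie Lastname", "age": 28}]  # DataFrame: a list of row dictionaries
-data = {"text": "User Profile", **rows[0]}        # Data: a dictionary with a primary text key
-message = "Name: Charlie Lastname, Age: 28"       # Message: plain text
-```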
-
-### Use the Type Convert component in a flow
-
-The **Type Convert** component is typically used to transform data into a format required by a downstream component.
-For example, if a component outputs a `Message`, but the following component requires `Data`, then you can use the **Type Convert** component to reformat the `Message` as `Data` before passing it to the downstream component.
-
-The following example uses the **Type Convert** component to convert the `DataFrame` output from a **Web Search** component into `Message` data that is passed as text input for an LLM:
-
-1. Create a flow based on the **Basic prompting** template.
-
-2. Add a **Web Search** component to the flow, and then enter a search query, such as `environmental news`.
-
-3. In the **Prompt Template** component, replace the contents of the **Template** field with the following text:
-
- ```text
- Answer the user's question using the {context}
- ```
-
- The curly braces define a [prompt variable](/components-prompts#define-variables-in-prompts) that becomes an input field on the **Prompt Template** component.
- In this example, you will use the **context** field to pass the search results into the template, as explained in the next steps.
-
-4. Add a **Type Convert** component to the flow, and then set the **Output Type** to **Message**.
-
- Because the **Web Search** component's `DataFrame` output is incompatible with the **context** variable's `Message` input, you must use the **Type Convert** component to change the `DataFrame` to a `Message` in order to pass the search results to the **Prompt Template** component.
-
-5. Connect the additional components to the rest of the flow:
-
- * Connect the **Web Search** component's output to the **Type Convert** component's input.
- * Connect the **Type Convert** component's output to the **Prompt Template** component's **context** input.
-
- 
-
-6. In the **Language Model** component, add your OpenAI API key.
-
- If you want to use a different provider or model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
-
-7. Click **Playground**, and then ask something relevant to your search query, such as `latest news` or `what's the latest research on the environment?`.
-
-
- Result
-
-    The LLM uses the search results context, your chat message, and its built-in training data to respond to your question.
- For example:
-
- ```text
- Here are some of the latest news articles related to the environment:
- Ozone Pollution and Global Warming: A recent study highlights that ozone pollution is a significant global environmental concern, threatening human health and crop production while exacerbating global warming. Read more
- ...
- ```
-
-
-
-### Type Convert parameters
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| input_data | Input Data | Input parameter. The data to convert. Accepts `Data`, `DataFrame`, or `Message` input. |
-| output_type | Output Type | Input parameter. The desired output type, as one of **Data**, **DataFrame**, or **Message**. |
-| output | Output | Output parameter. The converted data in the specified format. The output port changes depending on the selected **Output Type**. |
-
-## Legacy Processing components
-
-import PartialLegacy from '@site/docs/_partial-legacy.mdx';
-
-
-
-The following Processing components are in legacy status:
-
-
-Alter Metadata
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component modifies metadata of input objects. It can add new metadata, update existing metadata, and remove specified metadata fields. The component works with both `Message` and `Data` objects, and can also create a new `Data` object from user-provided text.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| input_value | Input | Input parameter. Objects to which Metadata should be added. |
-| text_in | User Text | Input parameter. Text input; the value is contained in the 'text' attribute of the `Data` object. Empty text entries are ignored. |
-| metadata | Metadata | Input parameter. Metadata to add to each object. |
-| remove_fields | Fields to Remove | Input parameter. Metadata fields to remove. |
-| data | Data | Output parameter. List of Input objects, each with added metadata. |
-
-
-
-
-Combine Data
-
-Replace this legacy component with the [**Data Operations** component](#data-operations) or the [**Loop** component](/components-logic#loop).
-
-This component combines multiple data sources into a single unified `Data` object.
-
-The component iterates through a list of `Data` objects, merging them into a single `Data` object (`merged_data`).
-If the input list is empty, it returns an empty data object.
-If there's only one input data object, it returns that object unchanged.
-
-The merging process uses the addition operator to combine data objects.
-
-
-
-
-Combine Text
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component concatenates two text inputs into a single text chunk using a specified delimiter, outputting a `Message` object with the combined text.
-
-
-
-
-Create Data
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component dynamically creates a `Data` object with a specified number of fields and a text key.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| number_of_fields | Number of Fields | Input parameter. The number of fields to be added to the record. |
-| text_key | Text Key | Input parameter. Key that identifies the field to be used as the text content. |
-| text_key_validator | Text Key Validator | Input parameter. If enabled, checks if the given `Text Key` is present in the given `Data`. |
-
-
-
-
-Extract Key
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component extracts a specific key from a `Data` object and returns the value associated with that key.
-
-
-
-
-Data to DataFrame/Data to Message
-
-Replace these legacy components with newer Processing components, such as the [**Data Operations** component](#data-operations) and [**Type Convert** component](#type-convert).
-
-These components converted one or more `Data` objects into a `DataFrame` or `Message` object.
-
-For the **Data to DataFrame** component, each `Data` object corresponds to one row in the resulting `DataFrame`.
-Fields from the `.data` attribute become columns, and the `.text` field (if present) is placed in a `text` column.
-
-
-
-
-Filter Data
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component filters a `Data` object based on a list of keys (`filter_criteria`), returning a new `Data` object (`filtered_data`) that contains only the key-value pairs that match the filter criteria.
-
-
-
-
-Filter Values
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-The Filter values component filters a list of data items based on a specified key, filter value, and comparison operator.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| input_data | Input data | Input parameter. The list of data items to filter. |
-| filter_key | Filter Key | Input parameter. The key to filter on. |
-| filter_value | Filter Value | Input parameter. The value to filter by. |
-| operator | Comparison Operator | Input parameter. The operator to apply for comparing the values. |
-| filtered_data | Filtered data | Output parameter. The resulting list of filtered data items. |
-
-
-
-
-JSON Cleaner
-
-Replace this legacy component with the [**Parser** component](#parser).
-
-This component cleans JSON strings to ensure they are fully compliant with the JSON specification.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| json_str | JSON String | Input parameter. The JSON string to be cleaned. This can be a raw, potentially malformed JSON string produced by language models or other sources that may not fully comply with JSON specifications. |
-| remove_control_chars | Remove Control Characters | Input parameter. If set to `True`, this option removes control characters (ASCII characters 0-31 and 127) from the JSON string. This can help eliminate invisible characters that might cause parsing issues or make the JSON invalid. |
-| normalize_unicode | Normalize Unicode | Input parameter. When enabled, this option normalizes Unicode characters in the JSON string to their canonical composition form (NFC). This ensures consistent representation of Unicode characters across different systems and prevents potential issues with character encoding. |
-| validate_json | Validate JSON | Input parameter. If set to `True`, this option attempts to parse the JSON string to ensure it is well-formed before applying the final repair operation. It raises a ValueError if the JSON is invalid, allowing for early detection of major structural issues in the JSON. |
-| output | Cleaned JSON String | Output parameter. The resulting cleaned, repaired, and validated JSON string that fully complies with the JSON specification. |
-
-
-
-
-Message to Data
-
-Replace this legacy component with the [**Type Convert** component](#type-convert).
-
-This component converts `Message` objects to `Data` objects.
-
-
-
-
-Parse DataFrame
-
-Replace this legacy component with the [**DataFrame Operations** component](#dataframe-operations) or [**Parser** component](#parser).
-
-This component converts `DataFrame` objects into plain text using templates.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| df | DataFrame | Input parameter. The DataFrame to convert to text rows. |
-| template | Template | Input parameter. Template for formatting (use `{column_name}` placeholders). |
-| sep | Separator | Input parameter. String to join rows in output. |
-| text | Text | Output parameter. All rows combined into single text. |
-
-
-
-
-Parse JSON
-
-Replace this legacy component with the [**Parser** component](#parser).
-
-This component converts and extracts JSON fields in `Message` and `Data` objects using JQ queries, then returns `filtered_data`, which is a list of `Data` objects.
-
-
-
-
-Regex Extractor
-
-Replace this legacy component with the [**Parser** component](#parser).
-
-This component extracts patterns in text using regular expressions. It can be used to find and extract specific patterns or information in text.
-
-
-
-
-Select Data
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component selects a single `Data` object from a list.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| data_list | Data List | Input parameter. List of data to select from |
-| data_index | Data Index | Input parameter. Index of the data to select |
-| selected_data | Selected Data | Output parameter. The selected `Data` object. |
-
-
-
-
-Update Data
-
-Replace this legacy component with the [**Data Operations** component](#data-operations).
-
-This component dynamically updates or appends data with specified fields.
-
-It accepts the following parameters:
-
-| Name | Display Name | Info |
-|------|--------------|------|
-| old_data | Data | Input parameter. The records to update. |
-| number_of_fields | Number of Fields | Input parameter. The number of fields to add. The maximum is 15. |
-| text_key | Text Key | Input parameter. The key for text content. |
-| text_key_validator | Text Key Validator | Input parameter. Validates the text key presence. |
-| data | Data | Output parameter. The updated Data objects. |
-
-
\ No newline at end of file
diff --git a/docs/docs/Components/components-prompts.mdx b/docs/docs/Components/components-prompts.mdx
index e8b67b034100..1974e85c524c 100644
--- a/docs/docs/Components/components-prompts.mdx
+++ b/docs/docs/Components/components-prompts.mdx
@@ -28,7 +28,7 @@ The **Prompt Template** component can also output variable instructions to other
Variables in a **Prompt Template** component dynamically add fields to the **Prompt Template** component so that your flow can receive definitions for those values from other components, Langflow global variables, or fixed input.
-For example, with the [**Message History** component](/components-helpers#message-history), you can use a `{memory}` variable to pass chat history to the prompt.
+For example, with the [**Message History** component](/message-history), you can use a `{memory}` variable to pass chat history to the prompt.
However, the **Agent** component includes built-in chat memory that is enabled by default.
For more information, see [Memory management options](/memory).
@@ -68,9 +68,9 @@ The following steps demonstrate how to add variables to a **Prompt Template** co
* Enter fixed values directly into the fields.
You can add as many variables as you like in your template.
-For example, you could add variables for `{references}` and `{instructions}`, and then feed that information in from other components, such as **Text Input**, **URL**, or **File** components.
+For example, you could add variables for `{references}` and `{instructions}`, and then feed that information in from other components, such as **Text Input**, **URL**, or **Read File** components.
## See also
* [**LangChain Prompt Hub** component](/bundles-langchain#prompt-hub)
-* [Processing components](/components-processing)
\ No newline at end of file
+* [Processing components](/concepts-components)
\ No newline at end of file
diff --git a/docs/docs/Components/components-tools.mdx b/docs/docs/Components/components-tools.mdx
deleted file mode 100644
index 039bc2040d07..000000000000
--- a/docs/docs/Components/components-tools.mdx
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Tools
-slug: /components-tools
----
-
-In Langflow version 1.5, the **Tools** category was deprecated.
-All components that were in this category were replaced by other components or moved to other component categories.
-
-## MCP Connection component
-
-This component was moved to the **Agents** category and renamed to the [**MCP Tools** component](/components-agents#mcp-connection)
-
-## Legacy Tools components
-
-import PartialLegacy from '@site/docs/_partial-legacy.mdx';
-
-
-
-The following Tools components are in legacy status:
-
-* **Calculator Tool**: Replaced by the [**Calculator** component](/components-helpers#calculator).
-* **Python Code Structured**: Replaced by the [**Python Interpreter** component](/components-processing#python-interpreter).
-* **Python REPL**: Replaced by the [**Python Interpreter** component](/components-processing#python-interpreter).
-* **Search API**: Replaced by the [**SearchApi** bundle](/bundles-searchapi).
-* **SearXNG Search**: No direct replacement. Use another provider's search component, create a custom component, or use a core component like the [**API Request** component](/components-data#api-request).
-* **Serp Search API**: Replace by the **SerpApi** bundle.
-* **Tavily Search API**: Replaced by the **Tavily** bundle.
-* **Wikidata API**: Replaced by the [**Wikipedia** bundle](/bundles-wikipedia).
-* **Wikipedia API**: Replaced by the [**Wikipedia** bundle](/bundles-wikipedia).
-* **Yahoo! Finance**: Replaced by the **Yahoo! Search** bundle.
-
-## See also
-
-* [**API Request** component](/components-data#api-request)
-* [**News Search** component](/components-data#news-search)
-* [**Web Search** component](/components-data#web-search)
-* [**Bing** bundle](/bundles-bing)
-* [**DuckDuckGo** bundle](/bundles-duckduckgo)
-* [**Exa** bundle](/bundles-exa)
-* [**Google** bundle](/bundles-google)
-* [**Serper** bundle](/bundles-serper)
\ No newline at end of file
diff --git a/docs/docs/Components/concepts-components.mdx b/docs/docs/Components/concepts-components.mdx
index 1509cfc51c45..92783b580eb6 100644
--- a/docs/docs/Components/concepts-components.mdx
+++ b/docs/docs/Components/concepts-components.mdx
@@ -98,7 +98,7 @@ For information about the programmatic representation of each data type, see [La
* In the workspace, hover over a port to see connection details for that port.
Click a port to **Search** for compatible components.
-* If two components have incompatible data types, you can use a processing component like the [**Type Convert** component](/components-processing#type-convert) to convert the data between components.
+* If two components have incompatible data types, you can use a processing component like the [**Type Convert** component](/type-convert) to convert the data between components.
:::
### Dynamic ports
@@ -120,7 +120,7 @@ Some components can produce multiple types of output:
For example, a language model component can output _either_ a **Model Response** or **Language Model**.
The **Model Response** output produces [`Message`](/data-types#message) data that can be passed to another component's `Message` port.
-The **Language Model** output must be connected to a component with a **Language Model** input, such as the [**Structured Output** component](/components-processing#structured-output), that uses the attached LLM to power the receiving component's reasoning.
+The **Language Model** output must be connected to a component with a **Language Model** input, such as the [**Structured Output** component](/structured-output), that uses the attached LLM to power the receiving component's reasoning.

@@ -155,7 +155,7 @@ In the context of creating and running flows, component code does the following:
* Passes results to the next component in the flow.
All components inherit from a base `Component` class that defines the component's interface and behavior.
-For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the [`LCTextSplitterComponent`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/base/textsplitters/model.py) class.
+For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/langchain_utilities/recursive_character.py) is a child of the [`LCTextSplitterComponent`](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/base/textsplitters/model.py) class.
Each component's code includes definitions for inputs and outputs, which are represented in the workspace as [component ports](#component-ports).
For example, the `RecursiveCharacterTextSplitter` has four inputs. Each input definition specifies the input type, such as `IntInput`, as well as the encoded name, display name, description, and other parameters for that specific input.
diff --git a/docs/docs/Components/current-date.mdx b/docs/docs/Components/current-date.mdx
new file mode 100644
index 000000000000..5ed96b585712
--- /dev/null
+++ b/docs/docs/Components/current-date.mdx
@@ -0,0 +1,18 @@
+---
+title: Current Date
+slug: /current-date
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The **Current Date** component returns the current date and time in a selected timezone. This component provides a flexible way to obtain timezone-specific date and time information within a Langflow pipeline.
+
+## Current Date parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| timezone | String | Input parameter. The timezone for the current date and time. |
+| current_date | String | Output parameter. The resulting current date and time in the selected timezone. |
+
diff --git a/docs/docs/Components/data-operations.mdx b/docs/docs/Components/data-operations.mdx
new file mode 100644
index 000000000000..f5d7886685e3
--- /dev/null
+++ b/docs/docs/Components/data-operations.mdx
@@ -0,0 +1,163 @@
+---
+title: Data Operations
+slug: /data-operations
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
+
+The **Data Operations** component performs operations on [`Data`](/data-types#data) objects, including extracting, filtering, and editing keys and values in the `Data`.
+For all options, see [Available data operations](#available-data-operations).
+The output is a new `Data` object containing the modified data after running the selected operation.
+
+## Use the Data Operations component in a flow
+
+The following example demonstrates how to use a **Data Operations** component in a flow using data from a webhook payload:
+
+1. Create a flow with a **Webhook** component and a **Data Operations** component, and then connect the **Webhook** component's output to the **Data Operations** component's **Data** input.
+
+ All operations in the **Data Operations** component require at least one `Data` input from another component.
+ If the preceding component doesn't produce `Data` output, you can use another component, such as the [**Type Convert** component](/type-convert), to reformat the data before passing it to the **Data Operations** component.
+   Alternatively, consider using a component designed to process the original data type, such as the [**Parser** component](/parser) or the [**DataFrame Operations** component](/dataframe-operations).
+
+2. In the **Operations** field, select the operation you want to perform on the incoming `Data`.
+For this example, select the **Select Keys** operation.
+
+ :::tip
+ You can select only one operation.
+ If you need to perform multiple operations on the data, you can chain multiple **Data Operations** components together to execute each operation in sequence.
+ For more complex multi-step operations, consider using a component like the [**Smart Transform** component](/smart-transform).
+ :::
+
+3. Under **Select Keys**, add keys for `name`, `username`, and `email`.
+Click **Add more** to add a field for each key.
+
+ For this example, assume that the webhook will receive consistent payloads that always contain `name`, `username`, and `email` keys.
+ The **Select Keys** operation extracts the value of these keys from each incoming payload.
+
+4. Optional: If you want to view the output in the **Playground**, connect the **Data Operations** component's output to a **Chat Output** component.
+
+ 
+
+5. To test the flow, send the following request to your flow's webhook endpoint.
+For more information about the webhook endpoint, see [Trigger flows with webhooks](/webhook).
+
+ ```bash
+ curl -X POST "http://$LANGFLOW_SERVER_URL/api/v1/webhook/$FLOW_ID" \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -d '{
+ "id": 1,
+ "name": "Leanne Graham",
+ "username": "Bret",
+ "email": "Sincere@april.biz",
+ "address": {
+ "street": "Main Street",
+ "suite": "Apt. 556",
+ "city": "Springfield",
+ "zipcode": "92998-3874",
+ "geo": {
+ "lat": "-37.3159",
+ "lng": "81.1496"
+ }
+ },
+ "phone": "1-770-736-8031 x56442",
+ "website": "hildegard.org",
+ "company": {
+ "name": "Acme-Corp",
+ "catchPhrase": "Multi-layered client-server neural-net",
+ "bs": "harness real-time e-markets"
+ }
+ }'
+ ```
+
+6. To view the `Data` resulting from the **Select Keys** operation, do one of the following:
+
+ * If you attached a **Chat Output** component, open the **Playground** to see the result as a chat message.
+ * Click **Inspect output** on the **Data Operations** component.
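+
+   If you used this example's request payload, the resulting `Data` should resemble the following:
+
+   ```json
+   {
+     "name": "Leanne Graham",
+     "username": "Bret",
+     "email": "Sincere@april.biz"
+   }
+   ```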
+
+## Data Operations parameters
+
+Many parameters are conditional based on the selected **Operation** (`operation`).
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| data | Data | Input parameter. The `Data` object to operate on. |
+| operation | Operation | Input parameter. The operation to perform on the data. See [Available data operations](#available-data-operations). |
+| select_keys_input | Select Keys | Input parameter. A list of keys to select from the data. |
+| filter_key | Filter Key | Input parameter. The key to filter by. |
+| operator | Comparison Operator | Input parameter. The operator to apply for comparing values. |
+| filter_values | Filter Values | Input parameter. A list of values to filter by. |
+| append_update_data | Append or Update | Input parameter. The data to append or update the existing data with. |
+| remove_keys_input | Remove Keys | Input parameter. A list of keys to remove from the data. |
+| rename_keys_input | Rename Keys | Input parameter. A list of keys to rename in the data. |
+| mapped_json_display | JSON to Map | Input parameter. JSON structure to explore for path selection. Only applies to the **Path Selection** operation. For more information, see [Path Selection operation examples](#path-selection-operation-examples). |
+| selected_key | Select Path | Input parameter. The JSON path expression to extract values. Only applies to the **Path Selection** operation. For more information, see [Path Selection operation examples](#path-selection-operation-examples). |
+| query | JQ Expression | Input parameter. The [`jq`](https://jqlang.org/manual/) expression for advanced JSON filtering and transformation. Only applies to the **JQ Expression** operation. For more information, see [JQ Expression operation examples](#jq-expression-operation-examples). |
+
+### Available data operations
+
+Options for the `operation` input parameter are as follows.
+All operations act on an incoming `Data` object.
+
+| Name | Required Inputs | Process |
+|-----------|----------------|-------------|
+| Select Keys | `select_keys_input` | Selects specific keys from the data. |
+| Literal Eval | None | Evaluates string values as Python literals. |
+| Combine | None | Combines multiple data objects into one. |
+| Filter Values | `filter_key`, `filter_values`, `operator` | Filters data based on a key, a comparison operator, and one or more values. |
+| Append or Update | `append_update_data` | Adds or updates key-value pairs. |
+| Remove Keys | `remove_keys_input` | Removes specified keys from the data. |
+| Rename Keys | `rename_keys_input` | Renames keys in the data. |
+| Path Selection | `mapped_json_display`, `selected_key` | Extracts values from nested JSON structures using path expressions. |
+| JQ Expression | `query` | Performs advanced JSON queries using [`jq`](https://jqlang.org/manual/) syntax for filtering, projections, and transformations. |
+
+## Path Selection operation examples
+
+Use the **Path Selection** operation to extract values from nested JSON structures with dot-notation paths.
+
+1. In the **Operations** dropdown, select **Path Selection**.
+2. In the **JSON to Map** field, enter your JSON structure.
+
+ This example uses the following JSON structure.
+ ```json
+ {
+ "user": {
+ "profile": {
+ "name": "John Doe",
+ "email": "john@example.com"
+ },
+ "settings": {
+ "theme": "dark"
+ }
+ }
+ }
+ ```
+ The **Select Path** dropdown auto-populates with available paths.
+3. In the **Select Path** dropdown, select the path.
+ You can select paths such as `.user.profile.name` to extract "John Doe", or select `.user.settings.theme` to extract "dark".
+
+## JQ Expression operation example {#jq-expression-operation-examples}
+
+Use the **JQ Expression** operation to apply the [jq](https://jqlang.org/) query language for more advanced JSON filtering and transformation.
+
+1. In the **Operations** dropdown, select **JQ Expression**.
+2. In the **JQ Expression** field, enter a `jq` filter to query against the **Data Operations** component's `Data` input.
+
+   For the following example JSON structure, you can enter expressions like `.user.profile.name` to extract "John Doe", `.user.profile | {name, email}` to project those fields onto a new object, or `.user.profile | tostring` to convert the object to a string.
+ ```json
+ {
+ "user": {
+ "profile": {
+ "name": "John Doe",
+ "email": "john@example.com"
+ },
+ "settings": {
+ "theme": "dark"
+ }
+ }
+ }
+ ```
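+
+To experiment with `jq` filters before you enter them in the component, you can run the same expressions with the standalone [`jq`](https://jqlang.org/) command-line tool. The following sketch assumes the example structure above is saved as `example.json`:
+
+```bash
+# Extract a single nested value:
+jq -r '.user.profile.name' example.json
+# => John Doe
+
+# Project selected fields onto a new object (compact output):
+jq -c '.user.profile | {name, email}' example.json
+# => {"name":"John Doe","email":"john@example.com"}
+
+# Convert the object to a JSON-encoded string:
+jq '.user.profile | tostring' example.json
+# => "{\"name\":\"John Doe\",\"email\":\"john@example.com\"}"
+```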
+
diff --git a/docs/docs/Components/dataframe-operations.mdx b/docs/docs/Components/dataframe-operations.mdx
new file mode 100644
index 000000000000..439bc34ca163
--- /dev/null
+++ b/docs/docs/Components/dataframe-operations.mdx
@@ -0,0 +1,193 @@
+---
+title: DataFrame Operations
+slug: /dataframe-operations
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
+
+The **DataFrame Operations** component performs operations on [`DataFrame`](/data-types#dataframe) (table) rows and columns, including schema changes, record changes, sorting, and filtering.
+For all options, see [DataFrame Operations parameters](#dataframe-operations-parameters).
+
+The output is a new `DataFrame` containing the modified data after running the selected operation.
+
+## Use the DataFrame Operations component in a flow
+
+The following steps explain how to configure a **DataFrame Operations** component in a flow.
+You can follow along with an example or use your own flow.
+The only requirement is that the preceding component must create `DataFrame` output that you can pass to the **DataFrame Operations** component.
+
+1. Create a new flow or use an existing flow.
+
+
+ Example: API response extraction flow
+
+ The following example flow uses five components to extract `Data` from an API response, transform it to a `DataFrame`, and then perform further processing on the tabular data using a **DataFrame Operations** component.
+ The sixth component, **Chat Output**, is optional in this example.
+ It only serves as a convenient way for you to view the final output in the **Playground**, rather than inspecting the component logs.
+
+ 
+
+ If you want to use this example to test the **DataFrame Operations** component, do the following:
+
+ 1. Create a flow with the following components:
+
+ * **API Request**
+ * **Language Model**
+ * **Smart Transform**
+ * **Type Convert**
+
+ 2. Configure the [**Smart Transform** component](/smart-transform) and its dependencies:
+
+ * **API Request**: Configure the [**API Request** component](/api-request) to get JSON data from an endpoint of your choice, and then connect the **API Response** output to the **Smart Transform** component's **Data** input.
+ * **Language Model**: Select your preferred provider and model, and then enter a valid API key.
+ Change the output to **Language Model**, and then connect the `LanguageModel` output to the **Smart Transform** component's **Language Model** input.
+ * **Smart Transform**: In the **Instructions** field, enter natural language instructions to extract data from the API response.
+ Your instructions depend on the response content and desired outcome.
+ For example, if the response contains a large `result` field, you might provide instructions like `explode the result field out into a Data object`.
+
+ 3. Convert the **Smart Transform** component's `Data` output to `DataFrame`:
+
+ 1. Connect the **Filtered Data** output to the **Type Convert** component's **Data** input.
+ 2. Set the **Type Convert** component's **Output Type** to **DataFrame**.
+
+ Now the flow is ready for you to add the **DataFrame Operations** component.
+
+
+
+2. Add a **DataFrame Operations** component to the flow, and then connect `DataFrame` output from another component to the **DataFrame** input.
+
+ All operations in the **DataFrame Operations** component require at least one `DataFrame` input from another component.
+ If a component doesn't produce `DataFrame` output, you can use another component, such as the [**Type Convert** component](/type-convert), to reformat the data before passing it to the **DataFrame Operations** component.
+   Alternatively, consider using a component designed to process the original data type, such as the [**Parser** component](/parser) or the [**Data Operations** component](/data-operations).
+
+ If you are following along with the example flow, connect the **Type Convert** component's **DataFrame Output** port to the **DataFrame** input.
+
+3. In the **Operations** field, select the operation you want to perform on the incoming `DataFrame`.
+For example, the **Filter** operation filters the rows based on a specified column and value.
+
+ :::tip
+ You can select only one operation.
+ If you need to perform multiple operations on the data, you can chain multiple **DataFrame Operations** components together to execute each operation in sequence.
+ For more complex multi-step operations, like dramatic schema changes or pivots, consider using an LLM-powered component, like the [**Structured Output** component](/structured-output) or [**Smart Transform** component](/smart-transform), as a replacement or preparation for the **DataFrame Operations** component.
+ :::
+
+ If you're following along with the example flow, select any operation that you want to apply to the data that was extracted by the **Smart Transform** component.
+   To view the contents of the incoming `DataFrame`, click **Run component** on the **Type Convert** component, and then click **Inspect output**.
+ If the `DataFrame` seems malformed, click **Inspect output** on each upstream component to determine where the error occurs, and then modify your flow's configuration as needed.
+ For example, if the **Smart Transform** component didn't extract the expected fields, modify your instructions or verify that the given fields are present in the **API Response** output.
+
+4. Configure the operation's parameters.
+The specific parameters depend on the selected operation.
+For example, if you select the **Filter** operation, you must define a filter condition using the **Column Name**, **Filter Value**, and **Filter Operator** parameters.
+For more information, see [DataFrame Operations parameters](#dataframe-operations-parameters).
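+
+   For example, with a hypothetical `status` column, a **Filter** operation configured with **Column Name** `status`, **Filter Value** `active`, and the default `equals` operator would produce the following result:
+
+   ```text
+   Input DataFrame:           Filtered DataFrame:
+   id | name  | status        id | name  | status
+   1  | Alice | active        1  | Alice | active
+   2  | Bob   | inactive      3  | Cara  | active
+   3  | Cara  | active
+   ```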
+
+5. To test the flow, click **Run component** on the **DataFrame Operations** component, and then click **Inspect output** to view the new `DataFrame` created from the **Filter** operation.
+
+ If you want to view the output in the **Playground**, connect the **DataFrame Operations** component's output to a **Chat Output** component, rerun the **DataFrame Operations** component, and then click **Playground**.
+
+For another example, see [Conditional looping](/loop#conditional-looping).
+
+## DataFrame Operations parameters
+
+Most **DataFrame Operations** parameters are conditional because they only apply to specific operations.
+
+The only permanent parameters are **DataFrame** (`df`), which is the `DataFrame` input, and **Operation** (`operation`), which is the operation to perform on the `DataFrame`.
+Once you select an operation, the conditional parameters for that operation appear on the **DataFrame Operations** component.
+
+
+
+
+The **Add Column** operation adds a new column with a constant value to the `DataFrame`.
+
+The parameters are **New Column Name** (`new_column_name`) and **New Column Value** (`new_column_value`).
+
+
+
+
+The **Drop Column** operation removes the column specified by **Column Name** (`column_name`) from the `DataFrame`.
+
+
+
+
+The **Filter** operation filters the `DataFrame` based on a specified condition.
+The output is a `DataFrame` containing only the rows that matched the filter condition.
+
+Provide the following parameters:
+
+* **Column Name** (`column_name`): The name of the column to filter on.
+* **Filter Value** (`filter_value`): The value to filter on.
+* **Filter Operator** (`filter_operator`): The operator to use for filtering, one of `equals` (default), `not equals`, `contains`, `not contains`, `starts with`, `ends with`, `greater than`, or `less than`.
+
+
+
+
+The **Head** operation retrieves the first `n` rows of the `DataFrame`, where `n` is set in **Number of Rows** (`num_rows`).
+The default is `5`.
+
+The output is a `DataFrame` containing only the selected rows.
+
+
+
+
+The **Rename Column** operation renames an existing column in the `DataFrame`.
+
+The parameters are **Column Name** (`column_name`), which is the current name, and **New Column Name** (`new_column_name`).
+
+
+
+
+The **Replace Value** operation replaces a target value with a new value in a specific column of the `DataFrame`.
+All cells matching the target value are replaced with the new value in the new `DataFrame` output.
+
+Provide the following parameters:
+
+* **Column Name** (`column_name`): The name of the column to modify.
+* **Value to Replace** (`replace_value`): The value that you want to replace.
+* **Replacement Value** (`replacement_value`): The new value to use.
+
+
+
+
+The **Select Columns** operation selects one or more specific columns from the `DataFrame`.
+
+Provide a list of column names in **Columns to Select** (`columns_to_select`).
+In the visual editor, click **Add More** to add multiple fields, and then enter one column name in each field.
+
+The output is a `DataFrame` containing only the specified columns.
+
+
+
+
+The **Sort** operation sorts the `DataFrame` on a specific column in ascending or descending order.
+
+Provide the following parameters:
+
+* **Column Name** (`column_name`): The name of the column to sort on.
+* **Sort Ascending** (`ascending`): If enabled (`true`), sorts in ascending order; if disabled (`false`), sorts in descending order. The default is enabled (`true`).
+
+
+
+
+The **Tail** operation retrieves the last `n` rows of the `DataFrame`, where `n` is set in **Number of Rows** (`num_rows`).
+The default is `5`.
+
+The output is a `DataFrame` containing only the selected rows.
+
+
+
+
+The **Drop Duplicates** operation removes rows from the `DataFrame` by identifying all duplicate values within a single column.
+
+The only parameter is the **Column Name** (`column_name`).
+
+When the flow runs, all rows with duplicate values in the given column are removed.
+The output is a `DataFrame` containing all columns from the original `DataFrame`, but only rows with non-duplicate values.
+
+
+
+
diff --git a/docs/docs/Components/directory.mdx b/docs/docs/Components/directory.mdx
new file mode 100644
index 000000000000..fe71986de64a
--- /dev/null
+++ b/docs/docs/Components/directory.mdx
@@ -0,0 +1,32 @@
+---
+title: Directory
+slug: /directory
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **Directory** component recursively loads files from a directory, with options for file types, depth, and concurrency.
+
+Files must be of a [supported type and size](/read-file#file-type-and-size-limits) to be loaded.
+
+The component outputs either a [`Data`](/data-types#data) or [`DataFrame`](/data-types#dataframe) object, depending on the directory contents.
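+
+For example, given the following directory structure, setting **path** to `./docs`, **types** to `txt`, and enabling **recursive** would load `notes.txt` and `sub/chapter.txt` while skipping `logo.png` (the structure and settings here are illustrative):
+
+```text
+docs/
+├── notes.txt
+├── logo.png
+└── sub/
+    └── chapter.txt
+```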
+
+## Directory parameters
+
+
+
+| Name | Type | Description |
+| ------------------ | ---------------- | -------------------------------------------------- |
+| path | MessageTextInput | Input parameter. The path to the directory to load files from. The default is the current directory (`.`). |
+| types | MessageTextInput | Input parameter. The file types to load. Select one or more, or leave empty to attempt to load all files. |
+| depth | IntInput | Input parameter. The depth to search for files. |
+| max_concurrency | IntInput | Input parameter. The maximum concurrency for loading multiple files. |
+| load_hidden | BoolInput | Input parameter. If `true`, hidden files are loaded. |
+| recursive | BoolInput | Input parameter. If `true`, the search is recursive. |
+| silent_errors | BoolInput | Input parameter. If `true`, errors don't raise an exception. |
+| use_multithreading | BoolInput | Input parameter. If `true`, multithreading is used. |
+
diff --git a/docs/docs/Components/dynamic-create-data.mdx b/docs/docs/Components/dynamic-create-data.mdx
new file mode 100644
index 000000000000..97c4eeb6f855
--- /dev/null
+++ b/docs/docs/Components/dynamic-create-data.mdx
@@ -0,0 +1,51 @@
+---
+title: Dynamic Create Data
+slug: /dynamic-create-data
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
+
+The **Dynamic Create Data** component creates a [`Data`](/data-types#data) object or [`Message`](/data-types#message) with configurable fields.
+Define the fields in the **Input Configuration** table, and the component creates the corresponding input and output handles.
+
+## Use the Dynamic Create Data component in a flow
+
+The following example demonstrates how to use a **Dynamic Create Data** component to create a structured `Data` or `Message` object from multiple sources.
+
+1. Add the **Dynamic Create Data** component to your flow.
+
+2. To define your data's fields, in the **Input Configuration** field, click **Open table**.
+
+3. To add rows to your table, click **Add a new row**.
+ Adding a new row creates input and output handles for the **Field Type**.
+   For example, if you add a `Text` type field, then `Text` input and output handles are added to the component.
+ For each new row, configure the **Field Name** and **Field Type**.
+
+ * **Field Name**: The name of the field used as both the internal key and display label.
+ * **Field Type**: The type of input field to create. The type options are:
+     * **Text**: Accepts direct text input, or accepts `Text` or `Message` output from other components.
+     * **Data**: Accepts `Data` input from other components.
+     * **Number**: Accepts direct numeric input, or accepts `Text` or `Message` output from other components.
+     * **Handle**: Accepts `Text`, `Data`, or `Message` output from other components.
+     * **Boolean**: Accepts Boolean values. Cannot accept input from another component.
+
+   For more information, see [Langflow data types](/data-types).
+
+4. Depending on your **Field Type** selections, either connect output from other components to dynamically populate the inputs, or enter values manually in the **Dynamic Create Data** component's fields.
+
+5. Select the desired output type at the component's output port. The component outputs either a [`Data`](/data-types#data) object containing all field values from the component's inputs, or a [`Message`](/data-types#message) containing all field values formatted as a text string.
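+
+   For example, if you define a `name` (**Text**) field and an `age` (**Number**) field, the `Data` output might resemble the following (the field names and values here are illustrative):
+
+   ```json
+   {
+     "name": "Ada Lovelace",
+     "age": 36
+   }
+   ```
+
+   The `Message` output contains the same values formatted as readable text, such as `name: Ada Lovelace, age: 36`.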
+
+## Dynamic Create Data parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| form_fields | Input Configuration | Input parameter. A table that defines the dynamic form fields. |
+| include_metadata | Include Metadata | Input parameter. Whether to include form configuration metadata in the output. |
+| form_data | Data | Output parameter. A `Data` object containing all field values from the dynamic inputs. |
+| message | Message | Output parameter. A `Message` containing all field values in a human-readable text format. |
+
diff --git a/docs/docs/Components/if-else.mdx b/docs/docs/Components/if-else.mdx
new file mode 100644
index 000000000000..7f459076ed0d
--- /dev/null
+++ b/docs/docs/Components/if-else.mdx
@@ -0,0 +1,94 @@
+---
+title: If-Else
+slug: /if-else
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **If-Else** component is a conditional router that routes messages by comparing two strings.
+It evaluates a condition by comparing two text inputs using the specified operator, and then routes the message to `true_result` or `false_result` depending on the evaluation.
+
+The comparison checks the input text (`input_text`) against the match text (`match_text`); most operators match a single string, but the **regex** operator can also match patterns that cover multiple words.
+Available operators include:
+
+- **equals**: Exact match comparison
+- **not equals**: Inverse of exact match
+- **contains**: Checks if the `match_text` is found within `input_text`
+- **starts with**: Checks if `input_text` begins with `match_text`
+- **ends with**: Checks if `input_text` ends with `match_text`
+- **regex**: Matches on a case-sensitive pattern
+
+All operators are case insensitive by default, except **regex**, which is always case sensitive.
+You can enable case sensitivity for the other operators in the [If-Else parameters](#if-else-parameters).
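+
+If you want to sanity-check a regex before using it in the component, you can approximate the **regex** operator's case-sensitive matching outside Langflow, for example with `grep -E`:
+
+```bash
+# grep -E is case sensitive by default, like the component's regex operator.
+# The pattern below is reused in the example that follows.
+echo "Sign-in warning: new user locked out" \
+  | grep -Eq '.*(urgent|warning|caution).*' \
+  && echo "routes to true_result" \
+  || echo "routes to false_result"
+```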
+
+## Use the If-Else component in a flow
+
+The following example uses the **If-Else** component to check incoming chat messages with regex matching, and then output a different response depending on whether the match evaluates to true or false.
+
+
+
+1. Add an **If-Else** component to your flow, and then configure it as follows:
+
+   * **Text Input**: Connect the **Text Input** port to a **Chat Input** component or another component's `Message` output.
+
+ If your input isn't in `Message` format, you can use another component to transform it, such as the [**Type Convert** component](/type-convert) or [**Parser** component](/parser).
+ If your input isn't appropriate for `Message` format, consider using another component for conditional routing, such as the [**Data Operations** component](/data-operations).
+
+   * **Match Text**: Enter `.*(urgent|warning|caution).*` so the component looks for these values in incoming input. The regex match is case sensitive, so to match other capitalizations of `warning`, include each form, such as `warning|Warning|WARNING`.
+
+ * **Operator**: Select **regex**.
+
+ * **Case True**: In the [component's header menu](/concepts-components#component-menus), click **Controls**, enable the **Case True** parameter, click **Close**, and then enter `New Message Detected` in the field.
+
+ The **Case True** message is sent from the **True** output port when the match condition evaluates to true.
+
+     No message is set for **Case False**, so the component doesn't emit a message when the condition evaluates to false.
+
+2. Depending on what you want to happen when the outcome is **True**, add components to your flow to execute that logic:
+
+ 1. Add a **Language Model**, **Prompt Template**, and **Chat Output** component to your flow.
+
+ 2. In the **Language Model** component, enter your OpenAI API key or select a different provider and model.
+
+ 3. Connect the **If-Else** component's **True** output port to the **Language Model** component's **Input** port.
+
+ 4. In the **Prompt Template** component, enter instructions for the model when the evaluation is true, such as `Send a message that a new warning, caution, or urgent message was received`.
+
+ 5. Connect the **Prompt Template** component to the **Language Model** component's **System Message** port.
+
+ 6. Connect the **Language Model** component's output to the **Chat Output** component.
+
+3. Repeat the same process with another set of **Language Model**, **Prompt Template**, and **Chat Output** components for the **False** outcome.
+
+ Connect the **If-Else** component's **False** output port to the second **Language Model** component's **Input** port.
+ In the second **Prompt Template**, enter instructions for the model when the evaluation is false, such as `Send a message that a new low-priority message was received`.
+
+4. To test the flow, open the **Playground**, and then send the flow some messages with and without your regex strings.
+The chat output should reflect the instructions in your prompts based on the regex evaluation.
+
+ ```text
+ User: A new user was created.
+
+ AI: A new low-priority message was received.
+
+ User: Sign-in warning: new user locked out.
+
+ AI: A new warning, caution, or urgent message was received. Please review it at your earliest convenience.
+ ```
+
+## If-Else parameters
+
+
+
+| Name | Type | Description |
+|----------------|----------|-------------------------------------------------------------------|
+| input_text | String | Input parameter. The primary text input for the operation. |
+| match_text | String | Input parameter. The text to compare against. |
+| operator | Dropdown | Input parameter. The operator used to compare texts. Options include `equals`, `not equals`, `contains`, `starts with`, `ends with`, and `regex`. The default is `equals`. |
+| case_sensitive | Boolean | Input parameter. When `true`, the comparison is case sensitive. The default is `false`. This setting doesn't apply to regex comparisons. |
+| max_iterations | Integer | Input parameter. The maximum number of iterations allowed for the conditional router. The default is 10. |
+| default_route | Dropdown | Input parameter. The route to take when max iterations are reached. Options include `true_result` or `false_result`. The default is `false_result`. |
+| true_result | Message | Output parameter. The output produced when the condition is true. |
+| false_result | Message | Output parameter. The output produced when the condition is false. |
+
diff --git a/docs/docs/Components/legacy-core-components.mdx b/docs/docs/Components/legacy-core-components.mdx
new file mode 100644
index 000000000000..09460c801290
--- /dev/null
+++ b/docs/docs/Components/legacy-core-components.mdx
@@ -0,0 +1,344 @@
+---
+title: Legacy core components
+slug: /legacy-core-components
+---
+
+import Icon from "@site/src/components/icon";
+import PartialLegacy from '@site/docs/_partial-legacy.mdx';
+
+
+
+## Legacy Data components
+
+The following Data components are in legacy status:
+
+* **Load CSV**
+* **Load JSON**
+
+Replace these components with the [**Read File** component](/read-file), which supports loading CSV and JSON files, as well as many other file types.
+
+## Legacy Helper components
+
+The following Helper components are in legacy status:
+
+* **Message Store**: Replaced by the [**Message History** component](/message-history).
+* **Create List**: Replace with [Processing components](/concepts-components).
+* **ID Generator**: Replace with a component that executes arbitrary code to generate an ID, or embed an ID generator script in your application code (external to your Langflow flows).
+* **Output Parser**: Replace with the [**Structured Output** component](/structured-output) and [**Parser** component](/parser).
+The components you need depend on the data types and complexity of the parsing task.
+
+    The **Output Parser** component parsed the comma-separated output of a language model into a list of values, such as `["item1", "item2", "item3"]`, using LangChain's `CommaSeparatedListOutputParser`.
+ The **Structured Output** component is a good alternative for this component because it also formats LLM responses with support for custom schemas and more complex parsing.
+
+ **Parsing** components only provide formatting instructions and parsing functionality.
+ _They don't include prompts._
+ You must connect parsers to **Prompt Template** components to create prompts that LLMs can use.
+
+## Legacy Logic components
+
+The following Logic components are in legacy status:
+
+
+Condition
+
+As an alternative to this legacy component, see the [**If-Else** component](/if-else).
+
+The **Condition** component routes `Data` objects based on a condition applied to a specified key, including Boolean validation.
+It supports `true_output` and `false_output` for routing the results based on the condition evaluation.
+
+This component is useful in workflows that require conditional routing of complex data structures, enabling dynamic decision-making based on data content.
+
+It can process either a single `Data` object or a list of `Data` objects.
+The following actions occur when processing a list of `Data` objects:
+
+- Each object in the list is evaluated individually.
+- Objects meeting the condition go to `true_output`.
+- Objects not meeting the condition go to `false_output`.
+- If all objects go to one output, the other output is empty.
+
+The **Condition** component accepts the following parameters:
+
+| Name | Type | Description |
+|---------------|----------|---------------------------------------------|
+| data_input | Data | Input parameter. The `Data` object or list of `Data` objects to process. |
+| key_name | String | Input parameter. The name of the key in the Data object to check. |
+| operator | Dropdown | Input parameter. The operator to apply. Options: `equals`, `not equals`, `contains`, `starts with`, `ends with`, `boolean validator`. Default: `equals`. |
+| compare_value | String | Input parameter. The value to compare against. Not used when the operator is `boolean validator`. |
+
+The `operator` options have the following behaviors:
+
+- `equals`: Exact match comparison between the key's value and `compare_value`.
+- `not equals`: Inverse of exact match.
+- `contains`: Checks if `compare_value` is found within the key's value.
+- `starts with`: Checks if the key's value begins with `compare_value`.
+- `ends with`: Checks if the key's value ends with `compare_value`.
+- `boolean validator`: Treats the key's value as a Boolean. The following values are considered true:
+  - Boolean `true`.
+  - Strings: `true`, `1`, `yes`, `y`, `on` (case insensitive).
+  - Any other value is converted using Python's `bool()` function.
+
+
+
+
+Pass
+
+As an alternative to this legacy component, use the [**If-Else** component](/if-else) to pass a message without modification.
+
+The **Pass** component forwards the input message without modification.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| input_message | Input Message | Input parameter. The message to forward. |
+| ignored_message | Ignored Message | Input parameter. A second message that is ignored. Used as a workaround for continuity. |
+| output_message | Output Message | Output parameter. The forwarded message from the input. |
+
+
+
+
+Flow As Tool
+
+This component constructed a tool from a function that ran a loaded flow.
+
+It was deprecated in Langflow version 1.1.2 and replaced by the [**Run Flow** component](/run-flow).
+
+
+
+
+Sub Flow
+
+This component integrated entire flows as components within a larger workflow.
+It dynamically generated inputs based on the selected flow and executed the flow with provided parameters.
+
+It was deprecated in Langflow version 1.1.2 and replaced by the [**Run Flow** component](/run-flow).
+
+
+
+## Legacy Processing components
+
+The following Processing components are in legacy status:
+
+
+Alter Metadata
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component modifies metadata of input objects. It can add new metadata, update existing metadata, and remove specified metadata fields. The component works with both `Message` and `Data` objects, and can also create a new `Data` object from user-provided text.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| input_value | Input | Input parameter. Objects to which Metadata should be added. |
+| text_in | User Text | Input parameter. Text input; the value is contained in the 'text' attribute of the `Data` object. Empty text entries are ignored. |
+| metadata | Metadata | Input parameter. Metadata to add to each object. |
+| remove_fields | Fields to Remove | Input parameter. Metadata fields to remove. |
+| data | Data | Output parameter. List of Input objects, each with added metadata. |
+
+
+
+
+Combine Data
+
+Replace this legacy component with the [**Data Operations** component](/data-operations) or the [**Loop** component](/loop).
+
+This component combines multiple data sources into a single unified `Data` object.
+
+The component iterates through a list of `Data` objects, merging them into a single `Data` object (`merged_data`).
+If the input list is empty, it returns an empty data object.
+If there's only one input data object, it returns that object unchanged.
+
+The merging process uses the addition operator to combine data objects.
+
+
+
+
+Combine Text
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component concatenates two text inputs into a single text chunk using a specified delimiter, outputting a `Message` object with the combined text.
+
+
+
+
+Create Data
+
+Replace this legacy component with the [**Dynamic Create Data** component](/dynamic-create-data).
+
+This component dynamically creates a `Data` object with a specified number of fields and a text key.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| number_of_fields | Number of Fields | Input parameter. The number of fields to be added to the record. |
+| text_key | Text Key | Input parameter. Key that identifies the field to be used as the text content. |
+| text_key_validator | Text Key Validator | Input parameter. If enabled, checks if the given `Text Key` is present in the given `Data`. |
+
+
+
+
+Data to DataFrame/Data to Message
+
+Replace these legacy components with newer Processing components, such as the [**Data Operations** component](/data-operations) and [**Type Convert** component](/type-convert).
+
+These components converted one or more `Data` objects into a `DataFrame` or `Message` object.
+
+For the **Data to DataFrame** component, each `Data` object corresponds to one row in the resulting `DataFrame`.
+Fields from the `.data` attribute become columns, and the `.text` field (if present) is placed in a `text` column.
+
+
+
+
+Extract Key
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component extracts a specific key from a `Data` object and returns the value associated with that key.
+
+
+
+
+Filter Data
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component filters a `Data` object based on a list of keys (`filter_criteria`), returning a new `Data` object (`filtered_data`) that contains only the key-value pairs that match the filter criteria.
+
+
+
+
+Filter Values
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component filters a list of data items based on a specified key, filter value, and comparison operator.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| input_data | Input data | Input parameter. The list of data items to filter. |
+| filter_key | Filter Key | Input parameter. The key to filter on. |
+| filter_value | Filter Value | Input parameter. The value to filter by. |
+| operator | Comparison Operator | Input parameter. The operator to apply for comparing the values. |
+| filtered_data | Filtered data | Output parameter. The resulting list of filtered data items. |
+
+
+
+
+JSON Cleaner
+
+Replace this legacy component with the [**Parser** component](/parser).
+
+This component cleans JSON strings to ensure they are fully compliant with the JSON specification.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| json_str | JSON String | Input parameter. The JSON string to be cleaned. This can be a raw, potentially malformed JSON string produced by language models or other sources that may not fully comply with JSON specifications. |
+| remove_control_chars | Remove Control Characters | Input parameter. If set to `True`, this option removes control characters (ASCII characters 0-31 and 127) from the JSON string. This can help eliminate invisible characters that might cause parsing issues or make the JSON invalid. |
+| normalize_unicode | Normalize Unicode | Input parameter. When enabled, this option normalizes Unicode characters in the JSON string to their canonical composition form (NFC). This ensures consistent representation of Unicode characters across different systems and prevents potential issues with character encoding. |
+| validate_json | Validate JSON | Input parameter. If set to `True`, this option attempts to parse the JSON string to ensure it is well-formed before applying the final repair operation. It raises a `ValueError` if the JSON is invalid, allowing for early detection of major structural issues in the JSON. |
+| output | Cleaned JSON String | Output parameter. The resulting cleaned, repaired, and validated JSON string that fully complies with the JSON specification. |
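+
+As a rough sketch, the cleaning options behave like the following standard-library operations (not the component's actual implementation):
+
+```python
+import json
+import unicodedata
+
+def clean_json(json_str: str) -> str:
+    # Remove Control Characters: strip ASCII control characters (0-31 and 127)
+    json_str = "".join(c for c in json_str if ord(c) > 31 and ord(c) != 127)
+    # Normalize Unicode: convert characters to canonical composition form (NFC)
+    json_str = unicodedata.normalize("NFC", json_str)
+    # Validate JSON: parsing raises a ValueError subclass if the string is malformed
+    json.loads(json_str)
+    return json_str
+
+print(clean_json('{"name": "Caf\u00e9"}'))  # {"name": "Café"}
+```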
+
+
+
+
+Message to Data
+
+Replace this legacy component with the [**Type Convert** component](/type-convert).
+
+This component converts `Message` objects to `Data` objects.
+
+
+
+
+Parse DataFrame
+
+Replace this legacy component with the [**DataFrame Operations** component](/dataframe-operations) or [**Parser** component](/parser).
+
+This component converts `DataFrame` objects into plain text using templates.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| df | DataFrame | Input parameter. The DataFrame to convert to text rows. |
+| template | Template | Input parameter. Template for formatting (use `{column_name}` placeholders). |
+| sep | Separator | Input parameter. String to join rows in output. |
+| text | Text | Output parameter. All rows combined into single text. |
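+
+The template mechanism works like Python string formatting applied to each row, as in this sketch:
+
+```python
+import pandas as pd
+
+df = pd.DataFrame({"name": ["Renlo", "Ada"], "role": ["Engineer", "Analyst"]})
+template = "{name} works as an {role}."
+sep = "\n"
+
+# Fill the template with each row's values, then join the rows with the separator
+text = sep.join(template.format(**row) for row in df.to_dict(orient="records"))
+print(text)
+# Renlo works as an Engineer.
+# Ada works as an Analyst.
+```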
+
+
+
+
+Parse JSON
+
+Replace this legacy component with the [**Parser** component](/parser).
+
+This component converts and extracts JSON fields in `Message` and `Data` objects using JQ queries, then returns `filtered_data`, which is a list of `Data` objects.
+
+
+
+
+Regex Extractor
+
+Replace this legacy component with the [**Parser** component](/parser).
+
+This component extracts patterns in text using regular expressions. It can be used to find and extract specific patterns or information in text.
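+
+For example, extracting email addresses with a regular expression looks like this in plain Python:
+
+```python
+import re
+
+text = "Contact support@example.com or sales@example.com for help."
+pattern = r"[\w.+-]+@[\w-]+\.[\w.-]+"
+
+# Find every non-overlapping match of the pattern in the text
+print(re.findall(pattern, text))  # ['support@example.com', 'sales@example.com']
+```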
+
+
+
+
+Select Data
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component selects a single `Data` object from a list.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| data_list | Data List | Input parameter. The list of data to select from. |
+| data_index | Data Index | Input parameter. The index of the data to select. |
+| selected_data | Selected Data | Output parameter. The selected `Data` object. |
+
+
+
+
+Update Data
+
+Replace this legacy component with the [**Data Operations** component](/data-operations).
+
+This component dynamically updates or appends data with specified fields.
+
+It accepts the following parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| old_data | Data | Input parameter. The records to update. |
+| number_of_fields | Number of Fields | Input parameter. The number of fields to add. The maximum is 15. |
+| text_key | Text Key | Input parameter. The key for text content. |
+| text_key_validator | Text Key Validator | Input parameter. Validates the text key presence. |
+| data | Data | Output parameter. The updated Data objects. |
+
+
+
+## Legacy Tools components
+
+The following Tools components are in legacy status:
+
+* **Calculator Tool**: Replaced by the [**Calculator** component](/calculator).
+* **Python Code Structured**: Replaced by the [**Python Interpreter** component](/python-interpreter).
+* **Python REPL**: Replaced by the [**Python Interpreter** component](/python-interpreter).
+* **Search API**: Replaced by the [**SearchApi** bundle](/bundles-searchapi).
+* **SearXNG Search**: No direct replacement. Use another provider's search component, create a custom component, or use a core component like the [**API Request** component](/api-request).
+* **Serp Search API**: Replaced by the **SerpApi** bundle.
+* **Tavily Search API**: Replaced by the **Tavily** bundle.
+* **Wikidata API**: Replaced by the [**Wikipedia** bundle](/bundles-wikipedia).
+* **Wikipedia API**: Replaced by the [**Wikipedia** bundle](/bundles-wikipedia).
+* **Yahoo! Finance**: Replaced by the **Yahoo! Search** bundle.
+
diff --git a/docs/docs/Components/llm-selector.mdx b/docs/docs/Components/llm-selector.mdx
new file mode 100644
index 000000000000..e652f2b46db1
--- /dev/null
+++ b/docs/docs/Components/llm-selector.mdx
@@ -0,0 +1,67 @@
+---
+title: LLM Selector
+slug: /llm-selector
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+:::tip
+Prior to Langflow 1.7, this component was called the **LLM Router**.
+:::
+
+The **LLM Selector** component routes requests to the most appropriate LLM based on [OpenRouter](https://openrouter.ai/docs/quickstart) model specifications.
+
+To use the component in a flow, you connect multiple language model components to the **LLM Selector** component.
+One model is the judge LLM that analyzes input messages to understand the evaluation context, selects the most appropriate model from the other attached LLMs, and then routes the input to the selected model.
+The selected model processes the input, and then returns the generated response.
+
+The following example flow has three language model components.
+One is the judge LLM, and the other two are in the LLM pool for request routing.
+The input and output components create a seamless chat interaction where you send a message and receive a response, with no user-facing indication of the underlying routing.
+
+
+
+## LLM Selector parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| `models` | **Language Models** | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from multiple [language model components](/components-models) to create a pool of models. The `judge_llm` selects models from this pool when routing requests. The first model you connect is the default model if there is a problem with model selection or routing. |
+| `input_value` | **Input** | Input parameter. The incoming query to be routed to the model selected by the judge LLM. |
+| `judge_llm` | **Judge LLM** | Input parameter. Connect `LanguageModel` output from _one_ **Language Model** component to serve as the judge LLM for request routing. |
+| `optimization` | **Optimization** | Input parameter. Set a preferred characteristic for model selection by the judge LLM. The options are `quality` (highest response quality), `speed` (fastest response time), `cost` (most cost-effective model), or `balanced` (equal weight for quality, speed, and cost). Default: `balanced` |
+| `use_openrouter_specs` | **Use OpenRouter Specs** | Input parameter. Whether to fetch model specifications from the OpenRouter API. If `false`, only the model name is provided to the judge LLM. Default: Enabled (`true`) |
+| `timeout` | **API Timeout** | Input parameter. Set a timeout duration in seconds for API requests made by the router. Default: `10` |
+| `fallback_to_first` | **Fallback to First Model** | Input parameter. Whether to use the first LLM in `models` as a backup if routing fails to reach the selected model. Default: Enabled (`true`) |
+
+## LLM Selector outputs
+
+The **LLM Selector** component provides three output options.
+You can set the desired output type near the component's output port.
+
+* **Output**: A `Message` containing the response to the original query as generated by the selected LLM.
+Use this output for regular chat interactions.
+
+* **Selected Model Info**: A `Data` object containing information about the selected model, such as its name and version.
+
+* **Routing Decision**: A `Message` containing the judge model's reasoning for selecting a particular model, including input query length and number of models considered.
+For example:
+
+ ```text
+ Model Selection Decision:
+ - Selected Model Index: 0
+ - Selected Langflow Model Name: gpt-4o-mini
+ - Selected API Model ID (if resolved): openai/gpt-4o-mini
+ - Optimization Preference: cost
+ - Input Query Length: 27 characters (~5 tokens)
+ - Number of Models Considered: 2
+ - Specifications Source: OpenRouter API
+ ```
+
+    This is useful for debugging if you suspect the judge model isn't selecting the best model.
+
diff --git a/docs/docs/Components/loop.mdx b/docs/docs/Components/loop.mdx
new file mode 100644
index 000000000000..e253183a7f65
--- /dev/null
+++ b/docs/docs/Components/loop.mdx
@@ -0,0 +1,71 @@
+---
+title: Loop
+slug: /loop
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **Loop** component iterates over a list of inputs, passing individual items to other components attached at the **Item** output port until there are no items left to process.
+Then, the **Loop** component passes the aggregated result of all looping to the component connected to the **Done** port.
+
+## The looping process
+
+The **Loop** component is like a miniature flow within your flow.
+Here's a breakdown of the looping process:
+
+1. Accepts a list of [`Data`](/data-types#data) or [`DataFrame`](/data-types#dataframe) objects, such as a CSV file, through the **Loop** component's **Inputs** port.
+
+2. Splits the input into individual items. For example, a CSV file is broken down by rows.
+
+   Specifically, the **Loop** component repeatedly extracts items by the `text` key in the `Data` or `DataFrame` objects until there are no more items to extract.
+   Each `item` output is a `Data` object.
+
+3. Iterates over each `item` by passing it to the **Item** output port.
+
+ This port connects to one or more components that perform actions on each item.
+ The final component in the **Item** loop connects back to the **Loop** component's **Looping** port to process the next item.
+
+ Only one component connects to the **Item** port, but you can pass the data through as many components as you need, as long as the last component in the chain connects back to the **Looping** port.
+
+ The [**If-Else** component](/if-else) isn't compatible with the **Loop** component.
+ For more information, see [Conditional looping](#conditional-looping).
+
+4. After processing all items, the results are aggregated into a single `Data` object that is passed from the **Loop** component's **Done** port to the next component in the flow.
+
+The following simplified Python code summarizes how the **Loop** component works.
+This _isn't_ the actual component code; it is only meant to help you understand the general process.
+
+```python
+for item in input_list:  # Receive input data as a list of items
+    process_item(item)  # Process each item through components connected at the Item port
+    if has_more_items():
+        continue  # Loop back to the Looping port to process the next item
+    else:
+        break  # Exit the loop when no items are left
+
+done = aggregate_results() # Compile all returned items
+
+print(done) # Send the aggregated results from the Done port to another component
+```
+
+## Loop example
+
+In the following example, the **Loop** component iterates over a CSV file until there are no rows left to process.
+In this case, the **Item** port passes each row to a **Type Convert** component, which converts the row into a `Message` object. The `Message` is then passed to a **Structured Output** component, which produces structured data that is passed back to the **Loop** component's **Looping** port.
+After processing all rows, the **Loop** component loads the aggregated list of structured data into a Chroma DB database through the **Chroma DB** component connected to the **Done** port.
+
+
+
+:::tip
+For more examples of the **Loop** component, try the **Research Translation Loop** template in Langflow, or see the video tutorial [Mastering the Loop Component & Agentic RAG in Langflow](https://www.youtube.com/watch?v=9Wx7WODSKTo).
+:::
+
+## Conditional looping
+
+The [**If-Else** component](/if-else) isn't compatible with the **Loop** component.
+If you need conditional loop events, redesign your flow to process conditions before the loop.
+For example, if you are looping over a `DataFrame`, you could use multiple [**DataFrame Operations** components](/dataframe-operations) to conditionally filter data, and then run separate loops on each set of filtered data.
+
+
+
diff --git a/docs/docs/Components/mcp-tools.mdx b/docs/docs/Components/mcp-tools.mdx
new file mode 100644
index 000000000000..07138ec76baf
--- /dev/null
+++ b/docs/docs/Components/mcp-tools.mdx
@@ -0,0 +1,32 @@
+---
+title: MCP Tools
+slug: /mcp-tools
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **MCP Tools** component connects to a Model Context Protocol (MCP) server and exposes the MCP server's functions as tools for Langflow agents to use to respond to input.
+
+In addition to publicly available MCP servers and your own custom-built MCP servers, you can connect Langflow MCP servers, which allow your agent to use your Langflow flows as tools.
+To do this, use the **MCP Tools** component's [HTTP/SSE mode](/mcp-client#mcp-http-mode) to connect to your Langflow project's MCP server.
+
+For more information, see [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server).
+
+## MCP Tools parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| mcp_server | String | Input parameter. The MCP server to connect to. Select from previously configured servers or add a new one. |
+| tool | String | Input parameter. The specific tool to execute from the connected MCP server. Leave blank to allow access to all tools. |
+| use_cache | Boolean | Input parameter. Enable caching of MCP server and tools to improve performance. Default: `false`. |
+| verify_ssl | Boolean | Input parameter. Enable SSL certificate verification for HTTPS connections. Default: `true`. |
+| response | DataFrame | Output parameter. [`DataFrame`](/data-types#dataframe) containing the response from the executed tool. |
+
+
+Earlier versions of the MCP Tools component
+
+* In Langflow version 1.5, the **MCP Connection** component was renamed to the **MCP Tools** component.
+* In Langflow version 1.3, the **MCP Tools (stdio)** and **MCP Tools (SSE)** components were removed and replaced by the unified **MCP Connection** component, which was later renamed to **MCP Tools**.
+
+
\ No newline at end of file
diff --git a/docs/docs/Components/components-helpers.mdx b/docs/docs/Components/message-history.mdx
similarity index 80%
rename from docs/docs/Components/components-helpers.mdx
rename to docs/docs/Components/message-history.mdx
index bdbb5d9d7511..fc6071e3f1b9 100644
--- a/docs/docs/Components/components-helpers.mdx
+++ b/docs/docs/Components/message-history.mdx
@@ -1,40 +1,12 @@
---
-title: Helpers
-slug: /components-helpers
+title: Message History
+slug: /message-history
---
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
-
-Helper components provide utility functions to help manage data and perform simple tasks in your flow.
-
-## Calculator
-
-The **Calculator** component performs basic arithmetic operations on mathematical expressions.
-It supports addition, subtraction, multiplication, division, and exponentiation operations.
-
-For an example of using this component in a flow, see the [**Python Interpreter** component](/components-processing#python-interpreter).
-
-### Calculator parameters
-
-| Name | Type | Description |
-|------|------|-------------|
-| expression | String | Input parameter. The arithmetic expression to evaluate, such as `4*4*(33/22)+12-20`. |
-| result | Data | Output parameter. The calculation result as a [`Data` object](/data-types) containing the evaluated expression. |
-
-## Current Date
-
-The **Current Date** component returns the current date and time in a selected timezone. This component provides a flexible way to obtain timezone-specific date and time information within a Langflow pipeline.
-
-### Current Date parameters
-
-| Name | Type | Description |
-|------|------|-------------|
-| timezone | String | Input parameter. The timezone for the current date and time. |
-| current_date | String | Output parameter. The resulting current date and time in the selected timezone. |
-
-## Message History
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
The **Message History** component provides combined chat history and message storage functionality.
It can store and retrieve chat messages from either [Langflow storage](/memory) _or_ a dedicated chat memory database like Mem0 or Redis.
@@ -52,7 +24,7 @@ Use the **Message History** component for the following use cases:
For more information, see [Store chat memory](/memory#store-chat-memory).
:::
-### Use the Message History component in a flow
+## Use the Message History component in a flow
The **Message History** component has two modes, depending on where you want to use it in your flow:
@@ -175,9 +147,7 @@ Other options include the [**Mem0 Chat Memory** component](/bundles-mem0) and [*
-### Message History parameters
-
-import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+## Message History parameters
@@ -211,7 +181,7 @@ The available parameters depend on whether the component is in **Retrieve** or *
-### Message History output
+## Message History output
Memories can be retrieved in one of two formats:
@@ -221,25 +191,4 @@ This is the typical output format used to pass memories _as chat messages_ to an
* **DataFrame**: Returns memories as a `DataFrame` containing the message data.
Useful for cases where you need to retrieve memories in a tabular format rather than as chat messages.
-You can set the output type near the component's output port.
-
-## Legacy Helper components
-
-import PartialLegacy from '@site/docs/_partial-legacy.mdx';
-
-
-
-The following Helper components are in legacy status:
-
-* **Message Store**: Replaced by the [**Message History** component](#message-history)
-* **Create List**: Replace with [Processing components](/components-processing)
-* **ID Generator**: Replace with a component that executes arbitrary code to generate an ID or embed an ID generator script your application code (external to your Langflow flows).
-* **Output Parser**: Replace with the [**Structured Output** component](/components-processing#structured-output) and [**Parser** component](/components-processing#parser).
-The components you need depend on the data types and complexity of the parsing task.
-
- The **Output Parser** component transformed the output of a language model into comma-separated values (CSV) format, such as `["item1", "item2", "item3"]`, using LangChain's `CommaSeparatedListOutputParser`.
- The **Structured Output** component is a good alternative for this component because it also formats LLM responses with support for custom schemas and more complex parsing.
-
- **Parsing** components only provide formatting instructions and parsing functionality.
- _They don't include prompts._
- You must connect parsers to **Prompt Template** components to create prompts that LLMs can use.
\ No newline at end of file
+You can set the output type near the component's output port.
\ No newline at end of file
diff --git a/docs/docs/Components/mock-data.mdx b/docs/docs/Components/mock-data.mdx
new file mode 100644
index 000000000000..d01700b4e234
--- /dev/null
+++ b/docs/docs/Components/mock-data.mdx
@@ -0,0 +1,18 @@
+---
+title: Mock Data
+slug: /mock-data
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **Mock Data** component generates sample data for testing and development.
+You can select these output types:
+
+* `message_output`: A [Message (text)](/data-types#message) output with Lorem Ipsum sample text.
+* `data_output`: A [Data (JSON)](/data-types#data) object containing a JSON structure with one sample record under `records` and a `summary` section.
+* `dataframe_output`: A [DataFrame (tabular)](/data-types#dataframe) with 50 mock records, including columns such as `customer_id`, `first_name`, and `last_name`.
+
diff --git a/docs/docs/Components/notify-and-listen.mdx b/docs/docs/Components/notify-and-listen.mdx
new file mode 100644
index 000000000000..310eb756047e
--- /dev/null
+++ b/docs/docs/Components/notify-and-listen.mdx
@@ -0,0 +1,15 @@
+---
+title: Notify and Listen
+slug: /notify-and-listen
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **Notify** and **Listen** components are used together.
+
+The **Notify** component builds a notification from the current flow's context, including specific data content and a status identifier.
+
+The resulting notification is sent to the **Listen** component.
+The notification data can then be passed to other components in the flow, such as the [**If-Else** component](/if-else).
+
diff --git a/docs/docs/Components/parser.mdx b/docs/docs/Components/parser.mdx
new file mode 100644
index 000000000000..d92a1854c007
--- /dev/null
+++ b/docs/docs/Components/parser.mdx
@@ -0,0 +1,142 @@
+---
+title: Parser
+slug: /parser
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
+
+The **Parser** component extracts text from structured data (`DataFrame` or `Data`) using a template or direct stringification.
+The output is a `Message` containing the parsed text.
+
+This is a versatile component for data extraction and manipulation in your flows.
+For examples of **Parser** components in flows, see the following:
+
+* [**Batch Run** component example](/batch-run)
+* [**Structured Output** component example](/structured-output)
+* **Financial Report Parser** template
+* [Trigger flows with webhooks](/webhook)
+* [Create a vector RAG chatbot](/chat-with-rag)
+
+
+
+## Parsing modes
+
+The **Parser** component has two modes: **Parser** and **Stringify**.
+
+
+
+
+In **Parser** mode, you create a template for text output that can include literal strings and variables for extracted keys.
+
+Use curly braces to define variables anywhere in the template.
+Variables must match keys in the `DataFrame` or `Data` input, such as column names.
+For example, `{name}` extracts the value of a `name` key.
+For more information about the content and structure of `DataFrame` and `Data` objects, see [Langflow data types](/data-types).
+
+
+
+When the flow runs, the **Parser** component iterates over the input, producing a `Message` for each parsed item.
+For example, parsing a `DataFrame` creates a `Message` for each row, populated with the unique values from that row.
+
+
+Employee summary template
+
+This example template extracts employee data into a natural language summary about an employee's hire date and current role:
+
+```text
+{employee_first_name} {employee_last_name} was hired on {start_date}.
+Their current position is {job_title} ({grade}).
+```
+
+The resulting `Message` output replaces the variables with the corresponding extracted values.
+For example:
+
+```text
+Renlo Kai was hired on 11-July-2017.
+Their current position is Software Engineer (Principal).
+```
+
+
+
+
+Employee profile template
+
+This example template uses Markdown syntax and extracted employee data to create an employee profile:
+
+```text
+# Employee Profile
+## Personal Information
+- **Name:** {name}
+- **ID:** {id}
+- **Email:** {email}
+```
+
+When the flow runs, the **Parser** component iterates over each row of the `DataFrame`, populating the template's variables with the appropriate extracted values.
+The resulting text for each row is output as a [`Message`](/data-types#message).
+
+
+
+The following parameters are available in **Parser** mode.
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| input_data | Data or DataFrame | Input parameter. The `Data` or `DataFrame` input to parse. |
+| pattern | Template | Input parameter. The formatting template using plaintext and variables for keys (`{KEY_NAME}`). See the preceding examples for more information. |
+| sep | Separator | Input parameter. A string defining the separator for rows or lines. Default: `\n` (new line). |
+| clean_data | Clean Data | Input parameter. Whether to remove empty rows and lines in each cell or key of the `DataFrame` or `Data` input. Default: Enabled (`true`). |
+
+
+
+
+Use **Stringify** mode to convert the entire input directly to text.
+This mode doesn't support templates or key selection.
+
+The following parameters are available in **Stringify** mode.
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| input_data | Data or DataFrame | Input parameter. The `Data` or `DataFrame` input to parse. |
+| sep | Separator | Input parameter. A string defining the separator for rows or lines. Default: `\n` (new line). |
+| clean_data | Clean Data | Input parameter. Whether to remove empty rows and lines in each cell or key of the `DataFrame` or `Data` input. Default: Enabled (`true`). |
+
+
+
+
+## Test and troubleshoot parsed text
+
+To test the **Parser** component, click **Run component**, and then click **Inspect output** to see the `Message` output with the parsed text.
+You can also connect a **Chat Output** component if you want to view the output in the **Playground**.
+
+If the `Message` output from the **Parser** component has empty or unexpected values, there might be a mapping error between the input and the parsing mode, the input might contain empty values, or the input might not be suitable for plaintext extraction.
+
+For example, assume you use the following template to parse a `DataFrame`:
+
+```text
+{employee_first_name} {employee_last_name} is a {job_title} ({grade}).
+```
+
+The following `Message` could result from parsing a row where `employee_first_name` was empty and `grade` was `null`:
+
+```text
+ Smith is a Software Engineer (null).
+```
+
+To troubleshoot missing or unexpected values, you can do the following:
+
+* Make sure the variables in your template map to keys in the incoming `Data` or `DataFrame`.
+To see the data being passed directly to the **Parser** component, click **Inspect output** on the component that is sending data to the **Parser** component.
+
+* Check the source data for missing or incorrect values.
+There are several ways you can address these inconsistencies:
+
+ * Rectify the source data directly.
+ * Use other components to amend or filter anomalies before passing the data to the **Parser** component.
+ There are many components you can use for this depending on your goal, such as the [**Data Operations** component](/data-operations), [**Structured Output** component](/structured-output), and [**Smart Transform** component](/smart-transform).
+ * Enable the **Parser** component's **Clean Data** parameter to skip empty rows or lines.
+
diff --git a/docs/docs/Components/python-interpreter.mdx b/docs/docs/Components/python-interpreter.mdx
new file mode 100644
index 000000000000..3cfb5dded0dd
--- /dev/null
+++ b/docs/docs/Components/python-interpreter.mdx
@@ -0,0 +1,86 @@
+---
+title: Python Interpreter
+slug: /python-interpreter
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+This component allows you to execute Python code with imported packages.
+
+The **Python Interpreter** component can only import packages that are already installed in your Langflow environment.
+If you encounter an `ImportError` when trying to use a package, you need to install it first.
+
+To install custom packages, see [Install custom dependencies](/install-custom-dependencies).
+
+## Use the Python Interpreter in a flow
+
+1. To use this component in a flow, in the **Global Imports** field, add the packages you want to import as a comma-separated list, such as `math,pandas`.
+At least one import is required.
+2. In the **Python Code** field, enter the Python code you want to execute. Use `print()` to see the output.
+3. Optional: Enable **Tool Mode**, and then connect the **Python Interpreter** component to an **Agent** component as a tool.
+For example, connect a **Python Interpreter** component and a [**Calculator** component](/calculator) as tools for an **Agent** component, and then test how it chooses different tools to solve math problems.
+
+4. Ask the agent a simple math question.
+The **Calculator** tool can add, subtract, multiply, divide, or perform exponentiation.
+The agent executes the `evaluate_expression` tool to correctly answer the question.
+
+Result:
+```text
+Executed evaluate_expression
+Input:
+{
+ "expression": "2+5"
+}
+Output:
+{
+ "result": "7"
+}
+```
+
+5. Give the agent complete Python code.
+This example creates a pandas DataFrame with the imported `pandas` package, and then returns the square root of the mean squares.
+
+```python
+import pandas as pd
+import math
+
+# Create a simple DataFrame
+df = pd.DataFrame({
+ 'numbers': [1, 2, 3, 4, 5],
+ 'squares': [x**2 for x in range(1, 6)]
+})
+
+# Calculate the square root of the mean
+result = math.sqrt(df['squares'].mean())
+print(f"Square root of mean squares: {result}")
+```
+
+The agent correctly chooses the `run_python_repl` tool to solve the problem.
+
+Result:
+```text
+Executed run_python_repl
+
+Input:
+
+{
+ "python_code": "import pandas as pd\nimport math\n\n# Create a simple DataFrame\ndf = pd.DataFrame({\n 'numbers': [1, 2, 3, 4, 5],\n 'squares': [x**2 for x in range(1, 6)]\n})\n\n# Calculate the square root of the mean\nresult = math.sqrt(df['squares'].mean())\nprint(f\"Square root of mean squares: {result}\")"
+}
+Output:
+
+{
+ "result": "Square root of mean squares: 3.3166247903554"
+}
+```
+
+If you don't include the package imports in the chat, the agent can still create the table using `pd.DataFrame`, because the `pandas` package is imported globally by the **Python Interpreter** component in the **Global Imports** field.
+
+## Python Interpreter parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| global_imports | String | Input parameter. A comma-separated list of modules to import globally, such as `math,pandas,numpy`. |
+| python_code | Code | Input parameter. The Python code to execute. Only modules specified in Global Imports can be used. |
+| results | Data | Output parameter. The output of the executed Python code, including any printed results or errors. |
\ No newline at end of file
diff --git a/docs/docs/Components/read-file.mdx b/docs/docs/Components/read-file.mdx
new file mode 100644
index 000000000000..604b0375c51e
--- /dev/null
+++ b/docs/docs/Components/read-file.mdx
@@ -0,0 +1,172 @@
+---
+title: Read File
+slug: /read-file
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+import PartialDockerDoclingDeps from '@site/docs/_partial-docker-docling-deps.mdx';
+
+In Langflow version 1.7.0, this component was renamed from **File** to **Read File**.
+
+The **Read File** component loads and parses files, converting the content into a `Data`, `DataFrame`, or `Message` object.
+It supports multiple file types, provides parameters for parallel processing and error handling, and supports advanced parsing with the Docling library.
+
+You can add files to the **Read File** component in the visual editor or at runtime, and you can upload multiple files at once.
+For more information about uploading files and working with files in flows, see [File management](/concepts-file-management) and [Create a chatbot that can ingest files](/chat-with-files).
+
+## File type and size limits
+
+By default, the maximum file size is 1024 MB.
+To modify this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables).
+
+
+Supported file types
+
+The following file types are supported by the **Read File** component.
+Use archive and compressed formats to bundle multiple files together, or use the [**Directory** component](/directory) to load all files in a directory.
+
+- `.bz2`
+- `.csv`
+- `.docx`
+- `.gz`
+- `.htm`
+- `.html`
+- `.json`
+- `.js`
+- `.md`
+- `.mdx`
+- `.pdf`
+- `.py`
+- `.sh`
+- `.sql`
+- `.tar`
+- `.tgz`
+- `.ts`
+- `.tsx`
+- `.txt`
+- `.xml`
+- `.yaml`
+- `.yml`
+- `.zip`
+
+
+
+If you need to load an unsupported file type, you must either use a different component that supports that file type, potentially parsing it outside Langflow, or convert the file to a supported type before uploading it.
+
+For images, see [Upload images](/concepts-file-management#upload-images).
+
+For videos, see the **Twelve Labs** and **YouTube** [bundles](/components-bundle-components).
+
+## File parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| path | Files | Input parameter. The path to files to load. Can be local or in [Langflow file management](/concepts-file-management). Supports individual files and bundled archives. |
+| file_path | Server File Path | Input parameter. A `Data` object with a `file_path` property pointing to a file in [Langflow file management](/concepts-file-management) or a `Message` object with a path to the file. Supersedes **Files** (`path`) but supports the same file types. |
+| separator | Separator | Input parameter. The separator to use between multiple outputs in `Message` format. |
+| silent_errors | Silent Errors | Input parameter. If `true`, errors in the component don't raise an exception. Default: Disabled (`false`). |
+| delete_server_file_after_processing | Delete Server File After Processing | Input parameter. If `true` (default), the **Server File Path** (`file_path`) is deleted after processing. |
+| ignore_unsupported_extensions | Ignore Unsupported Extensions | Input parameter. If enabled (`true`), files with unsupported extensions are accepted but not processed. If disabled (`false`), the **Read File** component can throw an error if an unsupported file type is provided. Default: `true`. |
+| ignore_unspecified_files | Ignore Unspecified Files | Input parameter. If `true`, `Data` with no `file_path` property is ignored. If `false` (default), the component errors when a file isn't specified. |
+| concurrency_multithreading | Processing Concurrency | Input parameter. The number of files to process concurrently if multiple files are uploaded. Default: `1`. Values greater than `1` enable parallel processing for two or more files. Ignored for single-file uploads and advanced parsing. |
+| advanced_parser | Advanced Parser | Input parameter. If `true`, enables [advanced parsing](#advanced-parsing). Only available for single-file uploads of compatible file types. Default: Disabled (`false`). |
+
+## Advanced parsing
+
+Starting in Langflow version 1.6, the **Read File** component supports advanced document parsing using the [Docling](https://docling-project.github.io/docling/) library for supported file types.
+
+To use advanced parsing, do the following:
+
+1. Complete the following prerequisites, if applicable:
+
+ * **Install Langflow version 1.6 or later**: Earlier versions don't support advanced parsing with the **Read File** component. For upgrade guidance, see the [Release notes](/release-notes).
+
+ * **Install Docling dependency on macOS Intel (x86_64)**: The Docling dependency isn't installed by default for macOS Intel (x86_64). Use the [Docling installation guide](https://docling-project.github.io/docling/installation/) to install the Docling dependency.
+
+ For all other operating systems, the Docling dependency is installed by default.
+
+
+
+ * **Enable Developer Mode for Windows**:
+
+
+ Developer Mode isn't required for Langflow OSS on Windows.
+
+2. Add one valid file to the **Read File** component.
+
+ :::info Advanced parsing limitations
+ * Advanced parsing processes only one file.
+ If you select multiple files, the **Read File** component processes the first file only, ignoring any additional files.
+  To process multiple files with advanced parsing, pass each file to a separate **Read File** component, or use the dedicated [**Docling** components](/bundles-docling).
+
+ * Advanced parsing can process any of the **Read File** component's supported file types except `.csv`, `.xlsx`, and `.parquet` files because it is designed for document processing, such as extracting text from PDFs.
+ For structured data analysis, use the [**Parser** component](/parser).
+ :::
+
+3. Enable **Advanced Parsing**.
+
+4. Click **Controls** in the [component's header menu](/concepts-components#component-menus) to configure advanced parsing parameters, which are hidden by default:
+
+ | Name | Display Name | Info |
+ |------|--------------|------|
+ | pipeline | Pipeline | Input parameter, advanced parsing. The Docling pipeline to use, either `standard` (default, recommended) or `vlm` (may produce inconsistent results). |
+ | ocr_engine | OCR Engine | Input parameter, advanced parsing. The OCR parser to use if `pipeline` is `standard`. Options are `None` (default) or [`EasyOCR`](https://pypi.org/project/easyocr/). `None` means that no OCR engine is used, and this can produce inconsistent or broken results for some documents. This setting has no effect with the `vlm` pipeline. |
+ | md_image_placeholder | Markdown Image Placeholder | Input parameter, advanced parsing. Defines the placeholder for image files if the output type is **Markdown**. Default: ``. |
+ | md_page_break_placeholder | Markdown Page Break Placeholder | Input parameter, advanced parsing. Defines the placeholder for page breaks if the output type is **Markdown**. Default: `""` (empty string). |
+ | doc_key | Document Key | Input parameter, advanced parsing. The key to use for the `DoclingDocument` column, which holds the structured information extracted from the source document. See [Docling Document](https://docling-project.github.io/docling/concepts/docling_document/) for details. Default: `doc`. |
+
+ :::tip
+ For additional Docling features, including other components and OCR parsers, use the [**Docling** bundle](/bundles-docling).
+ :::
+
+## File output
+
+The output of the **Read File** component depends on the number of files loaded and whether advanced parsing is enabled.
+If multiple options are available, you can set the output type near the component's output port.
+
+
+
+
+If you run the **Read File** component with no file selected, it throws an error, or, if **Silent Errors** is enabled, produces no output.
+
+
+
+
+If advanced parsing is disabled and you upload one file, the following output types are available:
+
+- **Structured Content**: Available only for `.csv`, `.xlsx`, `.parquet`, and `.json` files.
+
+ - For `.csv` files, produces a [`DataFrame`](/data-types#dataframe) representing the table data.
+ - For `.json` files, produces a [`Data`](/data-types#data) object with the parsed JSON data.
+
+- **Raw Content**: A [`Message`](/data-types#message) containing the file's raw text content.
+
+- **File Path**: A [`Message`](/data-types#message) containing the path to the file in [Langflow file management](/concepts-file-management).
+
+
+
+
+If advanced parsing is enabled and you upload one file, the following output types are available:
+
+- **Structured Output**: A [`DataFrame`](/data-types#dataframe) containing the Docling-processed document data with text elements, page numbers, and metadata.
+
+- **Markdown**: A [`Message`](/data-types#message) containing the uploaded document contents in Markdown format with image placeholders.
+
+- **File Path**: A [`Message`](/data-types#message) containing the path to the file in [Langflow file management](/concepts-file-management).
+
+
+
+
+If you upload multiple files, the component outputs **Files**, which is a [`DataFrame`](/data-types#dataframe) containing the content and metadata of all selected files.
+
+[Advanced parsing](#advanced-parsing) doesn't support multiple files; it processes only the first file.
+
+
+
+
diff --git a/docs/docs/Components/run-flow.mdx b/docs/docs/Components/run-flow.mdx
new file mode 100644
index 000000000000..234ead5b901f
--- /dev/null
+++ b/docs/docs/Components/run-flow.mdx
@@ -0,0 +1,28 @@
+---
+title: Run Flow
+slug: /run-flow
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **Run Flow** component runs another Langflow flow as a subprocess of the current flow.
+
+You can use this component to chain flows together, run flows conditionally, and attach flows to [**Agent** components](/components-agents) as [tools for agents](/agents-tools) to run as needed.
+
+When used with an agent, the `name` and `description` metadata that the agent uses to register the tool are created automatically.
+
+When you select a flow for the **Run Flow** component, it uses the target flow's graph structure to dynamically generate input and output fields on the **Run Flow** component.
+
+## Run Flow parameters
+
+
+
+| Name | Type | Description |
+|-------------------|----------|----------------------------------------------------------------|
+| flow_name_selected| Dropdown | Input parameter. The name of the flow to run. |
+| session_id | String | Input parameter. The session ID for the flow run, if you want to pass a custom session ID for the subflow. |
+| flow_tweak_data | Dict | Input parameter. Dictionary of tweaks to customize the flow's behavior. Available tweaks depend on the selected flow. |
+| dynamic inputs | Various | Input parameter. Additional inputs are generated based on the selected flow. |
+| run_outputs | A `List` of types (`Data`, `Message`, or `DataFrame`) | Output parameter. All outputs generated from running the flow. |
+
diff --git a/docs/docs/Components/smart-router.mdx b/docs/docs/Components/smart-router.mdx
new file mode 100644
index 000000000000..826d273d493b
--- /dev/null
+++ b/docs/docs/Components/smart-router.mdx
@@ -0,0 +1,59 @@
+---
+title: Smart Router
+slug: /smart-router
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **Smart Router** component is an LLM-powered variation of the [**If-Else** component](/if-else).
+Instead of string matching, the **Smart Router** uses a connected [**Language Model** component](/components-models) to categorize and route incoming messages.
+
+You can use the **Smart Router** component anywhere you would use the **If-Else** component.
+For an example, create the [If-Else component example flow](/if-else#use-the-if-else-component-in-a-flow), and then replace the **If-Else** component with a **Smart Router** component.
+Instead of a regex, use the **Routes** table to define the outputs for your messages.
+
+The **Routes** table defines the categories for routing.
+For example, a routes table for sentiment analysis might look like this:
+
+| Route Name | Route Description | Route Message |
+|------------|-------------------|---------------|
+| Positive | Positive feedback, satisfaction, or compliments | |
+| Negative | Complaints, issues, or dissatisfaction | |
+| Neutral | Questions, requests for information, or neutral statements | Thank you for your inquiry! |
+
+This component creates ports for the **Positive**, **Negative**, and **Neutral** routes.
+When the LLM categorizes the input text, it routes to the matching category's output port by route name.
+For the Positive and Negative routes, the original input text is passed through.
+For the Neutral route, the `"Thank you for your inquiry!"` route message is sent instead of the input text.
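+
+The following sketch summarizes the basic routing logic (an illustrative sketch, not the component's implementation):
+
+```python
+def route(input_text: str, routes: dict[str, str | None], llm_choice: str):
+    # llm_choice is the exact route name returned by the judge LLM
+    route_message = routes.get(llm_choice)
+    # Send the custom route message if one is defined; otherwise pass the input through
+    return llm_choice, route_message or input_text
+
+routes = {"Positive": None, "Negative": None, "Neutral": "Thank you for your inquiry!"}
+print(route("How do I reset my password?", routes, "Neutral"))
+# ('Neutral', 'Thank you for your inquiry!')
+```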
+
+The **Override Output** parameter sends a single message regardless of which route the LLM matches.
+The override message takes precedence over all other output options, and completely replaces both the original input text and any custom route messages.
+For the sentiment analysis example, if you set the **Override Output** to `"Message received"`, all routes send the same message.
+
+The **Additional Instructions** parameter adds extra guidance to the LLM.
+Use the `{input_text}` placeholder to reference the input text being categorized, and `{routes}` to reference the comma-separated list of route names.
+
+For example, to add domain-specific context for the LLM, include the following as the custom prompt:
+
+```text
+The text "{input_text}" is from a customer support context.
+Consider the urgency and emotional tone when choosing from {routes}.
+```
+
+## Smart Router parameters
+
+
+
+| Name | Type | Description |
+|---------------------|----------|-------------------------------------------------------------------|
+| Language Model | [LanguageModel](/data-types#languagemodel) | Input parameter. The language model to use for categorization. The LLM receives the input text and available categories, then returns the exact category name that matches. Required. |
+| Input | String | Input parameter. The primary text input for categorization. Required. |
+| Routes | Table | Input parameter. Table defining categories for routing. Each row contains a route name (required), an optional route description to help LLMs understand the category, and an optional custom output message. The component creates one output port for each route category. Required. |
+| Override Output | Message | Input parameter. An optional override message that takes precedence over all other output options. When provided, this message replaces both the original input text and any custom route messages for all routes. Advanced. |
+| Additional Instructions | String | Input parameter. Additional instructions for LLM-based categorization. These are added to the base classification prompt, which already includes the full Routes table (names and descriptions). Use `{input_text}` for the input text and `{routes}` for a comma-separated list of route names only.|
+| Include Else Output | Boolean | Input parameter. Include an Else output for cases that don't match any route. When disabled, no output is produced if no match is found. Default: `false`. |
+| Else | Message | Output parameter. The Else output. Only available when **Include Else Output** is `true`. Uses the override message (if provided) or the original input text when no route matches. |
\ No newline at end of file
diff --git a/docs/docs/Components/smart-transform.mdx b/docs/docs/Components/smart-transform.mdx
new file mode 100644
index 000000000000..c06fc1a33e4e
--- /dev/null
+++ b/docs/docs/Components/smart-transform.mdx
@@ -0,0 +1,43 @@
+---
+title: Smart Transform
+slug: /smart-transform
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+This component has been renamed multiple times.
+Its previous names include **Lambda Filter** and **Smart Function**.
+
+The **Smart Transform** component uses an LLM to generate a Lambda function to filter or transform structured data based on natural language instructions.
+You must connect this component to a [language model component](/components-models), which is used to generate a function based on the natural language instructions you provide in the **Instructions** parameter.
+The component runs the generated function against the data input, and then outputs the results as [`Data`](/data-types#data).
+
+:::tip
+Provide brief, clear instructions, focusing on the desired outcome or specific actions, such as `Filter the data to only include items where the 'status' is 'active'`.
+One sentence or less is preferred because end punctuation, like periods, can cause errors or unexpected behavior.
+
+If you need to provide more detailed instructions that aren't directly relevant to the Lambda function, you can input them in the **Language Model** component's **Input** field or through a **Prompt Template** component.
+:::
+
+The following example uses the **API Request** component to pass JSON data from the `https://jsonplaceholder.typicode.com/users` endpoint to the **Smart Transform** component.
+Then, the **Smart Transform** component passes the data and the instruction `extract emails` to the attached **Language Model** component.
+From there, the LLM generates a filter function that extracts email addresses from the JSON data, returning the filtered data as chat output.
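+
+For the instruction `extract emails`, the generated function might look like the following (hypothetical LLM output, shown for illustration):
+
+```python
+# Sample records shaped like the jsonplaceholder /users response
+users = [
+    {"id": 1, "name": "Leanne Graham", "email": "Sincere@april.biz"},
+    {"id": 2, "name": "Ervin Howell", "email": "Shanna@melissa.tv"},
+]
+
+# A Lambda function the LLM might generate for "extract emails"
+extract_emails = lambda data: [item["email"] for item in data]
+print(extract_emails(users))  # ['Sincere@april.biz', 'Shanna@melissa.tv']
+```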
+
+
+
+## Smart Transform parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| data | Data | Input parameter. The structured data to filter or transform using a Lambda function. |
+| llm | Language Model | Input parameter. Connect [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component. |
+| filter_instruction | Instructions | Input parameter. The natural language instructions for how to filter or transform the data. The LLM uses these instructions to create a Lambda function. |
+| sample_size | Sample Size | Input parameter. For large datasets, the number of characters to sample from the dataset head and tail. Only applied if the dataset meets or exceeds `max_size`. Default: `1000`. |
+| max_size | Max Size | Input parameter. The number of characters for the dataset to be considered large, which triggers sampling by the `sample_size` value. Default: `30000`. |
+
diff --git a/docs/docs/Components/split-text.mdx b/docs/docs/Components/split-text.mdx
new file mode 100644
index 000000000000..fccaada9b89e
--- /dev/null
+++ b/docs/docs/Components/split-text.mdx
@@ -0,0 +1,50 @@
+---
+title: Split Text
+slug: /split-text
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
+
+The **Split Text** component splits data into chunks based on parameters like chunk size and separator.
+It is often used to chunk data to be tokenized and embedded into vector databases.
+For examples, see [Use embedding model components in a flow](/components-embedding-models#use-embedding-model-components-in-a-flow) and [Create a Vector RAG chatbot](/chat-with-rag).
+
+
+
+The component accepts `Message`, `Data`, or `DataFrame`, and then outputs either **Chunks** or **DataFrame**.
+The **Chunks** output returns a list of [`Data`](/data-types#data) objects containing individual text chunks.
+The **DataFrame** output returns the list of chunks as a structured [`DataFrame`](/data-types#dataframe) with additional `text` and `metadata` columns.
+
+## Split Text parameters
+
+The **Split Text** component's parameters control how the text is split into chunks, specifically the `chunk_size`, `chunk_overlap`, and `separator` parameters.
+
+To test the chunking behavior, add a **Text Input** or **Read File** component with some sample data to chunk, click **Run component** on the **Split Text** component, and then click **Inspect output** to view the list of chunks and their metadata. The `text` column contains the actual text chunks created from your chunking settings.
+If the chunks aren't split as you expect, adjust the parameters, rerun the component, and then inspect the new output.
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| data_inputs | Input | Input parameter. The data to split. Input must be in `Message`, `Data`, or `DataFrame` format. |
+| chunk_overlap | Chunk Overlap | Input parameter. The number of characters to overlap between chunks. This helps maintain context across chunks. When a separator is encountered, the overlap is applied at the point of the separator so that the subsequent chunk contains the last _n_ characters of the preceding chunk. Default: `200`. |
+| chunk_size | Chunk Size | Input parameter. The target length for each chunk after splitting. The data is first split by separator, and then chunks smaller than the `chunk_size` are merged up to this limit. However, if the initial separator split produces any chunks larger than the `chunk_size`, those chunks are neither further subdivided nor combined with any smaller chunks; these chunks will be output as-is even though they exceed the `chunk_size`. Default: `1000`. See [Tokenization errors due to chunk size](#chunk-size) for important considerations. |
+| separator | Separator | Input parameter. A string defining a character to split on, such as `\n` to split on new line characters, `\n\n` to split at paragraph breaks, or `},` to split at the end of JSON objects. You can directly provide the separator string, or pass a separator string from another component as `Message` input. |
+| text_key | Text Key | Input parameter. The key to use for the text column that is extracted from the input and then split. Default: `text`. |
+| keep_separator | Keep Separator | Input parameter. Select how to handle separators in output chunks. If `False`, separators are omitted from output chunks. Options include `False` (remove separators), `True` (keep separators in chunks without preference for placement), `Start` (place separators at the beginning of chunks), or `End` (place separators at the end of chunks). Default: `False`. |
+
+### Tokenization errors due to chunk size {#chunk-size}
+
+When using **Split Text** with embedding models (especially NVIDIA models like `nvidia/nv-embed-v1`), you may need to use smaller chunk sizes (`500` or less) even though the model supports larger token limits.
+The **Split Text** component doesn't always enforce the exact chunk size you set, and individual chunks may exceed your specified limit.
+If you encounter tokenization errors, modify your text splitting strategy by reducing the chunk size, changing the overlap length, or using a more common separator.
+Then, test your configuration by running the flow and inspecting the component's output.
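+
+The following simplified sketch shows why an output chunk can exceed the configured size. It ignores overlap and isn't the component's actual implementation:
+
+```python
+def split_text(text: str, separator: str = "\n", chunk_size: int = 20) -> list[str]:
+    pieces = text.split(separator)  # Split on the separator first
+    chunks, current = [], ""
+    for piece in pieces:
+        # Merge pieces until adding another would exceed chunk_size
+        if current and len(current) + len(separator) + len(piece) > chunk_size:
+            chunks.append(current)
+            current = piece
+        else:
+            current = f"{current}{separator}{piece}" if current else piece
+    if current:
+        chunks.append(current)
+    # A single piece longer than chunk_size is emitted as-is, oversized
+    return chunks
+
+print(split_text("a" * 30 + "\n" + "b" * 10 + "\n" + "c" * 5))
+# ['aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', 'bbbbbbbbbb\nccccc']
+```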
+
+### Other text splitters
+
+See [LangChain text splitter components](/bundles-langchain#text-splitters).
+
diff --git a/docs/docs/Components/sql-database.mdx b/docs/docs/Components/sql-database.mdx
new file mode 100644
index 000000000000..f0c839dedd4c
--- /dev/null
+++ b/docs/docs/Components/sql-database.mdx
@@ -0,0 +1,127 @@
+---
+title: SQL Database
+slug: /sql-database
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **SQL Database** component executes SQL queries on any [SQLAlchemy-compatible database](https://docs.sqlalchemy.org/en/20/), such as PostgreSQL, MySQL, and SQLite.
+
+For CQL queries, see the [**DataStax** bundle](/bundles-datastax).
+
+## Query an SQL database with natural language prompts
+
+The following example demonstrates how to use the **SQL Database** component in a flow, and then modify the component to support natural language queries through an **Agent** component.
+
+This allows you to use the same **SQL Database** component for any query, rather than limiting it to a single manually entered query or requiring the user, application, or another component to provide valid SQL syntax as input.
+Users don't need to master SQL syntax because the **Agent** component translates the users' natural language prompts into SQL queries, passes the query to the **SQL Database** component, and then returns the results to the user.
+
+Additionally, input from applications and other components doesn't have to be extracted and transformed into exact SQL queries.
+Instead, you only need to provide enough context for the agent to understand that it should create and run an SQL query based on the incoming data.
+
+1. Use your own sample database or create a test database.
+
+    <details>
+    <summary>Create a test SQL database</summary>
+
+ 1. Create a database called `test.db`:
+
+ ```shell
+ sqlite3 test.db
+ ```
+
+ 2. Add some values to the database:
+
+ ```shell
+ sqlite3 test.db "
+ CREATE TABLE users (
+ id INTEGER PRIMARY KEY,
+ name TEXT,
+ email TEXT,
+ age INTEGER
+ );
+
+ INSERT INTO users (name, email, age) VALUES
+ ('John Doe', 'john@example.com', 30),
+ ('Jane Smith', 'jane@example.com', 25),
+ ('Bob Johnson', 'bob@example.com', 35);
+ "
+ ```
+
+ 3. Verify that the database has been created and contains your data:
+
+ ```shell
+ sqlite3 test.db "SELECT * FROM users;"
+ ```
+
+       The result should list the rows you inserted in the previous step:
+
+       ```shell
+       1|John Doe|john@example.com|30
+       2|Jane Smith|jane@example.com|25
+       3|Bob Johnson|bob@example.com|35
+       ```
+
+    </details>
+
+2. Add an **SQL Database** component to your flow.
+
+3. In the **Database URL** field, add the connection string for your database, such as `sqlite:///test.db`.
+
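+    The connection string follows the standard SQLAlchemy URL format, `dialect+driver://username:password@host:port/database`. For example, the following are plausible connection strings for SQLite, PostgreSQL, and MySQL; the exact driver segment depends on the driver packages installed in your environment:
+
+    ```text
+    sqlite:///test.db
+    postgresql+psycopg2://user:password@localhost:5432/mydb
+    mysql+pymysql://user:password@localhost:3306/mydb
+    ```
+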
+ At this point, you can enter an SQL query in the **SQL Query** field or use the [port](/concepts-components#component-ports) to pass a query from another component, such as a **Chat Input** component.
+ If you need more space, click **Expand** to open a full-screen text field.
+
+ However, to make this component more dynamic in an agentic context, use an **Agent** component to transform natural language input to SQL queries, as explained in the following steps.
+
+4. Click the **SQL Database** component to expose the [component's header menu](/concepts-components#component-menus), and then enable **Tool Mode**.
+
+ You can now use this component as a tool for an agent.
+ In **Tool Mode**, no query is set in the **SQL Database** component because the agent will generate and send one if it determines that the tool is required to complete the user's request.
+ For more information, see [Configure tools for agents](/agents-tools).
+
+5. Add an **Agent** component to your flow, and then enter your OpenAI API key.
+
+ The default model is an OpenAI model.
+ If you want to use a different model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
+
+ If you need to execute highly specialized queries, consider selecting a model that is trained for tasks like advanced SQL queries.
+ If your preferred model isn't in the **Agent** component's built-in model list, set **Model Provider** to **Connect other models**, and then connect any [language model component](/components-models).
+
+6. Connect the **SQL Database** component's **Toolset** output to the **Agent** component's **Tools** input.
+
+ 
+
+7. Click **Playground**, and then ask the agent a question about the data in your database, such as `Which users are in my database?`
+
+ The agent determines that it needs to query the database to answer the question, uses the LLM to generate an SQL query, and then uses the **SQL Database** component's `RUN_SQL_QUERY` action to run the query on your database.
+ Finally, it returns the results in a conversational format, unless you provide instructions to return raw results or a different format.
+
+ The following example queried a test database with little data, but with a more robust dataset you could ask more detailed or complex questions.
+
+    ```text
+    Here are the users in your database:
+
+    1. **John Doe** - Email: john@example.com
+    2. **Jane Smith** - Email: jane@example.com
+    3. **Bob Johnson** - Email: bob@example.com
+    ```
+
+## SQL Database parameters
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| database_url | Database URL | Input parameter. The SQLAlchemy-compatible database connection URL. |
+| query | SQL Query | Input parameter. The SQL query to execute, which can be entered directly, passed in from another component, or, in **Tool Mode**, automatically provided by an **Agent** component. |
+| include_columns | Include Columns | Input parameter. Whether to include column names in the result. The default is enabled (`true`). |
+| add_error | Add Error | Input parameter. Whether to append error messages, if any are returned, to the result. The default is disabled (`false`). |
+| run_sql_query | Result Table | Output parameter. The query results as a [`DataFrame`](/data-types#dataframe). |
\ No newline at end of file
diff --git a/docs/docs/Components/structured-output.mdx b/docs/docs/Components/structured-output.mdx
new file mode 100644
index 000000000000..c43c6799ef87
--- /dev/null
+++ b/docs/docs/Components/structured-output.mdx
@@ -0,0 +1,120 @@
+---
+title: Structured Output
+slug: /structured-output
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **Structured Output** component uses an LLM to transform any input into structured data (`Data` or `DataFrame`) using natural language formatting instructions and an output schema definition.
+For example, you can extract specific details from documents, like email messages or scientific papers.
+
+## Use the Structured Output component in a flow
+
+To use the **Structured Output** component in a flow, do the following:
+
+1. Provide an **Input Message**, which is the source material from which you want to extract structured data.
+This can come from practically any component, but it is typically a **Chat Input**, **Read File**, or other component that provides some unstructured or semi-structured input.
+
+ :::tip
+ Not all source material has to become structured output.
+ The power of the **Structured Output** component is that you can specify the information you want to extract, even if that data isn't explicitly labeled or an exact keyword match.
+ Then, the LLM can use your instructions to analyze the source material, extract the relevant data, and format it according to your specifications.
+ Any irrelevant source material isn't included in the structured output.
+ :::
+
+2. Define **Format Instructions** and an **Output Schema** to specify the data to extract from the source material and how to structure it in the final `Data` or `DataFrame` output.
+
+    The instructions are a prompt that tells the LLM what data to extract, how to format it, how to handle exceptions, and any other instructions relevant to preparing the structured data.
+
+    The schema is a table that defines the fields (keys) and data types to organize the data extracted by the LLM into a structured `Data` or `DataFrame` object.
+    For more information, see [Output Schema options](#output-schema-options).
+
+3. Attach a [language model component](/components-models) that is set to emit [`LanguageModel`](/data-types#languagemodel) output.
+
+ The LLM uses the **Input Message** and **Format Instructions** from the **Structured Output** component to extract specific pieces of data from the input text.
+ The output schema is applied to the model's response to produce the final `Data` or `DataFrame` structured object.
+
+4. Optional: Typically, the structured output is passed to downstream components that use the extracted data for other processes, such as the **Parser** or **Data Operations** components.
+
+
+
+
+<details>
+<summary>Structured Output example: Financial Report Parser template</summary>
+
+The **Financial Report Parser** template provides an example of how the **Structured Output** component can be used to extract structured data from unstructured text.
+
+The template's **Structured Output** component has the following configuration:
+
+* The **Input Message** comes from a **Chat Input** component that is preloaded with quotes from sample financial reports.
+
+* The **Format Instructions** are as follows:
+
+ ```text
+ You are an AI that extracts structured JSON objects from unstructured text.
+ Use a predefined schema with expected types (str, int, float, bool, dict).
+ Extract ALL relevant instances that match the schema - if multiple patterns exist, capture them all.
+ Fill missing or ambiguous values with defaults: null for missing values.
+ Remove exact duplicates but keep variations that have different field values.
+ Always return valid JSON in the expected format, never throw errors.
+ If multiple objects can be extracted, return them all in the structured format.
+ ```
+
+* The **Output Schema** includes keys for `EBITDA`, `NET_INCOME`, and `GROSS_PROFIT`.
+
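+With this configuration, the **Structured Output** component could emit a structured `Data` object similar to the following sketch, where the actual values depend on the source text in the **Input Message**:
+
+```json
+{
+  "EBITDA": "900 million",
+  "NET_INCOME": "500 million",
+  "GROSS_PROFIT": "1.2 billion"
+}
+```
+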
+The structured `Data` object is passed to a **Parser** component that produces a text string by mapping the schema keys to variables in the parsing template:
+
+```text
+EBITDA: {EBITDA} , Net Income: {NET_INCOME} , GROSS_PROFIT: {GROSS_PROFIT}
+```
+
+When printed to the **Playground**, the resulting `Message` replaces the variables with the actual values extracted by the **Structured Output** component. For example:
+
+```text
+EBITDA: 900 million , Net Income: 500 million , GROSS_PROFIT: 1.2 billion
+```
+
+</details>
+
+## Structured Output parameters
+
+
+
+| Name | Type | Description |
+|------|------|-------------|
+| Language Model (`llm`) | `LanguageModel` | Input parameter. The [`LanguageModel`](/data-types#languagemodel) output from a **Language Model** component that defines the LLM to use to analyze, extract, and prepare the structured output. |
+| Input Message (`input_value`) | String | Input parameter. The input message containing source material for extraction. |
+| Format Instructions (`system_prompt`) | String | Input parameter. The instructions to the language model for extracting and formatting the output. |
+| Schema Name (`schema_name`) | String | Input parameter. An optional title for the **Output Schema**. |
+| Output Schema (`output_schema`)| Table | Input parameter. A table describing the schema of the desired structured output, ultimately determining the content of the `Data` or `DataFrame` output. See [Output Schema options](#output-schema-options). |
+| Structured Output (`structured_output`) | `Data` or `DataFrame` | Output parameter. The final structured output produced by the component. Near the component's output port, you can select the output data type as either **Structured Output Data** or **Structured Output DataFrame**. The specific content and structure of the output depends on the input parameters. |
+
+### Output Schema options {#output-schema-options}
+
+After the LLM extracts the relevant data from the **Input Message** and **Format Instructions**, the data is organized according to the **Output Schema**.
+
+The schema is a table that defines the fields (keys) and data types for the final `Data` or `DataFrame` output from the **Structured Output** component.
+
+The default schema contains a single field named `field` with the `str` type.
+
+To add a key to the schema, click **Add a new row**, and then edit each column to define the schema:
+
+* **Name**: The name of the output field. Typically a specific key for which you want to extract a value.
+
+ You can reference these keys as variables in downstream components, such as a **Parser** component's template.
+ For example, the schema key `NET_INCOME` could be referenced by the variable `{NET_INCOME}`.
+
+* **Description**: An optional metadata description of the field's contents and purpose.
+
+* **Type**: The data type of the value stored in the field.
+Supported types are `str` (default), `int`, `float`, `bool`, and `dict`.
+
+* **As List**: Enable this setting if you want the field to contain a list of values rather than a single value.
+
+For simple schemas, you might only extract a few `str` or `int` fields.
+For more complex schemas with lists and dictionaries, it might help to refer to the `Data` and `DataFrame` structures and attributes, as described in [Langflow data types](/data-types).
+You can also emit a rough `Data` or `DataFrame`, and then use downstream components for further refinement, such as a **Data Operations** component.
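+
+For example, a schema that defines `name` (`str`), `age` (`int`), and `emails` (`str` with **As List** enabled) could produce a `Data` object like the following sketch, assuming the source material mentions one person with two email addresses:
+
+```json
+{
+  "name": "Charlie Lastname",
+  "age": 28,
+  "emails": ["charlie.lastname@example.com", "charlie@work-example.com"]
+}
+```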
+
diff --git a/docs/docs/Components/text-input-and-output.mdx b/docs/docs/Components/text-input-and-output.mdx
new file mode 100644
index 000000000000..09a52f556cc5
--- /dev/null
+++ b/docs/docs/Components/text-input-and-output.mdx
@@ -0,0 +1,36 @@
+---
+title: Text Input and Output
+slug: /text-input-and-output
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+:::warning
+**Text Input and Output** components aren't supported in the **Playground**.
+Because the data isn't formatted as a chat message, the data doesn't appear in the **Playground**, and you can't chat with your flow in the **Playground**.
+
+If you want to chat with a flow in the **Playground**, you must use the [**Chat Input and Output** components](/chat-input-and-output).
+:::
+
+**Text Input and Output** components are designed for flows that ingest or emit simple text strings.
+These components don't support full conversational interactions.
+
+Passing chat-like metadata to a **Text Input and Output** component doesn't change the component's behavior; the result is still a simple text string.
+
+## Text Input
+
+The **Text Input** component accepts a text string and passes it to other components as [`Message` data](/data-types) containing only the provided text in the `text` attribute.
+
+It accepts only **Text** (`input_value`), which is the text supplied as input to the component.
+This can be entered directly into the component or passed as `Message` data from other components.
+
+Initial input _shouldn't_ be provided as a complete `Message` object because the **Text Input** component constructs the `Message` object that is then passed to other components in the flow.
+
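+For example, entering `Hello, world!` in the **Text** field produces a `Message` whose `text` attribute contains only that string. The following is a simplified sketch of the resulting content, omitting the metadata fields that a chat component would populate:
+
+```json
+{
+  "text": "Hello, world!"
+}
+```
+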
+## Text Output
+
+The **Text Output** component ingests [`Message` data](/data-types#message) from other components, emitting only the `text` attribute in a simplified `Message` object.
+
+It accepts only **Text** (`input_value`), which is the text to be ingested and output as a string.
+This can be entered directly into the component or passed as `Message` data from other components.
+
diff --git a/docs/docs/Components/type-convert.mdx b/docs/docs/Components/type-convert.mdx
new file mode 100644
index 000000000000..1c3d05df6d53
--- /dev/null
+++ b/docs/docs/Components/type-convert.mdx
@@ -0,0 +1,138 @@
+---
+title: Type Convert
+slug: /type-convert
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialCurlyBraces from '@site/docs/_partial-escape-curly-braces.mdx';
+
+The **Type Convert** component converts data from one type to another.
+It supports `Data`, `DataFrame`, and `Message` data types.
+
+
+
+
+A `Data` object is a structured object that contains a primary `text` key and other key-value pairs:
+
+```json
+"data": {
+ "text": "User Profile",
+ "name": "Charlie Lastname",
+ "age": 28,
+ "email": "charlie.lastname@example.com"
+},
+```
+
+The larger context associated with a component's `data` dictionary also identifies which key is the primary `text_key`, and it can provide an optional `default_value` to use if the primary key isn't present.
+For example:
+
+```json
+{
+ "text_key": "text",
+ "data": {
+ "text": "User Profile",
+ "name": "Charlie Lastname",
+ "age": 28,
+ "email": "charlie.lastname@example.com"
+ },
+ "default_value": ""
+}
+```
+
+
+
+
+A `DataFrame` is an array that represents a tabular data structure with rows and columns.
+
+It consists of a list (array) of dictionary objects, where each dictionary represents a row.
+Each key in the dictionaries corresponds to a column name.
+For example, the following `DataFrame` contains two rows with columns for `name`, `age`, and `email`:
+
+```json
+[
+ {
+ "name": "Charlie Lastname",
+ "age": 28,
+ "email": "charlie.lastname@example.com"
+ },
+ {
+ "name": "Bobby Othername",
+ "age": 25,
+ "email": "bobby.othername@example.com"
+ }
+]
+```
+
+
+
+
+A `Message` is primarily for passing a `text` string, such as `"Name: Charlie Lastname, Age: 28, Email: charlie.lastname@example.com"`.
+However, the entire `Message` object can include metadata about the message, particularly when used as chat input or output.
+
+
+
+
+For more information, see [Langflow data types](/data-types).
+
+## Use the Type Convert component in a flow
+
+The **Type Convert** component is typically used to transform data into a format required by a downstream component.
+For example, if a component outputs a `Message`, but the following component requires `Data`, then you can use the **Type Convert** component to reformat the `Message` as `Data` before passing it to the downstream component.
+
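+For example, converting the sample `Message` shown above to `Data` could produce an object like the following sketch, where the message text becomes the primary `text` key:
+
+```json
+{
+  "text": "Name: Charlie Lastname, Age: 28, Email: charlie.lastname@example.com"
+}
+```
+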
+The following example uses the **Type Convert** component to convert the `DataFrame` output from a **Web Search** component into `Message` data that is passed as text input for an LLM:
+
+1. Create a flow based on the **Basic prompting** template.
+
+2. Add a **Web Search** component to the flow, and then enter a search query, such as `environmental news`.
+
+3. In the **Prompt Template** component, replace the contents of the **Template** field with the following text:
+
+ ```text
+ Answer the user's question using the {context}
+ ```
+
+ The curly braces define a [prompt variable](/components-prompts#define-variables-in-prompts) that becomes an input field on the **Prompt Template** component.
+ In this example, you will use the **context** field to pass the search results into the template, as explained in the next steps.
+
+4. Add a **Type Convert** component to the flow, and then set the **Output Type** to **Message**.
+
+ Because the **Web Search** component's `DataFrame` output is incompatible with the **context** variable's `Message` input, you must use the **Type Convert** component to change the `DataFrame` to a `Message` in order to pass the search results to the **Prompt Template** component.
+
+5. Connect the additional components to the rest of the flow:
+
+ * Connect the **Web Search** component's output to the **Type Convert** component's input.
+ * Connect the **Type Convert** component's output to the **Prompt Template** component's **context** input.
+
+ 
+
+6. In the **Language Model** component, add your OpenAI API key.
+
+ If you want to use a different provider or model, edit the **Model Provider**, **Model Name**, and **API Key** fields accordingly.
+
+7. Click **Playground**, and then ask something relevant to your search query, such as `latest news` or `what's the latest research on the environment?`.
+
+    <details>
+    <summary>Result</summary>
+
+    The LLM uses the search results context, your chat message, and its built-in training data to respond to your question.
+ For example:
+
+ ```text
+ Here are some of the latest news articles related to the environment:
+ Ozone Pollution and Global Warming: A recent study highlights that ozone pollution is a significant global environmental concern, threatening human health and crop production while exacerbating global warming. Read more
+ ...
+ ```
+
+    </details>
+
+## Type Convert parameters
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| input_data | Input Data | Input parameter. The data to convert. Accepts `Data`, `DataFrame`, or `Message` input. |
+| output_type | Output Type | Input parameter. The desired output type, as one of **Data**, **DataFrame**, or **Message**. |
+| output | Output | Output parameter. The converted data in the specified format. The output port changes depending on the selected **Output Type**. |
+
diff --git a/docs/docs/Components/url.mdx b/docs/docs/Components/url.mdx
new file mode 100644
index 000000000000..67f527d0810d
--- /dev/null
+++ b/docs/docs/Components/url.mdx
@@ -0,0 +1,55 @@
+---
+title: URL
+slug: /url
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **URL** component fetches content from one or more URLs, processes the content, and returns it in various formats.
+It follows links recursively to a given depth, and it supports output in plain text or raw HTML.
+
+## URL parameters
+
+
+
+The following table describes some of the available parameters:
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| urls | URLs | Input parameter. One or more URLs to crawl recursively. In the visual editor, click **Add URL** to add multiple URLs. |
+| max_depth | Depth | Input parameter. Controls link traversal: how many "clicks" away from the initial page the crawler goes. A depth of 1 limits the crawl to the page at the given URL only. A depth of 2 crawls the first page plus every page directly linked from it, and then stops. This setting exclusively controls link traversal; it doesn't limit the number of URL path segments or the domain. |
+| prevent_outside | Prevent Outside | Input parameter. If enabled, only crawls URLs within the same domain as the root URL. This prevents the crawler from accessing sites outside the given URL's domain, even if they are linked from one of the crawled pages. |
+| use_async | Use Async | Input parameter. If enabled, uses asynchronous loading which can be significantly faster but might use more system resources. |
+| format | Output Format | Input parameter. Sets the desired output format as **Text** or **HTML**. The default is **Text**. For more information, see [URL output](#url-output).|
+| timeout | Timeout | Input parameter. Timeout for the request in seconds. |
+| headers | Headers | Input parameter. The headers to send with the request if needed for authentication or otherwise. |
+
+Additional input parameters are available for error handling and encoding.
+
+## URL output
+
+There are two settings that control the output of the **URL** component at different stages:
+
+* **Output Format**: This optional parameter controls the content extracted from the crawled pages:
+
+ * **Text (default)**: The component extracts only the text from the HTML of the crawled pages.
+ * **HTML**: The component extracts the entire raw HTML content of the crawled pages.
+
+* **Output data type**: In the component's output field (near the output port) you can select the structure of the outgoing data when it is passed to other components:
+
+ * **Extracted Pages**: Outputs a [`DataFrame`](/data-types#dataframe) that breaks the crawled pages into columns for the entire page content (`text`) and metadata like `url` and `title`.
+ * **Raw Content**: Outputs a [`Message`](/data-types#message) containing the entire text or HTML from the crawled pages, including metadata, in a single block of text.
+
+When used as a standard component in a flow, the **URL** component must be connected to a component that accepts the selected output data type (`DataFrame` or `Message`).
+You can connect the **URL** component directly to a compatible component, or, if the data types aren't directly compatible, use a [**Type Convert** component](/type-convert) to convert the output before passing it to other components.
+
+Processing components like the **Type Convert** component are useful with the **URL** component because the **URL** component can extract a large amount of data from the crawled pages.
+For example, if you only want to pass specific fields to other components, you can use a [**Parser** component](/parser) to extract only that data before passing it to other components.
+
+When used in **Tool Mode** with an **Agent** component, the **URL** component can be connected directly to the **Agent** component's **Tools** port without converting the data.
+The agent decides whether to use the **URL** component based on the user's query, and it can process the `DataFrame` or `Message` output directly.
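+
+If you run a flow that contains a **URL** component through the Langflow API, you can override the crawl settings per request with tweaks. The following is a sketch that assumes a component with the hypothetical ID `URL-xyz99`; get the real component ID from the visual editor or the [Read flow](/api-flows#read-flow) endpoint:
+
+```shell
+curl --request POST \
+  --url "http://$LANGFLOW_SERVER_ADDRESS/api/v1/run/$FLOW_ID" \
+  --header "Content-Type: application/json" \
+  --header "x-api-key: $LANGFLOW_API_KEY" \
+  --data '{
+    "input_type": "chat",
+    "output_type": "chat",
+    "tweaks": {
+      "URL-xyz99": {
+        "urls": ["https://example.com"],
+        "max_depth": 2,
+        "prevent_outside": true,
+        "format": "Text"
+      }
+    }
+  }'
+```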
+
diff --git a/docs/docs/Components/web-search.mdx b/docs/docs/Components/web-search.mdx
new file mode 100644
index 000000000000..a33bc7dad323
--- /dev/null
+++ b/docs/docs/Components/web-search.mdx
@@ -0,0 +1,139 @@
+---
+title: Web Search
+slug: /web-search
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+The **Web Search** component consolidates the **Web Search**, **News Search**, and **RSS Reader** components into a single component with tabs for different search modes. You can search the web using DuckDuckGo, search Google News, or read RSS feeds, all from one component.
+
+For other search APIs, see [**Bundles**](/components-bundle-components).
+
+:::info
+The **Web Search** component uses web scraping, which can be subject to rate limits.
+
+For production use, consider using another search component with more robust API support, such as provider-specific bundles.
+:::
+
+## Use the Web Search component in a flow
+
+The following steps demonstrate one way that you can use a **Web Search** component in a flow:
+
+1. Create a flow based on the **Basic Prompting** template.
+
+2. Add a **Web Search** component, select your desired **Search Mode** (Web, News, or RSS), and then enter a search query or RSS feed URL.
+
+3. Add a [**Type Convert** component](/type-convert), set the **Output Type** to **Message**, and then connect the **Web Search** component's output to the **Type Convert** component's input.
+
+ By default, the **Web Search** component outputs a `DataFrame`.
+ Because the **Prompt Template** component only accepts `Message` data, this conversion is required so that the flow can pass the search results to the **Prompt Template** component.
+ For more information, see [Web Search output](#web-search-output).
+
+4. In the **Prompt Template** component's **Template** field, add a variable like `{searchresults}` or `{context}`.
+
+ This adds a field to the **Prompt Template** component that you can use to pass the converted search results to the prompt.
+ For more information, see [Define variables in prompts](/components-prompts#define-variables-in-prompts).
+
+5. Connect the **Type Convert** component's output to the new variable field on the **Prompt Template** component.
+
+ 
+
+6. In the **Language Model** component, add your OpenAI API key, or select a different provider and model.
+
+7. Click **Playground**, and then enter your query.
+
+ The LLM processes the request, including the context passed through the **Prompt Template** component, and then prints the response in the **Playground** chat interface.
+
+    <details>
+    <summary>Result</summary>
+
+ The following is an example of a possible response.
+ Your response may vary based on the current state of the web, your specific query, the model, and other factors.
+
+ ```text
+ Here are some of the latest news articles related to the environment:
+ Ozone Pollution and Global Warming: A recent study highlights that ozone pollution is a significant global environmental concern, threatening human health and crop production while exacerbating global warming. Read more
+ ...
+ ```
+
+    </details>
+
+## Parameters
+
+
+
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| search_mode | Search Mode | Input parameter. Choose search mode: Web (DuckDuckGo), News (Google News), or RSS (Feed Reader). Default: `Web`. |
+| query | Search Query | Input parameter. Keywords to search for. |
+| timeout | Timeout | Input parameter. Timeout for the web search request in seconds. Default: `5`. |
+| results | Results | Output parameter. Returns a `DataFrame` containing `title`, `link`, `snippet`, and `content`. For more information, see [Web Search output](#web-search-output). |
+
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| search_mode | Search Mode | Input parameter. Choose search mode: Web (DuckDuckGo), News (Google News), or RSS (Feed Reader). Default: `Web`. |
+| query | Search Query | Input parameter. Search keywords for news articles. |
+| hl | Language (hl) | Input parameter. Language code, such as `en-US`, `fr`, or `de`. Default: `en-US`. |
+| gl | Country (gl) | Input parameter. Country code, such as `US`, `FR`, or `DE`. Default: `US`. |
+| ceid | Country:Language (ceid) | Input parameter. Combined country and language code, such as `US:en` or `FR:fr`. Default: `US:en`. |
+| topic | Topic | Input parameter. One of: `WORLD`, `NATION`, `BUSINESS`, `TECHNOLOGY`, `ENTERTAINMENT`, `SCIENCE`, `SPORTS`, `HEALTH`. |
+| location | Location (Geo) | Input parameter. City, state, or country for location-based news. Leave blank for keyword search. |
+| timeout | Timeout | Input parameter. Timeout for the request in seconds. Default: `5`. |
+| results | Results | Output parameter. A `DataFrame` with the key columns `title`, `link`, `published`, and `summary`. For more information, see [Web Search output](#web-search-output). |
+
+
+
+
+| Name | Display Name | Info |
+|------|--------------|------|
+| search_mode | Search Mode | Input parameter. Choose search mode: Web (DuckDuckGo), News (Google News), or RSS (Feed Reader). Default: `Web`. |
+| query | RSS Feed URL | Input parameter. URL of the RSS feed to parse, such as `https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml`. |
+| timeout | Timeout | Input parameter. Timeout for the RSS feed request in seconds. Default: `5`. |
+| results | Results | Output parameter. A `DataFrame` containing the key columns `title`, `link`, `published`, and `summary`. For more information, see [Web Search output](#web-search-output). |
+
+
+
+
+## Web Search output
+
+The **Web Search** component outputs a [`DataFrame`](/data-types#dataframe) with different columns depending on the search mode.
+
+
+
+
+When using **Web** search mode, the component returns a `DataFrame` containing:
+- `title`: The title of the search result
+- `link`: The URL of the search result
+- `snippet`: A brief snippet from the search result
+- `content`: The full content of the page (when successfully fetched)
+
+
+
+
+When using **News** search mode, the component returns a `DataFrame` containing:
+- `title`: The title of the news article
+- `link`: The URL of the news article
+- `published`: The publication date of the article
+- `summary`: A summary or description of the article
+
+
+
+
+When using **RSS** search mode, the component returns a `DataFrame` containing:
+- `title`: The title of the RSS feed item
+- `link`: The URL of the RSS feed item
+- `published`: The publication date of the item
+- `summary`: A summary or description of the item
+
+
+
diff --git a/docs/docs/Components/webhook.mdx b/docs/docs/Components/webhook.mdx
new file mode 100644
index 000000000000..a1f9c5d22bb6
--- /dev/null
+++ b/docs/docs/Components/webhook.mdx
@@ -0,0 +1,35 @@
+---
+title: Webhook
+slug: /component-webhook
+---
+
+import Icon from "@site/src/components/icon";
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+
+The **Webhook** component defines a webhook trigger that runs a flow when it receives an HTTP POST request.
+
+## Trigger the webhook
+
+When you add a **Webhook** component to your flow, a **Webhook curl** tab is added to the flow's [**API Access** pane](/concepts-publish#api-access).
+This tab automatically generates an HTTP POST request code snippet that you can use to trigger your flow through the **Webhook** component.
+For example:
+
+```bash
+curl -X POST \
+ "http://$LANGFLOW_SERVER_ADDRESS/api/v1/webhook/$FLOW_ID" \
+ -H 'Content-Type: application/json' \
+  -H "x-api-key: $LANGFLOW_API_KEY" \
+ -d '{"any": "data"}'
+```
+
+For more information, see [Trigger flows with webhooks](/webhook).
+
+## Webhook parameters
+
+| Name | Display Name | Description |
+|------|--------------|-------------|
+| data | Payload | Input parameter. Receives a payload from external systems through HTTP POST requests. |
+| curl | curl | Input parameter. The curl command template for making requests to this webhook. |
+| endpoint | Endpoint | Input parameter. The endpoint URL where this webhook receives requests. |
+| output_data | Data | Output parameter. The processed data from the webhook input. Returns an empty [`Data`](/data-types#data) object if no input is provided. If the input isn't valid JSON, the **Webhook** component wraps it in a `payload` object so that it can be accepted as input to trigger the flow. |
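+
+For example, sending a plain-text body such as `hello` instead of JSON could produce a wrapped `Data` object like the following sketch:
+
+```json
+{
+  "payload": "hello"
+}
+```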
+
diff --git a/docs/docs/Components/write-file.mdx b/docs/docs/Components/write-file.mdx
new file mode 100644
index 000000000000..b2030d717a04
--- /dev/null
+++ b/docs/docs/Components/write-file.mdx
@@ -0,0 +1,73 @@
+---
+title: Write File
+slug: /write-file
+---
+
+import Icon from "@site/src/components/icon";
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import PartialParams from '@site/docs/_partial-hidden-params.mdx';
+import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
+
+In Langflow version 1.7.0, this component was renamed from **Save File** to **Write File**.
+
+The **Write File** component creates a file containing data produced by another component.
+Several file formats are supported, and you can store files in [Langflow storage](/memory), AWS S3, Google Drive, or the local file system.
+
+To configure the **Write File** component and use it in a flow, do the following:
+
+1. Connect [`DataFrame`](/data-types#dataframe), [`Data`](/data-types#data), or [`Message`](/data-types#message) output from another component to the **Write File** component's **Input** port.
+
+ You can connect the same output to multiple **Write File** components if you want to create multiple files, save the data in different file formats, or save files to multiple locations.
+
+2. In **File Name**, enter a file name and an optional path.
+
+ The **File Name** parameter controls where the file is saved.
+ It can contain a file name or an entire file path:
+
+    * **Default location**: If you only provide a file name, then the file is stored in the Langflow data directory. For example, `~/Library/Caches/langflow/data` on macOS.
+
+ * **Subdirectory**: To store files in subdirectories, add the path to the **File Name** parameter.
+ If a given subdirectory doesn't already exist, Langflow automatically creates it.
+    For example, `files/my_file` creates `my_file` in the `files` subdirectory of the Langflow data directory.
+
+ * **Absolute or relative path**: To store files elsewhere in your environment or local file storage, provide the absolute or relative path to the desired location.
+ For example, `~/Desktop/my_file` saves `my_file` to the desktop.
+
+ Don't include an extension in the file name.
+ If you do, the extension is treated as part of the file name; it has no impact on the **File Format** parameter.
+
+3. In the [component's header menu](/concepts-components#component-menus), click **Controls**, select the desired file format, and then click **Close**.
+
+ The available **File Format** options depend on the input data type:
+
+ * `DataFrame` can be saved to CSV (default), Excel (requires `openpyxl` [custom dependency](/install-custom-dependencies)), JSON (fallback default), or Markdown.
+
+ * `Data` can be saved to CSV, Excel (requires `openpyxl` [custom dependency](/install-custom-dependencies)), JSON (default), or Markdown.
+
+ * `Message` can be saved to TXT, JSON (default), or Markdown.
+
+ :::warning Overwrites allowed
+ If you have multiple **Write File** components, in one or more flows, with the same file name, path, and extension, the file contains the data from the most recent run only.
+ Langflow doesn't block overwrites if a matching file already exists.
+ To avoid unintended overwrites, use unique file names and paths.
+ :::
+
+4. To test the **Write File** component, click **Run component**, and then click **Inspect output** to get the filepath where the file was saved.
+
+ The component's literal output is a `Message` containing the original data type, the file name and extension, and the absolute filepath to the file based on the **File Name** parameter.
+ For example:
+
+ ```text
+ DataFrame saved successfully as 'my_file.csv' at /Users/user.name/Library/Caches/langflow/data/my_file.csv
+ ```
+
+ If the **File Name** contains a subdirectory or other non-default path, this is reflected in the `Message` output.
+ For example, a CSV file with the file name `~/Desktop/my_file` could produce the following output:
+
+ ```text
+ DataFrame saved successfully as '/Users/user.name/Desktop/my_file.csv' at /Users/user.name/Desktop/my_file.csv
+ ```
+
+5. Optional: If you want to use the saved file in a flow, you must use an API call or another component to retrieve the file from the given filepath.
+
diff --git a/docs/docs/Contributing/contributing-bundles.mdx b/docs/docs/Contributing/contributing-bundles.mdx
index 2b0f9bea1c99..183389136861 100644
--- a/docs/docs/Contributing/contributing-bundles.mdx
+++ b/docs/docs/Contributing/contributing-bundles.mdx
@@ -1,5 +1,5 @@
---
-title: Contribute bundles
+title: Contribute component bundles
slug: /contributing-bundles
---
@@ -11,24 +11,26 @@ If you want to contribute your custom components back to the Langflow project, y
Follow these steps to add components to **Bundles** in the Langflow visual editor.
This example adds a bundle named `DarthVader`.
-## Add the bundle to the backend folder
+For more information on creating custom components, see [Create custom Python components](/components-custom-components).
-1. Navigate to the backend directory in the Langflow repository and create a new folder for your bundle.
-The path for your new component is `src > backend > base > langflow > components > darth_vader`.
-You can view the [components folder](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/components) in the Langflow repository.
+## Add the bundle to the lfx components folder
+
+1. Navigate to the lfx directory in the Langflow repository and create a new folder for your bundle.
+The path for your new component is `src/lfx/src/lfx/components/darth_vader`.
+You can view the [components folder](https://github.com/langflow-ai/langflow/tree/main/src/lfx/src/lfx/components) in the Langflow repository.
2. Within the newly created `darth_vader` folder, add the following files:
-* `darth_vader_component.py` — This file contains the backend logic for the new bundle. Create multiple `.py` files for multiple components.
-* `__init__.py` — This file initializes the bundle components. You can use any existing `__init__.py` as an example to see how it should be structured.
+ * `darth_vader_component.py` — This file contains the backend logic for the new bundle. Create multiple `.py` files for multiple components.
+ * `__init__.py` — This file initializes the bundle components. You can use any existing `__init__.py` as an example to see how it should be structured.
-For an example of adding multiple components in a bundle, see the [Notion](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/components/Notion) bundle.
+ For an example of adding multiple components in a bundle, see the [Notion](https://github.com/langflow-ai/langflow/tree/main/src/lfx/src/lfx/components/Notion) bundle.
## Add the bundle to the frontend folder
1. Navigate to the frontend directory in the Langflow repository to add your bundle's icon.
-The path for your new component icon is `src > frontend > src > icons > DarthVader`
+The path for your new component icon is `src/frontend/src/icons/DarthVader`
You can view the [icons folder](https://github.com/langflow-ai/langflow/tree/main/src/frontend/src/icons) in the Langflow repository.
To add your icon, create **three** files inside the `icons/darth_vader` folder.
@@ -105,12 +107,12 @@ For example:
import("@/icons/DeepSeek").then((mod) => ({ default: mod.DeepSeekIcon })),
```
-8. To add your bundle to the **Bundles** menu, edit the [`SIDEBAR_BUNDLES` array](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/utils/styleUtils.ts#L231) in `/src/frontend/src/utils/styleUtils.ts`.
+8. To add your bundle to the **Bundles** menu, edit the [`SIDEBAR_BUNDLES` array](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/utils/styleUtils.ts#L243) in `/src/frontend/src/utils/styleUtils.ts`.
Add an object to the array with the following keys:
* `display_name`: The text label shown in the Langflow visual editor
- * `name`: The name of the folder you created within the `/src/backend/base/langflow/components` directory
+ * `name`: The name of the folder you created within the `/src/lfx/src/lfx/components` directory
* `icon`: The name of the bundle's icon that you defined in the previous steps
For example:
@@ -126,7 +128,7 @@ For example:
In your component bundle, associate the icon variable with your new bundle.
In your `darth_vader_component.py` file, in the component class, include the icon that you defined in the frontend.
-The `icon` must point to the directory you created for your icons within the `src > frontend > src > icons` directory.
+The `icon` must point to the directory you created for your icons within the `src/frontend/src/icons` directory.
For example:
```
class DarthVaderAPIComponent(LCToolComponent):
diff --git a/docs/docs/Contributing/contributing-component-tests.mdx b/docs/docs/Contributing/contributing-component-tests.mdx
index b237d6d0edf0..6a9e2b0db02a 100644
--- a/docs/docs/Contributing/contributing-component-tests.mdx
+++ b/docs/docs/Contributing/contributing-component-tests.mdx
@@ -9,17 +9,17 @@ This guide outlines how to structure and implement tests for application compone
* The test file should follow the same directory structure as the component being tested, but should be placed in the corresponding unit tests folder.
- For example, if the file path for the component is `src/backend/base/langflow/components/prompts/`, then the test file should be located at `src/backend/tests/unit/components/prompts`.
+ For example, if the file path for the component is `src/lfx/src/lfx/components/data/`, then the test file should be located at `src/backend/tests/unit/components/data`.
* The test file name should use snake case and follow the pattern `test_.py`.
- For example, if the file to be tested is `PromptComponent.py`, then the test file should be named `test_prompt_component.py`.
+ For example, if the file to be tested is `FileComponent.py`, then the test file should be named `test_file_component.py`.
## File structure
* Each test file should group tests into classes by component. There should be no standalone test functions in the file— only test methods within classes.
* Class names should follow the pattern `Test`.
-For example, if the component being tested is `PromptComponent`, then the test class should be named `TestPromptComponent`.
+For example, if the component being tested is `FileComponent`, then the test class should be named `TestFileComponent`.
## Imports, inheritance, and mandatory methods
@@ -39,7 +39,7 @@ These base classes enforce mandatory methods that the component test classes mus
```python
@pytest.fixture
def component_class(self):
- return PromptComponent
+ return FileComponent
```
* `default_kwargs:` Returns a dictionary with the default arguments required to instantiate the component. For example:
@@ -47,7 +47,7 @@ These base classes enforce mandatory methods that the component test classes mus
```python
@pytest.fixture
def default_kwargs(self):
- return {"template": "Hello {name}!", "name": "John", "_session_id": "123"}
+ return {"file_path": "/tmp/test.txt", "_session_id": "123"}
```
* `file_names_mapping:` Returns a list of dictionaries representing the relationship between `version`, `module`, and `file_name` that the tested component has had over time. This can be left empty if it is an unreleased component. For example:
@@ -56,11 +56,11 @@ These base classes enforce mandatory methods that the component test classes mus
@pytest.fixture
def file_names_mapping(self):
return [
- {"version": "1.0.15", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.16", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.17", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.18", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.19", "module": "prompts", "file_name": "Prompt"},
+ {"version": "1.0.15", "module": "data", "file_name": "File"},
+ {"version": "1.0.16", "module": "data", "file_name": "File"},
+ {"version": "1.0.17", "module": "data", "file_name": "File"},
+ {"version": "1.0.18", "module": "data", "file_name": "File"},
+ {"version": "1.0.19", "module": "data", "file_name": "File"},
]
```
@@ -101,14 +101,13 @@ Once the basic structure of the test file is defined, implement test methods for
After executing the `.to_frontend_node()` method, the resulting data is available for verification in the dictionary `frontend_node["data"]["node"]`. Assertions should be clear and cover the expected outcomes.
```python
- def test_post_code_processing(self, component_class, default_kwargs):
+ def test_file_component_processing(self, component_class, default_kwargs):
component = component_class(**default_kwargs)
frontend_node = component.to_frontend_node()
node_data = frontend_node["data"]["node"]
- assert node_data["template"]["template"]["value"] == "Hello {name}!"
- assert "name" in node_data["custom_fields"]["template"]
- assert "name" in node_data["template"]
- assert node_data["template"]["name"]["value"] == "John"
+ assert node_data["template"]["path"]["file_path"] == "/tmp/test.txt"
+ assert "path" in node_data["template"]
+ assert node_data["display_name"] == "File"
```
\ No newline at end of file
diff --git a/docs/docs/Contributing/contributing-components.mdx b/docs/docs/Contributing/contributing-components.mdx
index 1f9c84ee6960..7929ac7ce714 100644
--- a/docs/docs/Contributing/contributing-components.mdx
+++ b/docs/docs/Contributing/contributing-components.mdx
@@ -3,84 +3,23 @@ title: Contribute components
slug: /contributing-components
---
+import PartialBasicComponentStructure from '../_partial-basic-component-structure.mdx';
-New components are added as objects of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class.
+New components are added as objects of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/custom/custom_component/component.py) class.
-Dependencies are added to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml#L148) file.
+Dependencies are added to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml) file.
## Contribute an example component to Langflow
Anyone can contribute an example component. For example, to create a new data component called **DataFrame processor**, follow these steps to contribute it to Langflow.
-1. Create a Python file called `dataframe_processor.py`.
-2. Write your processor as an object of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class. You'll create a new class, `DataFrameProcessor`, that will inherit from `Component` and override the base class's methods.
+
-```python
-from typing import Any, Dict, Optional
-import pandas as pd
-from langflow.custom import Component
-
-class DataFrameProcessor(Component):
- """A component that processes pandas DataFrames with various operations."""
-```
-
-3. Define class attributes to provide information about your custom component:
-```python
-from typing import Any, Dict, Optional
-import pandas as pd
-from langflow.custom import Component
-
-class DataFrameProcessor(Component):
- """A component that processes pandas DataFrames with various operations."""
-
- display_name: str = "DataFrame Processor"
- description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
- documentation: str = "https://docs.langflow.org/components-dataframe-processor"
- icon: str = "DataframeIcon"
- priority: int = 100
- name: str = "dataframe_processor"
-```
-
- * `display_name`: A user-friendly name shown in the visual editor.
- * `description`: A brief description of what your component does.
- * `documentation`: A link to detailed documentation.
- * `icon`: An emoji or icon identifier for visual representation.
- For more information, see [Contributing bundles](/contributing-bundles#add-the-bundle-to-the-frontend-folder).
- * `priority`: An optional integer to control display order. Lower numbers appear first.
- * `name`: An optional internal identifier that defaults to class name.
-
-4. Define the component's interface by specifying its inputs, outputs, and the method that will process them. The method name must match the `method` field in your outputs list, as this is how Langflow knows which method to call to generate each output.
-This example creates a minimal custom component skeleton.
-For more information on creating your custom component, see [Create custom Python components](/components-custom-components).
-```python
-from typing import Any, Dict, Optional
-import pandas as pd
-from langflow.custom import Component
-
-class DataFrameProcessor(Component):
- """A component that processes pandas DataFrames with various operations."""
-
- display_name: str = "DataFrame Processor"
- description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
- documentation: str = "https://docs.langflow.org/components-dataframe-processor"
- icon: str = "DataframeIcon"
- priority: int = 100
- name: str = "dataframe_processor"
-
- # input and output lists
- inputs = []
- outputs = []
-
- # method
- def some_output_method(self):
- return ...
-```
-
-5. Save the `dataframe_processor.py` to the `src > backend > base > langflow > components` directory.
+5. Save the `dataframe_processor.py` to the `src/lfx/src/lfx/components` directory.
This example adds a data component, so add it to the `/data` directory.
-6. Add the component dependency to `src > backend > base > langflow > components > data > __init__.py` as `from .DataFrameProcessor import DataFrameProcessor`.
-You can view the [/data/__init__.py](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/components/data/__init__.py) in the Langflow repository.
+6. Add the component dependency to `src/lfx/src/lfx/components/data/__init__.py` as `from .DataFrameProcessor import DataFrameProcessor`.
+You can view the [/data/__init__.py](https://github.com/langflow-ai/langflow/blob/dev/src/lfx/src/lfx/components/data/__init__.py) in the Langflow repository.
7. Add any new dependencies to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml#L20) file.
diff --git a/docs/docs/Deployment/deployment-docker.mdx b/docs/docs/Deployment/deployment-docker.mdx
index bbd92921bdb0..2ac099baf76a 100644
--- a/docs/docs/Deployment/deployment-docker.mdx
+++ b/docs/docs/Deployment/deployment-docker.mdx
@@ -163,7 +163,7 @@ FROM langflowai/langflow:latest
WORKDIR /app
# Copy your modified memory component
-COPY src/backend/base/langflow/components/helpers/memory.py /tmp/memory.py
+COPY src/lfx/src/lfx/components/helpers/memory.py /tmp/memory.py
# Find the site-packages directory where langflow is installed
RUN python -c "import site; print(site.getsitepackages()[0])" > /tmp/site_packages.txt
@@ -198,7 +198,7 @@ To use this custom Dockerfile, do the following:
In this example, Langflow expects `memory.py` to exist in the `/helpers` directory, so you create a directory in that location.
```bash
- mkdir -p src/backend/base/langflow/components/helpers
+ mkdir -p src/lfx/src/lfx/components/helpers
```
3. Place your modified `memory.py` file in the `/helpers` directory.
diff --git a/docs/docs/Develop/api-keys-and-authentication.mdx b/docs/docs/Develop/api-keys-and-authentication.mdx
index 8329fe757e6b..3398921aa8fb 100644
--- a/docs/docs/Develop/api-keys-and-authentication.mdx
+++ b/docs/docs/Develop/api-keys-and-authentication.mdx
@@ -27,6 +27,10 @@ You can use Langflow API keys to interact with Langflow programmatically.
By default, most Langflow API endpoints, such as `/v1/run/$FLOW_ID`, require authentication with a Langflow API key.
+Langflow validates API keys against keys stored in the database, but you can configure Langflow to validate API keys against an environment variable instead.
+For more information, see [`LANGFLOW_API_KEY_SOURCE`](#langflow-api-key-source).
+
+To require API key authentication for flow webhook endpoints, use the [`LANGFLOW_WEBHOOK_AUTH_ENABLE`](/webhook#require-authentication-for-webhooks) environment variable.
To configure authentication for Langflow MCP servers, see [Use Langflow as an MCP server](/mcp-server).
### Langflow API key permissions
@@ -38,6 +42,7 @@ A Langflow API key cannot be used to access resources outside of your own Langfl
In single-user environments, you are always a superuser, and your Langflow API keys always have superuser privileges.
In multi-user environments, users who aren't superusers cannot use their API keys to access other users' resources.
+Superusers can only run their own flows, and cannot run flows owned by other users.
You must [start your Langflow server with authentication enabled](#start-a-langflow-server-with-authentication-enabled) to allow user management and creation of non-superuser accounts.
### Create a Langflow API key
@@ -282,6 +287,122 @@ LANGFLOW_NEW_USER_IS_ACTIVE=False
Only superusers can manage user accounts for a Langflow server, but user management only matters if your server has authentication enabled.
For more information, see [Start a Langflow server with authentication enabled](#start-a-langflow-server-with-authentication-enabled).
+### LANGFLOW_API_KEY_SOURCE {#langflow-api-key-source}
+
+This variable controls how Langflow validates API keys.
+
+| Value | Description |
+|-------|-------------|
+| `db` (default) | Validates API keys against [Langflow API keys](#langflow-api-keys) stored in the database. This is the standard behavior where users create and manage API keys through the Langflow UI or CLI. |
+| `env` | Validates API keys against the `LANGFLOW_API_KEY` environment variable. Useful for Kubernetes deployments, CI/CD pipelines, or any environment where you want to inject a pre-defined API key without database configuration. |
+
+By default, Langflow validates the `x-api-key` header against the Langflow database with `LANGFLOW_API_KEY_SOURCE=db`.
+When using database-based validation, you can create multiple keys with per-user permissions, track usage, and manage keys through the Langflow UI or CLI.
+
+When `LANGFLOW_API_KEY_SOURCE=env`, Langflow validates the `x-api-key` header against the value of the `LANGFLOW_API_KEY` environment variable.
+This allows Langflow to run securely in stateless environments, such as deployments that inject the key through LFX or Kubernetes Secrets.
+
+When `LANGFLOW_API_KEY_SOURCE=env`, only a single API key can be used for the deployment. All authenticated requests use the same API key, and successful authentication grants superuser privileges.
+This mode is designed for single-tenant deployments or automated systems, not multi-user environments where different users need different access levels. To rotate your keys, update the environment variable and restart the Langflow server.
+
+To enable environment-based API key validation:
+
+1. In the Langflow `.env` file, set the API key source to `env`:
+
+ ```text
+ LANGFLOW_API_KEY_SOURCE=env
+ ```
+
+2. In the Langflow `.env` file, set the API key value:
+
+ ```text
+ LANGFLOW_API_KEY=your-secure-api-key
+ ```
+
+3. Use the API key in your requests:
+
+ ```shell
+ curl -X POST \
+ "http://LANGFLOW_SERVER_ADDRESS/api/v1/run/FLOW_ID?stream=false" \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: LANGFLOW_API_KEY" \
+ -d '{"inputs": {"text":""}, "tweaks": {}}'
+ ```
+
+ Replace `LANGFLOW_SERVER_ADDRESS`, `FLOW_ID`, and `LANGFLOW_API_KEY` with the values from your deployment.
+
+
+<details>
+<summary>Kubernetes deployment example</summary>
+
+To configure an environment-based API key in a Kubernetes Secret, do the following:
+
+1. Create a Kubernetes Secret with your API key:
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ name: langflow-api-key
+ type: Opaque
+ stringData:
+ api-key: "YOUR_API_KEY"
+ ```
+
+ Replace `YOUR_API_KEY` with the `LANGFLOW_API_KEY` value from the Langflow `.env` file.
+
+2. Reference the `langflow-api-key` Secret in your Kubernetes deployment:
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: langflow
+ spec:
+ template:
+ spec:
+ containers:
+ - name: langflow
+ image: langflowai/langflow:latest
+ env:
+ - name: LANGFLOW_API_KEY_SOURCE
+ value: "env"
+ - name: LANGFLOW_API_KEY
+ valueFrom:
+ secretKeyRef:
+ name: langflow-api-key
+ key: api-key
+ ```
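+
+3. Apply the manifests. The following commands are a minimal sketch, and the file names are placeholders for wherever you saved the Secret and Deployment:
+
+    ```shell
+    kubectl apply -f langflow-api-key-secret.yaml
+    kubectl apply -f langflow-deployment.yaml
+
+    # Optional check: confirm the variable is set in the running pod.
+    kubectl exec deploy/langflow -- printenv LANGFLOW_API_KEY_SOURCE
+    ```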
+
+</details>
+
+<details>
+<summary>Docker Compose example</summary>
+
+To configure an environment-based API key in Docker Compose, do the following:
+
+1. Set the API key in your Langflow `.env` file.
+
+ ```text
+ LANGFLOW_API_KEY=your-secure-api-key
+ ```
+
+    Replace `your-secure-api-key` with your actual Langflow API key value.
+
+2. Create or update your `docker-compose.yml` file to set `LANGFLOW_API_KEY_SOURCE=env` and reference the `LANGFLOW_API_KEY`.
+
+ ```yaml
+ services:
+ langflow:
+ image: langflowai/langflow:latest
+ environment:
+ - LANGFLOW_API_KEY_SOURCE=env
+ - LANGFLOW_API_KEY=${LANGFLOW_API_KEY}
+ ports:
+ - "7860:7860"
+ ```
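+
+3. Start the deployment from the directory that contains both files. Docker Compose reads the `.env` file automatically to resolve `${LANGFLOW_API_KEY}`:
+
+    ```shell
+    docker compose up -d
+    ```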
+
+</details>
+
### LANGFLOW_CORS_* {#cors-configuration-for-authentication}
Cross-Origin Resource Sharing (CORS) configuration controls how authentication credentials are handled when your Langflow frontend and backend are served from different origins.
@@ -318,6 +439,30 @@ LANGFLOW_CORS_ALLOW_METHODS=GET,POST,PUT
```
:::
+### SSRF protection {#ssrf-protection}
+
+The following environment variables configure Server-Side Request Forgery (SSRF) protection for the [**API Request** component](/api-request).
+SSRF protection prevents requests to internal or private network resources, such as private IP ranges, loopback addresses, and cloud metadata endpoints.
+
+| Variable | Format | Default | Description |
+|----------|--------|---------|-------------|
+| `LANGFLOW_SSRF_PROTECTION_ENABLED` | Boolean | `False` | Enable SSRF protection for the **API Request** component. When enabled, the component blocks requests to private IP addresses. When disabled, requests are not blocked. |
+| `LANGFLOW_SSRF_ALLOWED_HOSTS` | List[String] | Not set | A comma-separated list of allowed hosts, IP addresses, or CIDR ranges that can bypass SSRF protection checks. For example: `192.168.1.0/24,10.0.0.5,*.internal.company.local`.|
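+
+For example, the following `.env` sketch enables SSRF protection while exempting an internal host and an internal domain; the host values are illustrative:
+
+```text
+LANGFLOW_SSRF_PROTECTION_ENABLED=True
+LANGFLOW_SSRF_ALLOWED_HOSTS=10.0.0.5,*.internal.company.local
+```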
+
+### LANGFLOW_WEBHOOK_AUTH_ENABLE {#langflow-webhook-auth-enable}
+
+This variable controls whether API key authentication is required for webhook endpoints.
+
+| Variable | Format | Default | Description |
+|----------|--------|---------|-------------|
+| `LANGFLOW_WEBHOOK_AUTH_ENABLE` | Boolean | `False` | When `True`, webhook endpoints require API key authentication and validate that the authenticated user owns the flow being executed. When `False`, no Langflow API key is required and all requests to the webhook endpoint are treated as being sent by the flow owner. |
+
+With the default `LANGFLOW_WEBHOOK_AUTH_ENABLE=False`, webhooks run as the flow owner without authentication.
+
+To require API key authentication for webhooks, in your Langflow `.env` file, set `LANGFLOW_WEBHOOK_AUTH_ENABLE=True`.
+
+When webhook authentication is enabled, you must provide a Langflow API key with each webhook request as an HTTP header or query parameter. For more information, see [Require authentication for webhooks](/webhook#require-authentication-for-webhooks).
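+
+For example, the following is a minimal sketch of an authenticated webhook request that passes the key as an HTTP header, assuming the standard `POST /api/v1/webhook/FLOW_ID` route. Replace `LANGFLOW_SERVER_ADDRESS`, `FLOW_ID`, and `LANGFLOW_API_KEY` with the values from your deployment:
+
+```shell
+curl -X POST \
+  "http://LANGFLOW_SERVER_ADDRESS/api/v1/webhook/FLOW_ID" \
+  -H "Content-Type: application/json" \
+  -H "x-api-key: LANGFLOW_API_KEY" \
+  -d '{"any": "data"}'
+```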
+
## Start a Langflow server with authentication enabled
This section shows you how to use the [authentication environment variables](/api-keys-and-authentication#authentication-environment-variables) to deploy a Langflow server with authentication enabled.
@@ -353,7 +498,7 @@ Additionally, you must sign in as a superuser to manage users and [create a Lang
If you don't set a secret key, Langflow generates one automatically, but this isn't recommended for production environments.
- For instructions on generating at setting a secret key, see [`LANGFLOW_SECRET_KEY`](#langflow-secret-key).
+ For instructions on generating and setting a secret key, see [`LANGFLOW_SECRET_KEY`](#langflow-secret-key).
4. Save your `.env` file with the populated variables. For example:
diff --git a/docs/docs/Develop/concepts-file-management.mdx b/docs/docs/Develop/concepts-file-management.mdx
index 51fb1dc0d63a..a0c8df572f67 100644
--- a/docs/docs/Develop/concepts-file-management.mdx
+++ b/docs/docs/Develop/concepts-file-management.mdx
@@ -7,8 +7,7 @@ import Icon from "@site/src/components/icon";
Each Langflow server has a file management system where you can store files that you want to use in your flows.
-Files uploaded to Langflow file management are stored locally in your [Langflow configuration directory](/memory), and they are available to all of your flows.
-Local storage is set by `LANGFLOW_STORAGE_TYPE`, which has only one allowed value (`local`).
+Files uploaded to Langflow file management are stored in Langflow's [storage backend (local or AWS S3)](/concepts-file-management#configure-file-storage), and they are available to all of your flows.
Uploading files to Langflow file management keeps your files in a central location, and allows you to reuse files across flows without repeated manual uploads.
@@ -46,28 +45,28 @@ To modify this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment va
## Use files in a flow
-To use files in your Langflow file management system in a flow, add a component that accepts file input to your flow, such as the **File** component.
+To use files in your Langflow file management system in a flow, add a component that accepts file input to your flow, such as the **Read File** component.
-For example, add a **File** component to your flow, click **Select files**, and then select files from the **My Files** list.
+For example, add a **Read File** component to your flow, click **Select files**, and then select files from the **My Files** list.
-This list includes all files in your server's file management system, but you can only select [file types that are supported by the **File** component](/components-data#file).
+This list includes all files in your server's file management system, but you can only select [file types that are supported by the **Read File** component](/read-file).
If you need another file type, you must use a different component that supports that file type, or you need to convert it to a supported type before uploading it.
-For more information about the **File** component and other data loading components, see [Data components](/components-data).
+For more information about the **Read File** component and other data loading components, see the [**Read File** component](/read-file).
### Load files at runtime
You can use preloaded files in your flows, and you can load files at runtime, if your flow accepts file input.
To enable file input in your flow, do the following:
-1. Add a [**File** component](/components-data#file) to your flow.
+1. Add a [**Read File** component](/read-file) to your flow.
2. Click **Share**, select **API access**, and then click **Input Schema** to add [`tweaks`](/concepts-publish#input-schema) to the request payload in the flow's automatically generated code snippets.
3. Expand the **File** section, find the **Files** row, and then enable **Expose Input** to allow the parameter to be set at runtime through the Langflow API.
4. Close the **Input Schema** pane to return to the **API access** pane.
-The payload in each code snippet now includes `tweaks` with your **File** component's ID and the `path` key that you enabled in **Input Schema**:
+The payload in each code snippet now includes `tweaks` with your **Read File** component's ID and the `path` key that you enabled in **Input Schema**:
```json
"tweaks": {
@@ -126,7 +125,81 @@ For more specialized image processing, browse [**Bundles**](/components-bundle-components).
+## Configure file storage
+
+Langflow supports two storage backends for file management:
+
+* **Local storage**: Langflow's default storage backend. Files are stored locally in your [Langflow configuration directory](/memory). Set `LANGFLOW_STORAGE_TYPE=local` or leave it unset to use local storage.
+
+* **S3 storage**: Files are stored in an AWS S3 bucket.
+Langflow uses the [boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/index.html) library to interact with S3.
+
+To use S3 as your file storage backend, add the following configuration to your `.env` file:
+
+```text
+# S3 Storage Configuration
+LANGFLOW_STORAGE_TYPE=s3
+LANGFLOW_OBJECT_STORAGE_BUCKET_NAME=S3_BUCKET_NAME
+LANGFLOW_OBJECT_STORAGE_PREFIX=S3_BUCKET_DIRECTORY
+
+# AWS Credentials (required for S3)
+AWS_ACCESS_KEY_ID=S3_ACCESS_KEY
+AWS_SECRET_ACCESS_KEY=S3_ACCESS_SECRET_KEY
+AWS_DEFAULT_REGION=S3_REGION
+```
+
+Replace the following placeholders with the actual values for your S3 instance:
+
+* `S3_BUCKET_NAME`: The name of your S3 bucket.
+* `S3_BUCKET_DIRECTORY`: An optional folder path within the bucket where files are stored, such as `s3://S3_BUCKET_NAME/S3_BUCKET_DIRECTORY`.
+* `S3_ACCESS_KEY`: Your AWS Access Key ID.
+* `S3_ACCESS_SECRET_KEY`: Your AWS Secret Access Key.
+* `S3_REGION`: The AWS region where your bucket is located, such as `us-east-2`.
+
+Your AWS credentials must have the necessary permissions to perform the required S3 operations for your use case, such as reading, writing, and deleting files in S3.
+The following example policy allows basic CRUD operations on S3 objects:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Sid": "LangflowS3StorageAccess",
+ "Effect": "Allow",
+ "Action": [
+ "s3:PutObject",
+ "s3:GetObject",
+ "s3:DeleteObject",
+ "s3:ListBucket",
+        "s3:PutObjectTagging"
+ ],
+ "Resource": [
+ "arn:aws:s3:::S3_BUCKET_NAME",
+ "arn:aws:s3:::S3_BUCKET_NAME/S3_BUCKET_DIRECTORY/*"
+ ]
+ }
+ ]
+}
+```
+
+Replace the following placeholders with the actual values for your IAM policy and S3 instance:
+
+* `S3_BUCKET_NAME`: The name of your S3 bucket.
+* `S3_BUCKET_DIRECTORY`: An optional folder path within the bucket where files are stored, such as `s3://S3_BUCKET_NAME/S3_BUCKET_DIRECTORY`.
+
+For more information, see the [AWS documentation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html).
+
+## File storage environment variables {#file-storage-environment-variables}
+
+The following environment variables configure file storage backends for Langflow's file management system:
+
+| Variable | Format | Default | Description |
+|----------|--------|---------|-------------|
+| `LANGFLOW_STORAGE_TYPE` | String | `local` | Set the file storage backend. Supported values: `local` (files stored in the Langflow configuration directory) or `s3` (files stored in AWS S3). For S3 storage, you must also configure AWS credentials and bucket settings. |
+| `LANGFLOW_OBJECT_STORAGE_BUCKET_NAME` | String | Not set | The name of the S3 bucket to use for file storage. Required when `LANGFLOW_STORAGE_TYPE=s3`. |
+| `LANGFLOW_OBJECT_STORAGE_PREFIX` | String | Not set | Optional prefix (folder path) within the S3 bucket where files are stored. If not set, files are stored at the bucket root. |
+| `LANGFLOW_OBJECT_STORAGE_TAGS` | JSON object | Not set | Optional S3 object tags applied to stored files when `LANGFLOW_STORAGE_TYPE=s3`. Ignored for local storage. Provided as a JSON map of string keys to string values, such as `{"env": "prod", "owner": "data-team"}`. |
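+
+For example, a minimal `.env` sketch for S3 storage with object tags; the bucket name, prefix, and tag values are illustrative:
+
+```text
+LANGFLOW_STORAGE_TYPE=s3
+LANGFLOW_OBJECT_STORAGE_BUCKET_NAME=my-langflow-bucket
+LANGFLOW_OBJECT_STORAGE_PREFIX=langflow-files
+LANGFLOW_OBJECT_STORAGE_TAGS={"env": "prod", "owner": "data-team"}
+```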
+
## See also
-* [Data components](/components-data)
-* [Processing components](/components-processing)
\ No newline at end of file
+* [Components reference](/concepts-components)
\ No newline at end of file
diff --git a/docs/docs/Develop/contributing-telemetry.mdx b/docs/docs/Develop/contributing-telemetry.mdx
index cd3695aa109d..691d8aa023af 100644
--- a/docs/docs/Develop/contributing-telemetry.mdx
+++ b/docs/docs/Develop/contributing-telemetry.mdx
@@ -50,6 +50,22 @@ This telemetry event is sent once when the telemetry service starts.
- **BackendOnly**: Boolean indicating whether Langflow is running in backend-only mode, useful for understanding deployment configurations.
- **Desktop**: Indicates whether Langflow is running in desktop mode (Langflow Desktop), helping to understand usage patterns across different deployment types.
+### Email
+
+This telemetry event is sent to track registered email addresses for Langflow Desktop. The event is triggered in two cases:
+
+* Every time a new email address is registered through the POST `/api/v2/registration/` endpoint.
+* Each time you start Langflow Desktop _after_ an email address is registered.
+
+ The first time you start Langflow Desktop and register your email address, the event is reported by the call to the POST `/api/v2/registration/` endpoint.
+
+This telemetry event includes the following information:
+
+- **Email**: The registered email address, which helps track user registrations and understand the Langflow Desktop user base.
+- **ClientType**: Indicates the client type, which can be "desktop" or "oss".
+
+If telemetry is disabled with the `DO_NOT_TRACK` environment variable in Langflow Desktop, you are still prompted to enter your email address, but the email address is stored in your local Langflow database only.
+
### Playground
This telemetry event monitors performance and usage patterns in the **Playground** environment.
diff --git a/docs/docs/Develop/data-types.mdx b/docs/docs/Develop/data-types.mdx
index 640ac14786e8..ebe194763b05 100644
--- a/docs/docs/Develop/data-types.mdx
+++ b/docs/docs/Develop/data-types.mdx
@@ -22,7 +22,7 @@ When building flows, connect output ports to input ports of the same type (color
* In the [workspace](/concepts-overview#workspace), hover over a port to see connection details for that port.
Click a port to **Search** for compatible components.
-* If two components have incompatible data types, you can use a processing component like the [**Type Convert** component](/components-processing#type-convert) to convert the data between components.
+* If two components have incompatible data types, you can use a processing component like the [**Type Convert** component](/type-convert) to convert the data between components.
:::
## Data
@@ -140,7 +140,7 @@ For information about the underlying Python classes that produce `Embeddings`, s
The `LanguageModel` type is a specific data type that can be produced by language model components and accepted by components that use an LLM.
When you change a language model component's output type from **Model Response** to **Language Model**, the component's output port changes from a **Message** port to a **Language Model** port.
-Then, you connect the outgoing **Language Model** port to a **Language Model** input port on a compatible component, such as a **Smart Function** component.
+Then, you connect the outgoing **Language Model** port to a **Language Model** input port on a compatible component, such as a **Smart Transform** component.
For more information about using these components in flows and toggling `LanguageModel` output, see [Language model components](/components-models#language-model-output-types).
@@ -159,7 +159,7 @@ You can inspect the [component code](/concepts-components#component-code) to see
**Memory** ports are used to integrate a **Message History** component with external chat memory storage.
-For more information, see the [**Message History** component](/components-helpers#message-history).
+For more information, see the [**Message History** component](/message-history).
## Message
@@ -217,14 +217,14 @@ The strictness depends on the component.
### Message data in Input and Output components
-In flows with [**Chat Input and Output** components](/components-io#chat-io), `Message` data provides a consistent structure for chat interactions, and it is ideal for chatbots, conversational analysis, and other use cases based on a dialogue with an LLM or agent.
+In flows with [**Chat Input and Output** components](/chat-input-and-output), `Message` data provides a consistent structure for chat interactions, and it is ideal for chatbots, conversational analysis, and other use cases based on a dialogue with an LLM or agent.
In these flows, the **Playground** chat interface prints only the `Message` attributes that are relevant to the conversation, such as `text`, `files`, and error messages from `content_blocks`.
To see all `Message` attributes, inspect the message logs in the **Playground**.
-In flows with [**Text Input and Output** components](/components-io#text-io), `Message` data is used to pass simple text strings without the chat-related metadata.
+In flows with [**Text Input and Output** components](/text-input-and-output), `Message` data is used to pass simple text strings without the chat-related metadata.
These components handle `Message` data as independent text strings, not as part of an ongoing conversation.
For this reason, a flow with only **Text Input and Output** components isn't compatible with the **Playground**.
-For more information, see [Input and output components](/components-io).
+For more information, see [Text input and output components](/text-input-and-output).
When using the Langflow API, the response includes the `Message` object along with other response data from the flow run.
Langflow API responses can be extremely verbose, so your applications must include code to extract relevant data from the response to return to the user.
@@ -255,7 +255,7 @@ Hover over the port to see the accepted or produced data types.
In Langflow, you can use **Inspect output** to view the output of individual components.
This can help you learn about the different data types and debug problems with invalid or malformed inputs and outputs.
-The following example shows how to inspect the output of a [**Type Convert** component](/components-processing#type-convert), which can convert data from one type to another:
+The following example shows how to inspect the output of a [**Type Convert** component](/type-convert), which can convert data from one type to another:
1. Create a flow, and then connect a **Chat Input** component to a **Type Convert** component.
@@ -336,7 +336,6 @@ The following example shows how to inspect the output of a [**Type Convert** com
## See also
-- [Processing components](/components-processing)
- [Custom components](/components-custom-components)
- [Pydantic Models](https://docs.pydantic.dev/latest/api/base_model/)
- [pandas.DataFrame](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html)
\ No newline at end of file
diff --git a/docs/docs/Develop/environment-variables.mdx b/docs/docs/Develop/environment-variables.mdx
index 01c3251e9361..34cd4e6f783e 100644
--- a/docs/docs/Develop/environment-variables.mdx
+++ b/docs/docs/Develop/environment-variables.mdx
@@ -352,9 +352,9 @@ To make environment variables available to GUI apps on macOS, you need to use `l
/bin/sh
-c
- launchctl setenv LANGFLOW_CONFIG_DIR /Users/your_user/custom/config &&
- launchctl setenv LANGFLOW_PORT 7860 &&
- launchctl setenv LANGFLOW_HOST localhost &&
+ launchctl setenv LANGFLOW_CONFIG_DIR /Users/your_user/custom/config ;
+ launchctl setenv LANGFLOW_PORT 7860 ;
+ launchctl setenv LANGFLOW_HOST localhost ;
launchctl setenv ARIZE_API_KEY ak-...
@@ -366,7 +366,7 @@ To make environment variables available to GUI apps on macOS, you need to use `l
4. Load the file with `launchctl`:
```bash
- launchctl load ~/Library/LaunchAgents/dev.langflow.env.plist
+ launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/dev.langflow.env.plist
```
@@ -456,12 +456,15 @@ The following environment variables set base Langflow server configuration, such
| `LANGFLOW_SSL_KEY_FILE` | String | Not set | Path to the SSL key file for enabling HTTPS on the Langflow web server. This is separate from [database SSL connections](/configuration-custom-database#connect-langflow-to-a-local-postgresql-database). |
| `LANGFLOW_DEACTIVATE_TRACING` | Boolean | `False` | Deactivate tracing functionality. |
| `LANGFLOW_CELERY_ENABLED` | Boolean | `False` | Enable Celery for distributed task processing. |
+| `LANGFLOW_ALEMBIC_LOG_TO_STDOUT` | Boolean | `False` | Whether to log Alembic database migration output to `stdout` instead of a log file. If `True`, Alembic logs to `stdout` and the default log file is ignored. |
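+
+For example, to send Alembic migration logs to `stdout`, add the following line to your Langflow `.env` file:
+
+```text
+LANGFLOW_ALEMBIC_LOG_TO_STDOUT=True
+```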
For more information about deploying Langflow servers, see [Langflow deployment overview](/deployment-overview).
### Storage
-See [Memory management options](/memory) and [Manage files](/concepts-file-management).
+For file storage environment variables, see [File storage environment variables](/concepts-file-management#file-storage-environment-variables).
+
+For database environment variables, including PostgreSQL configuration, see [Memory management options](/memory#configure-external-memory).
### Telemetry
diff --git a/docs/docs/Develop/logging.mdx b/docs/docs/Develop/logging.mdx
index 43ca6592b365..4cc438bb1f89 100644
--- a/docs/docs/Develop/logging.mdx
+++ b/docs/docs/Develop/logging.mdx
@@ -91,7 +91,7 @@ To monitor Langflow logs as they are generated, you can follow the log file:
## Flow and component logs
After you run a flow, you can inspect the logs for the each component and flow run.
-For example, you can inspect `Message` objects ingested and generated by [Input and Output components](/components-io).
+For example, you can inspect `Message` objects ingested and generated by [Input and Output components](/chat-input-and-output).
### View flow logs
diff --git a/docs/docs/Develop/memory.mdx b/docs/docs/Develop/memory.mdx
index 57506253b1c1..e3c19f8e483b 100644
--- a/docs/docs/Develop/memory.mdx
+++ b/docs/docs/Develop/memory.mdx
@@ -100,6 +100,8 @@ To fine-tune your database connection pool and timeout settings, you can set the
Don't use the deprecated environment variables `LANGFLOW_DB_POOL_SIZE` or `LANGFLOW_DB_MAX_OVERFLOW`.
Instead, use `pool_size` and `max_overflow` in `LANGFLOW_DB_CONNECTION_SETTINGS`.
+* `LANGFLOW_MIGRATION_LOCK_NAMESPACE`: Optional namespace for PostgreSQL advisory locks used during database migrations. This is useful when running multiple Langflow instances that share the same PostgreSQL database. Each instance should use a unique namespace to avoid conflicts. If not set, Langflow uses a default namespace. This setting only applies when using PostgreSQL as your database backend.
+
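+    For example, a sketch of the per-instance setting for two Langflow instances that share one PostgreSQL server; the namespace values are illustrative:
+
+    ```text
+    # .env for instance A
+    LANGFLOW_MIGRATION_LOCK_NAMESPACE=langflow-instance-a
+
+    # .env for instance B
+    LANGFLOW_MIGRATION_LOCK_NAMESPACE=langflow-instance-b
+    ```
+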
## Configure cache memory
The default Langflow caching behavior is an asynchronous, in-memory cache:
@@ -174,7 +176,7 @@ All messages are stored in [Langflow storage](#storage-options-and-paths), and t
Typically, this is necessary only if you have specific storage needs that aren't met by Langflow storage.
For example, if you want to manage chat memory data by directly working with the database, or if you want to use a different database than the default Langflow storage.
-For more information and examples, see [**Message History** component](/components-helpers#message-history) and [Agent memory](/agents#agent-memory).
+For more information and examples, see [**Message History** component](/message-history) and [Agent memory](/agents#agent-memory).
## See also
diff --git a/docs/docs/Develop/session-id.mdx b/docs/docs/Develop/session-id.mdx
index 6f8ba4f797c9..ee3e91833da7 100644
--- a/docs/docs/Develop/session-id.mdx
+++ b/docs/docs/Develop/session-id.mdx
@@ -34,7 +34,7 @@ The `my_custom_session_value` value is used in components that accept it, and th
## Retrieval of messages from memory by session ID
-To retrieve messages from local Langflow memory, add a [**Message History** component](/components-helpers#message-history) to your flow.
+To retrieve messages from local Langflow memory, add a [**Message History** component](/message-history) to your flow.
The component accepts `sessionID` as a filter parameter, and uses the session ID value from upstream automatically to retrieve message history by session ID from storage.
Messages can be retrieved by `session_id` from the Langflow API at `GET /v1/monitor/messages`. For more information, see [Monitor endpoints](https://docs.langflow.org/api-monitor).
diff --git a/docs/docs/Flows/concepts-playground.mdx b/docs/docs/Flows/concepts-playground.mdx
index 8511b53d68eb..6adf662077fd 100644
--- a/docs/docs/Flows/concepts-playground.mdx
+++ b/docs/docs/Flows/concepts-playground.mdx
@@ -17,7 +17,7 @@ The **Playground** allows you to quickly iterate over your flow's logic and beha
## Run a flow in the Playground
To run a flow in the **Playground**, open the flow, and then click **Playground**.
-Then, if your flow has a [**Chat Input** component](/components-io), enter a prompt or [use voice mode](/concepts-voice-mode) to trigger the flow and start a chat session.
+Then, if your flow has a [**Chat Input** component](/chat-input-and-output), enter a prompt or [use voice mode](/concepts-voice-mode) to trigger the flow and start a chat session.
:::tip
If there is no message input field in the **Playground**, make sure your flow has a **Chat Input** component that is connected, directly or indirectly, to the **Input** port of a **Language Model** or **Agent** component.
@@ -82,7 +82,7 @@ You can set custom session IDs in the visual editor and programmatically.
-In your [input and output components](/components-io), use the **Session ID** field:
+In your [input and output components](/chat-input-and-output), use the **Session ID** field:
1. Click the component where you want to set a custom session ID.
2. In the [component's header menu](/concepts-components#component-menus), click **Controls**.
diff --git a/docs/docs/Flows/concepts-publish.mdx b/docs/docs/Flows/concepts-publish.mdx
index 4d1ba4ba3205..f898a45dc24a 100644
--- a/docs/docs/Flows/concepts-publish.mdx
+++ b/docs/docs/Flows/concepts-publish.mdx
@@ -122,7 +122,7 @@ For each flow, Langflow provides a code snippet that you can insert into the `
-1. In your RAG chatbot flow, click the **File** component, and then click **File**.
+1. In your RAG chatbot flow, click the **Read File** component, and then click **File**.
2. Select the local file you want to upload, and then click **Open**.
The file is loaded to your Langflow server.
3. To load the data into your vector database, click the vector store component, and then click **Run component** to run the selected component and all prior dependent components.
@@ -140,6 +140,7 @@ This tutorial uses JavaScript for demonstration purposes.
const readline = require('readline');
const { LangflowClient } = require('@datastax/langflow-client');
+  // pragma: allowlist nextline secret
const API_KEY = 'LANGFLOW_API_KEY';
const SERVER = 'LANGFLOW_SERVER_ADDRESS';
const FLOW_ID = 'FLOW_ID';
diff --git a/docs/docs/Tutorials/mcp-tutorial.mdx b/docs/docs/Tutorials/mcp-tutorial.mdx
index c1a64c892a2f..600c321979aa 100644
--- a/docs/docs/Tutorials/mcp-tutorial.mdx
+++ b/docs/docs/Tutorials/mcp-tutorial.mdx
@@ -148,7 +148,7 @@ You need one **MCP Tools** component for each MCP server that you want your flow
7. To test the weather MCP server, click **Playground**, and then ask the LLM `Is it safe to go hiking in the Adirondacks today?`
- The **Playground** shows you the agent's logic as it analyzes the request and select tools to use.
+  The **Playground** shows you the agent's logic as it analyzes the request and selects tools to use.
Ideally, the agent's response will be more specific than the previous response because of the additional context provided by the weather MCP server.
For example:
@@ -162,7 +162,7 @@ You need one **MCP Tools** component for each MCP server that you want your flow
This is a better response, but what makes this MCP server more valuable than just calling a weather API?
- First, MCP servers are often customized for specific tasks, such as highly specialized actions or chained tools for complex, multi-step problem solving.
+ MCP servers are often customized for specific tasks, such as highly specialized actions or chained tools for complex, multi-step problem solving.
Typically, you would have to write a custom script for a specific task, possibly including multiple API calls in a single script, and then you would have to either execute this script outside the context of the agent or provide it to your agent in some way.
Instead, the MCP ensures that all MCP servers are added to agents in the same way, without having to know each server's specific endpoint structures or write custom integrations.
diff --git a/docs/docs/_partial-basic-component-structure.mdx b/docs/docs/_partial-basic-component-structure.mdx
new file mode 100644
index 000000000000..9d4bb37ab36a
--- /dev/null
+++ b/docs/docs/_partial-basic-component-structure.mdx
@@ -0,0 +1,70 @@
+1. Create a Python file for your component, such as `dataframe_processor.py`.
+
+2. Write your component as a subclass of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class: create a new class that inherits from `Component` and overrides the base class's methods.
+
+ :::tip Backwards compatibility
+   The `lfx` import path replaced `from langflow.custom import Component` in Langflow 1.7, but the original import path is still compatible and works the same way.
+ :::
+
+ ```python
+ from typing import Any, Dict, Optional
+ import pandas as pd
+ from lfx.custom.custom_component.component import Component
+
+ class DataFrameProcessor(Component):
+ """A component that processes pandas DataFrames with various operations."""
+ ```
+
+3. Define class attributes to provide information about your custom component:
+
+ ```python
+ from typing import Any, Dict, Optional
+ import pandas as pd
+ from lfx.custom.custom_component.component import Component
+
+ class DataFrameProcessor(Component):
+ """A component that processes pandas DataFrames with various operations."""
+
+ display_name: str = "DataFrame Processor"
+ description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
+ documentation: str = "https://docs.langflow.org/components-dataframe-processor"
+ icon: str = "DataframeIcon"
+ priority: int = 100
+ name: str = "dataframe_processor"
+ ```
+
+ * `display_name`: A user-friendly name shown in the visual editor.
+ * `description`: A brief description of what your component does.
+ * `documentation`: A link to detailed documentation.
+ * `icon`: An emoji or icon identifier for visual representation.
+    Langflow uses [Lucide](https://lucide.dev/icons) for icons. To assign an icon to your component, set the `icon` attribute to the name of a Lucide icon as a string, such as `icon = "file-text"`. Langflow renders icons from the Lucide library automatically.
+ For more information, see [Contributing bundles](/contributing-bundles#add-the-bundle-to-the-frontend-folder).
+ * `priority`: An optional integer to control display order. Lower numbers appear first.
+    * `name`: An optional internal identifier that defaults to the class name.
+
+4. Define the component's interface by specifying its inputs, outputs, and the method that will process them. The method name must match the `method` field in your outputs list, as this is how Langflow knows which method to call to generate each output.
+
+ This example creates a minimal custom component skeleton.
+
+ ```python
+ from typing import Any, Dict, Optional
+ import pandas as pd
+ from lfx.custom.custom_component.component import Component
+
+ class DataFrameProcessor(Component):
+ """A component that processes pandas DataFrames with various operations."""
+
+ display_name: str = "DataFrame Processor"
+ description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
+ documentation: str = "https://docs.langflow.org/components-dataframe-processor"
+ icon: str = "DataframeIcon"
+ priority: int = 100
+ name: str = "dataframe_processor"
+
+ # input and output lists
+ inputs = []
+ outputs = []
+
+ # method
+ def some_output_method(self):
+ return ...
+ ```
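+
+    To make the wiring concrete, the following is a minimal sketch of a complete component with one input and one output. The `GreeterComponent` name, its `subject` input, and its `build_greeting` method are illustrative; it uses the `langflow.io` helpers with the backwards-compatible `langflow` import path noted above:
+
+    ```python
+    from langflow.custom import Component
+    from langflow.io import MessageTextInput, Output
+    from langflow.schema.message import Message
+
+
+    class GreeterComponent(Component):
+        """Minimal example: one text input, one Message output."""
+
+        display_name = "Greeter"
+        description = "Returns a greeting for the given subject."
+
+        inputs = [
+            MessageTextInput(name="subject", display_name="Subject"),
+        ]
+        outputs = [
+            # The `method` value must match the method name defined below.
+            Output(display_name="Greeting", name="greeting", method="build_greeting"),
+        ]
+
+        def build_greeting(self) -> Message:
+            # Each input is available as an attribute named after the input's `name`.
+            return Message(text=f"Hello, {self.subject}!")
+    ```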
diff --git a/docs/docs/_partial-docker-docling-deps.mdx b/docs/docs/_partial-docker-docling-deps.mdx
new file mode 100644
index 000000000000..a8cbbff11c5f
--- /dev/null
+++ b/docs/docs/_partial-docker-docling-deps.mdx
@@ -0,0 +1,2 @@
+* **Docker/Linux system dependencies**: If running Langflow in a Docker container on Linux, you might need to install additional system packages for document processing. For more information, see [Document processing errors in Docker containers](/troubleshoot#document-processing-errors-in-docker-containers).
+
diff --git a/docs/docs/_partial-legacy.mdx b/docs/docs/_partial-legacy.mdx
index 14f325b6cb97..6795e496646f 100644
--- a/docs/docs/_partial-legacy.mdx
+++ b/docs/docs/_partial-legacy.mdx
@@ -9,7 +9,7 @@ If you aren't sure how to replace a legacy component,
-For example, many **Core components** provide generic functionality that can support multiple providers and use cases, such as the [**API Request** component](/components-data#api-request).
+For example, many **Core components** provide generic functionality that can support multiple providers and use cases, such as the [**API Request** component](/api-request).
If neither of these options are viable, you could use the legacy component's code to create your own custom component, or [start a discussion](/contributing-github-issues) about the legacy component.
diff --git a/docs/docs/_partial-vector-rag-flow.mdx b/docs/docs/_partial-vector-rag-flow.mdx
index 95efab2c822e..537bb233de41 100644
--- a/docs/docs/_partial-vector-rag-flow.mdx
+++ b/docs/docs/_partial-vector-rag-flow.mdx
@@ -38,7 +38,7 @@ Make sure the components connect to the same vector store, and that the componen
Mixing embedding models in the same vector store can produce inaccurate search results.
:::
-4. Recommended: In the [**Split Text** component](/components-processing#split-text), optimize the chunking settings for your embedding model.
+4. Recommended: In the [**Split Text** component](/split-text), optimize the chunking settings for your embedding model.
For example, if your embedding model has a token limit of 512, then the **Chunk Size** parameter must not exceed that limit.
Additionally, because the **Retriever** subflow passes the chat input directly to the vector store component for vector search, make sure that your chat input string doesn't exceed your embedding model's limits.
@@ -48,7 +48,7 @@ For example, if your embedding model has a token limit of 512, then the **Chunk
5. In the **Language Model** component, enter your OpenAI API key, or select a different provider and model to use for the chat portion of the flow.
6. Run the **Load Data** subflow to populate your vector store.
-In the **File** component, select one or more files, and then click **Run component** on the vector store component in the **Load Data** subflow.
+In the **Read File** component, select one or more files, and then click **Run component** on the vector store component in the **Load Data** subflow.
The **Load Data** subflow loads files from your local machine, chunks them, generates embeddings for the chunks, and then stores the chunks and their embeddings in the vector database.
diff --git a/docs/docusaurus.config.js b/docs/docusaurus.config.js
index e0fd589f204f..6506c7e05f8c 100644
--- a/docs/docusaurus.config.js
+++ b/docs/docusaurus.config.js
@@ -202,7 +202,18 @@ const config = {
},
{
to: "/concepts-components",
- from: ["/components", "/components-overview"],
+ from: [
+ "/components",
+ "/components-overview",
+ "/components-processing",
+ "/components-data",
+ "/components-files",
+ "/components-logic",
+ "/components-tools",
+ "/components-io",
+ "/components-helpers",
+ "/components-memories",
+ ],
},
{
to: "/configuration-global-variables",
@@ -327,10 +338,6 @@ const config = {
to: "/data-types",
from: "/concepts-objects",
},
- {
- to: "/components-helpers",
- from: "/components-memories",
- },
{
to: "/bundles-apify",
from: "/integrations-apify",
diff --git a/docs/openapi/openapi.json b/docs/openapi/openapi.json
index 1fe418f702e1..a88ea08a4173 100644
--- a/docs/openapi/openapi.json
+++ b/docs/openapi/openapi.json
@@ -2,7 +2,7 @@
"openapi": "3.1.0",
"info": {
"title": "Langflow",
- "version": "1.6.9"
+ "version": "1.7.0"
},
"paths": {
"/api/v1/build/{flow_id}/vertices": {
diff --git a/docs/sidebars.js b/docs/sidebars.js
index f74fce3845c4..6ad64663ebef 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -266,28 +266,90 @@ module.exports = {
type: "category",
label: "Core components",
items: [
- "Components/components-io",
- "Components/components-agents",
{
type: "category",
- label: "Models",
+ label: "Input / Output",
items: [
- "Components/components-models",
- "Components/components-embedding-models",
+ "Components/chat-input-and-output",
+ "Components/text-input-and-output",
+ "Components/webhook",
]
},
- "Components/components-data",
{
type: "category",
label: "Processing",
items: [
- "Components/components-processing",
+ "Components/data-operations",
+ "Components/dataframe-operations",
+ "Components/dynamic-create-data",
+ "Components/parser",
+ "Components/split-text",
+ "Components/type-convert",
+ ]
+ },
+ {
+ type: "category",
+ label: "Data Source",
+ items: [
+ "Components/api-request",
+ "Components/mock-data",
+ "Components/url",
+ "Components/web-search",
+ ]
+ },
+ {
+ type: "category",
+ label: "Files",
+ items: [
+ "Components/directory",
+ "Components/read-file",
+ "Components/write-file",
+ ]
+ },
+ {
+ type: "category",
+ label: "Flow Controls",
+ items: [
+ "Components/if-else",
+ "Components/loop",
+ "Components/notify-and-listen",
+ "Components/run-flow",
+ ]
+ },
+ {
+ type: "category",
+ label: "LLM Operations",
+ items: [
+ "Components/batch-run",
+ "Components/llm-selector",
+ "Components/smart-router",
+ "Components/smart-transform",
+ "Components/structured-output",
+ ]
+ },
+ {
+ type: "category",
+ label: "Models and Agents",
+ items: [
+ "Components/components-models",
"Components/components-prompts",
+ "Components/components-agents",
+ "Components/mcp-tools",
+ "Components/components-embedding-models",
+ "Components/message-history",
]
},
- "Components/components-logic",
- "Components/components-helpers",
- "Components/components-tools",
+ {
+ type: "category",
+ label: "Utilities",
+ items: [
+ "Components/calculator",
+ "Components/current-date",
+ "Components/python-interpreter",
+ "Components/sql-database",
+ ]
+ },
+ "Components/legacy-core-components",
],
},
{
@@ -296,6 +358,7 @@ module.exports = {
items: [
"Components/components-bundles",
"Components/bundles-aiml",
+ "Components/bundles-altk",
"Components/bundles-amazon",
"Components/bundles-anthropic",
"Components/bundles-apify",
@@ -310,8 +373,10 @@ module.exports = {
"Components/bundles-clickhouse",
"Components/bundles-cloudflare",
"Components/bundles-cohere",
+ "Components/bundles-cometapi",
"Components/bundles-composio",
"Components/bundles-couchbase",
+ "Components/bundles-cuga",
"Components/bundles-datastax",
"Components/bundles-deepseek",
"Components/bundles-docling",
@@ -432,20 +497,9 @@ module.exports = {
"Contributing/contributing-community",
"Contributing/contributing-how-to-contribute",
"Contributing/contributing-components",
+ "Contributing/contributing-bundles",
"Contributing/contributing-component-tests",
"Contributing/contributing-templates",
- "Contributing/contributing-bundles",
- ],
- },
- {
- type: "category",
- label: "Release notes",
- items: [
- {
- type: "doc",
- id: "Support/release-notes",
- label: "Release notes",
- },
],
},
{
@@ -467,6 +521,11 @@ module.exports = {
id: "Support/luna-for-langflow",
label: "IBM Elite Support for Langflow",
},
+ {
+ type: "doc",
+ id: "Support/release-notes",
+ label: "Release notes",
+ },
],
},
{
diff --git a/docs/static/img/agent-component.png b/docs/static/img/agent-component.png
index 795c2c7756e6..cc99d74a1bf1 100644
Binary files a/docs/static/img/agent-component.png and b/docs/static/img/agent-component.png differ
diff --git a/docs/static/img/agent-example-add-chat.png b/docs/static/img/agent-example-add-chat.png
index 3e5439f2d46a..5387fab24fc4 100644
Binary files a/docs/static/img/agent-example-add-chat.png and b/docs/static/img/agent-example-add-chat.png differ
diff --git a/docs/static/img/agent-example-add-tools.png b/docs/static/img/agent-example-add-tools.png
index 3293dd51e799..a76ef843ae21 100644
Binary files a/docs/static/img/agent-example-add-tools.png and b/docs/static/img/agent-example-add-tools.png differ
diff --git a/docs/static/img/agent-example-agent-as-tool.png b/docs/static/img/agent-example-agent-as-tool.png
index 0be4a3796756..34ba81319940 100644
Binary files a/docs/static/img/agent-example-agent-as-tool.png and b/docs/static/img/agent-example-agent-as-tool.png differ
diff --git a/docs/static/img/agent-example-run-flow-as-tool.png b/docs/static/img/agent-example-run-flow-as-tool.png
index 1578e2b2b8e7..52b53be30017 100644
Binary files a/docs/static/img/agent-example-run-flow-as-tool.png and b/docs/static/img/agent-example-run-flow-as-tool.png differ
diff --git a/docs/static/img/api-pane.png b/docs/static/img/api-pane.png
index 63542e9f2393..8ea721ed615c 100644
Binary files a/docs/static/img/api-pane.png and b/docs/static/img/api-pane.png differ
diff --git a/docs/static/img/component-astra-db-json-tool.png b/docs/static/img/component-astra-db-json-tool.png
index 48bdfa4c5612..26c60280d69e 100644
Binary files a/docs/static/img/component-astra-db-json-tool.png and b/docs/static/img/component-astra-db-json-tool.png differ
diff --git a/docs/static/img/component-cuga.png b/docs/static/img/component-cuga.png
new file mode 100644
index 000000000000..20042fb91c7d
Binary files /dev/null and b/docs/static/img/component-cuga.png differ
diff --git a/docs/static/img/component-data-operations-select-key.png b/docs/static/img/component-data-operations-select-key.png
index d391a47addeb..1c964885a16c 100644
Binary files a/docs/static/img/component-data-operations-select-key.png and b/docs/static/img/component-data-operations-select-key.png differ
diff --git a/docs/static/img/component-groq.png b/docs/static/img/component-groq.png
index b6aeea14ed9d..490cad5d88ff 100644
Binary files a/docs/static/img/component-groq.png and b/docs/static/img/component-groq.png differ
diff --git a/docs/static/img/component-ollama-embeddings-chromadb.png b/docs/static/img/component-ollama-embeddings-chromadb.png
index c4765f7eaa43..b2cb95e51aa3 100644
Binary files a/docs/static/img/component-ollama-embeddings-chromadb.png and b/docs/static/img/component-ollama-embeddings-chromadb.png differ
diff --git a/docs/static/img/component-ollama-model.png b/docs/static/img/component-ollama-model.png
index d19c5bc54302..b2cb95e51aa3 100644
Binary files a/docs/static/img/component-ollama-model.png and b/docs/static/img/component-ollama-model.png differ
diff --git a/docs/static/img/component-type-convert-and-web-search.png b/docs/static/img/component-type-convert-and-web-search.png
index 0c2539753b43..4ad23adaeae3 100644
Binary files a/docs/static/img/component-type-convert-and-web-search.png and b/docs/static/img/component-type-convert-and-web-search.png differ
diff --git a/docs/static/img/connect-data-components-to-agent.png b/docs/static/img/connect-data-components-to-agent.png
index 72dbc89d4974..ff9bb150fe52 100644
Binary files a/docs/static/img/connect-data-components-to-agent.png and b/docs/static/img/connect-data-components-to-agent.png differ
diff --git a/docs/static/img/ds-lf-docs.png b/docs/static/img/ds-lf-docs.png
deleted file mode 100644
index 46fc70429c86..000000000000
Binary files a/docs/static/img/ds-lf-docs.png and /dev/null differ
diff --git a/docs/static/img/ds-lf-zoom.png b/docs/static/img/ds-lf-zoom.png
deleted file mode 100644
index 53f78b4c616b..000000000000
Binary files a/docs/static/img/ds-lf-zoom.png and /dev/null differ
diff --git a/docs/static/img/hero.png b/docs/static/img/hero.png
deleted file mode 100644
index 3118ea3ecb43..000000000000
Binary files a/docs/static/img/hero.png and /dev/null differ
diff --git a/docs/static/img/integrations.png b/docs/static/img/integrations.png
deleted file mode 100644
index 8ded830775ea..000000000000
Binary files a/docs/static/img/integrations.png and /dev/null differ
diff --git a/docs/static/img/my-projects.png b/docs/static/img/my-projects.png
index 7c64d6002cd8..ddcee08614de 100644
Binary files a/docs/static/img/my-projects.png and b/docs/static/img/my-projects.png differ
diff --git a/docs/static/img/playground-response.png b/docs/static/img/playground-response.png
deleted file mode 100644
index 15f7cc4cfb57..000000000000
Binary files a/docs/static/img/playground-response.png and /dev/null differ
diff --git a/docs/static/img/prompt-component-with-multiple-inputs.png b/docs/static/img/prompt-component-with-multiple-inputs.png
index ca736500038c..29b47ae6d41a 100644
Binary files a/docs/static/img/prompt-component-with-multiple-inputs.png and b/docs/static/img/prompt-component-with-multiple-inputs.png differ
diff --git a/docs/static/img/prompt-component.png b/docs/static/img/prompt-component.png
index d56c3cd58bff..29b47ae6d41a 100644
Binary files a/docs/static/img/prompt-component.png and b/docs/static/img/prompt-component.png differ
diff --git a/docs/static/img/quickstart-simple-agent-flow.png b/docs/static/img/quickstart-simple-agent-flow.png
index 8f855dbb33b4..2e5fc57006c5 100644
Binary files a/docs/static/img/quickstart-simple-agent-flow.png and b/docs/static/img/quickstart-simple-agent-flow.png differ
diff --git a/docs/static/img/workspace-basic-prompting.png b/docs/static/img/workspace-basic-prompting.png
index c13ab8a26840..d9b4325ada2d 100644
Binary files a/docs/static/img/workspace-basic-prompting.png and b/docs/static/img/workspace-basic-prompting.png differ
diff --git a/docs/static/img/workspace.png b/docs/static/img/workspace.png
index 0d13ca2e82f3..eab12987d1f2 100644
Binary files a/docs/static/img/workspace.png and b/docs/static/img/workspace.png differ