diff --git a/docs/docs/API-Reference/api-files.mdx b/docs/docs/API-Reference/api-files.mdx
index 48b8abfb4856..996fd141e046 100644
--- a/docs/docs/API-Reference/api-files.mdx
+++ b/docs/docs/API-Reference/api-files.mdx
@@ -235,7 +235,7 @@ To send image files to your flows through the API, see [Upload image files (v1)]
:::
This endpoint uploads files to your Langflow server's file management system.
-To use an uploaded file in a flow, send the file path to a flow with a [**File** component](/components-data#file).
+To use an uploaded file in a flow, send the file path to a flow with a [**Read File** component](/components-data#file).
The default file limit is 1024 MB. To configure this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables).
@@ -265,10 +265,10 @@ The default file limit is 1024 MB. To configure this value, change the `LANGFLOW
}
```
-2. To use this file in your flow, add a **File** component to your flow.
+2. To use this file in your flow, add a **Read File** component to your flow.
This component loads files into flows from your local machine or Langflow file management.
-3. Run the flow, passing the `path` to the `File` component in the `tweaks` object:
+3. Run the flow, passing the `path` to the `Read-File` component in the `tweaks` object:
```text
curl --request POST \
@@ -280,7 +280,7 @@ This component loads files into flows from your local machine or Langflow file m
"output_type": "chat",
"input_type": "text",
"tweaks": {
- "File-1olS3": {
+ "Read-File-1olS3": {
"path": [
"07e5b864-e367-4f52-b647-a48035ae7e5e/3a290013-fe1e-4d3d-a454-cacae81288f3.pdf"
]
@@ -289,7 +289,7 @@ This component loads files into flows from your local machine or Langflow file m
}'
```
- To get the `File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.
+ To get the `Read-File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.
If the file path is valid, the flow runs successfully.
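The upload-then-run sequence above can be sketched in Python. This is a minimal sketch, not part of the documented change: the server URL, flow ID, and API key are placeholders, and the component ID `Read-File-1olS3` is the hypothetical example ID from the `curl` request above. Only the payload construction runs here; the final `POST` is described in a comment.

```python
import json

# Placeholder values: substitute your server URL, flow ID, and API key.
LANGFLOW_URL = "http://localhost:7860"
FLOW_ID = "FLOW_ID"

# The file path returned by the upload endpoint's response body.
UPLOADED_PATH = "07e5b864-e367-4f52-b647-a48035ae7e5e/3a290013-fe1e-4d3d-a454-cacae81288f3.pdf"

payload = {
    "output_type": "chat",
    "input_type": "text",
    "tweaks": {
        # Component ID from the visual editor or the Read flow endpoint.
        "Read-File-1olS3": {"path": [UPLOADED_PATH]},
    },
}

body = json.dumps(payload)
print(body)
# Send `body` as the JSON payload of a POST request to
# f"{LANGFLOW_URL}/api/v1/run/{FLOW_ID}", with your Langflow API key
# in the "x-api-key" header, as shown in the curl example above.
```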
diff --git a/docs/docs/API-Reference/api-monitor.mdx b/docs/docs/API-Reference/api-monitor.mdx
index b0b62136db35..0f42edc6a17e 100644
--- a/docs/docs/API-Reference/api-monitor.mdx
+++ b/docs/docs/API-Reference/api-monitor.mdx
@@ -18,9 +18,9 @@ For more information, see the following:
The Vertex build endpoints (`/monitor/builds`) are exclusively for **Playground** functionality.
-When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L143). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component.
+When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L130). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component.
-The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built.
+The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built.
### Get Vertex builds
diff --git a/docs/docs/Agents/mcp-server.mdx b/docs/docs/Agents/mcp-server.mdx
index a916cdd5af7e..e49cffa82301 100644
--- a/docs/docs/Agents/mcp-server.mdx
+++ b/docs/docs/Agents/mcp-server.mdx
@@ -25,10 +25,20 @@ For information about using Langflow as an MCP client and managing MCP server co
## Serve flows as MCP tools {#select-flows-to-serve}
-Each [Langflow project](/concepts-flows#projects) has an MCP server that exposes the project's flows as tools for use by MCP clients.
+When you create a [Langflow project](/concepts-flows#projects), Langflow automatically adds the project to your MCP server's configuration and makes the project's flows available as MCP tools.
-By default, all flows in a project are exposed as tools on the project's MCP server.
-You can change the exposed flows and tool metadata by managing the MCP server settings:
+If your Langflow server has authentication enabled (`LANGFLOW_AUTO_LOGIN=false`), the project's MCP server is automatically configured with API key authentication, and a new API key is generated specifically for accessing the new project's flows.
+For more information, see [MCP server authentication](#authentication).
+
+
+### Prevent automatic MCP server configuration for Langflow projects
+
+To disable automatic MCP server configuration for new projects, set the `LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS` environment variable to `false`.
+For more information, see [MCP server environment variables](#mcp-server-environment-variables).
+
+### Selectively enable and disable MCP servers for Langflow projects
+
+With or without automatic MCP server configuration enabled, you can selectively enable and disable the flows that are exposed as MCP tools:
1. Click the **MCP Server** tab on the [**Projects** page](/concepts-flows#projects), or, when editing a flow, click **Share**, and then select **MCP Server**.
@@ -207,6 +217,8 @@ For more information, see the MCP documentation for your client, such as [Cursor
Each [Langflow project](/concepts-flows#projects) has its own MCP server with its own MCP server authentication settings.
+When you create a new project, Langflow automatically configures authentication for the project's MCP server based on your Langflow server's authentication settings. If authentication is enabled (`LANGFLOW_AUTO_LOGIN=false`), the project's MCP server uses API key authentication, and a new API key is generated for accessing the project's flows.
+
To configure authentication for a Langflow MCP server, go to the **Projects** page in Langflow, click the **MCP Server** tab, click **Edit Auth**, and then select your preferred authentication method:
@@ -287,6 +299,7 @@ The following environment variables set behaviors related to your Langflow proje
| `LANGFLOW_MCP_SERVER_ENABLE_PROGRESS_NOTIFICATIONS` | Boolean | `False` | If `true`, Langflow MCP servers send progress notifications. |
| `LANGFLOW_MCP_SERVER_TIMEOUT` | Integer | `20` | The number of seconds to wait before an MCP server operation expires due to poor connectivity or long-running requests. |
| `LANGFLOW_MCP_MAX_SESSIONS_PER_SERVER` | Integer | `10` | Maximum number of MCP sessions to keep per unique server. |
+| `LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS` | Boolean | `True` | Whether to automatically add newly created projects to the user's MCP servers configuration. If `false`, projects must be manually added to MCP servers. |
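For example, to opt out of automatic MCP server configuration for new projects, you can set the variable from the table above in your `.env` file (or export it in your shell) before starting Langflow:

```text
# .env — disable automatic MCP server configuration for new projects
LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS=false
```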
{/* The anchor on this section (deploy-your-server-externally) is currently a link target in the Langflow UI. Do not change. */}
### Deploy your Langflow MCP server externally {#deploy-your-server-externally}
diff --git a/docs/docs/Components/bundles-aiml.mdx b/docs/docs/Components/bundles-aiml.mdx
index ec2815347b66..d4da5126b10e 100644
--- a/docs/docs/Components/bundles-aiml.mdx
+++ b/docs/docs/Components/bundles-aiml.mdx
@@ -13,7 +13,7 @@ This page describes the components that are available in the **AI/ML** bundle.
## AI/ML API text generation
This component creates a `ChatOpenAI` model instance using the AI/ML API.
-The output is exclusively a **Language Model** ([`LanguageModel`](/data-types#languagemodel)) that you can connect to another LLM-driven component, such as a **Smart Function** component.
+The output is exclusively a **Language Model** ([`LanguageModel`](/data-types#languagemodel)) that you can connect to another LLM-driven component, such as a **Smart Transform** component.
For more information, see the [AI/ML API Langflow integration documentation](https://docs.aimlapi.com/integrations/langflow) and [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-amazon.mdx b/docs/docs/Components/bundles-amazon.mdx
index fd75ad8d67d1..a41ad01d5ad6 100644
--- a/docs/docs/Components/bundles-amazon.mdx
+++ b/docs/docs/Components/bundles-amazon.mdx
@@ -10,34 +10,39 @@ import PartialParams from '@site/docs/_partial-hidden-params.mdx';
This page describes the components that are available in the **Amazon** bundle.
-## Amazon Bedrock
+## Amazon Bedrock Converse
-This component generates text using [Amazon Bedrock LLMs](https://docs.aws.amazon.com/bedrock).
+This component generates text using [Amazon Bedrock LLMs](https://docs.aws.amazon.com/bedrock) with the Bedrock Converse API.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Specifically, the **Language Model** output is an instance of [`ChatBedrock`](https://docs.langchain.com/oss/python/integrations/chat/bedrock) configured according to the component's parameters.
+Specifically, the **Language Model** output is an instance of [`ChatBedrockConverse`](https://docs.langchain.com/oss/python/integrations/chat/bedrock) configured according to the component's parameters.
-Use the **Language Model** output when you want to use an Amazon Bedrock model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Amazon Bedrock model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
-### Amazon Bedrock parameters
+### Amazon Bedrock Converse parameters
| Name | Type | Description |
|------|------|-------------|
-| input | String | Input parameter. The input string for text generation. |
+| input_value | String | Input parameter. The input string for text generation. |
| system_message | String | Input parameter. A system message to pass to the model. |
| stream | Boolean | Input parameter. Whether to stream the response. Only works in chat. Default: `false`. |
-| model_id | String | Input parameter. The Amazon Bedrock model to use. |
-| aws_access_key_id | SecretString | Input parameter. AWS Access Key for authentication. |
-| aws_secret_access_key | SecretString | Input parameter. AWS Secret Key for authentication. |
-| aws_session_token | SecretString | Input parameter. The session key for your AWS account. |
-| credentials_profile_name | String | Input parameter. Name of the AWS credentials profile to use. |
+| model_id | String | Input parameter. The Amazon Bedrock model to use. |
+| aws_access_key_id | SecretString | Input parameter. AWS Access Key for authentication. Required. |
+| aws_secret_access_key | SecretString | Input parameter. AWS Secret Key for authentication. Required. |
+| aws_session_token | SecretString | Input parameter. The session key for your AWS account. Only needed for temporary credentials. |
+| credentials_profile_name | String | Input parameter. Name of the AWS credentials profile to use. If not provided, the default profile will be used. |
| region_name | String | Input parameter. AWS region where your Bedrock resources are located. Default: `us-east-1`. |
-| model_kwargs | Dictionary | Input parameter. Additional keyword arguments to pass to the model. |
| endpoint_url | String | Input parameter. Custom endpoint URL for a Bedrock service. |
+| temperature | Float | Input parameter. Controls randomness in output. Higher values make output more random. Default: `0.7`. |
+| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Default: `4096`. |
+| top_p | Float | Input parameter. Nucleus sampling parameter. Controls diversity of output. Default: `0.9`. |
+| top_k | Integer | Input parameter. Limits the number of highest probability vocabulary tokens to consider. Note: Not all models support top_k. Default: `250`. |
+| disable_streaming | Boolean | Input parameter. If `true`, disables streaming responses. Useful for batch processing. Default: `false`. |
+| additional_model_fields | Dictionary | Input parameter. Additional model-specific parameters for fine-tuning behavior. |
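Like any component parameters, these can be overridden at runtime through the API's `tweaks` object. The following is a minimal sketch: the component ID `AmazonBedrockConverse-abc12` is hypothetical (copy the real ID from the visual editor or the Read flow endpoint), and the parameter names come from the table above.

```python
import json

# Hypothetical component ID; replace with your component's actual ID.
tweaks = {
    "AmazonBedrockConverse-abc12": {
        "model_id": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "region_name": "us-east-1",
        "temperature": 0.7,
        "max_tokens": 4096,
    }
}

# This dict is sent as the "tweaks" field of the /api/v1/run payload.
print(json.dumps(tweaks, indent=2))
```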
## Amazon Bedrock Embeddings
@@ -62,7 +67,7 @@ For more information about using embedding model components in flows, see [Embed
## S3 Bucket Uploader
The **S3 Bucket Uploader** component uploads files to an Amazon S3 bucket.
-It is designed to process `Data` input from a **File** or **Directory** component.
+It is designed to process `Data` input from a **Read File** or **Directory** component.
If you upload `Data` from other components, test the results before running the flow in production.
Requires the `boto3` package, which is included in your Langflow installation.
@@ -81,4 +86,22 @@ The component produces logs but it doesn't emit output to the flow.
| **Strategy for file upload** | String | Input parameter. The file upload strategy. **Store Data** (default) iterates over `Data` inputs, logs the file path and text content, and uploads each file to the specified S3 bucket if both file path and text content are available. **Store Original File** iterates through the list of data inputs, retrieves the file path from each data item, uploads the file to the specified S3 bucket if the file path is available, and logs the file path being uploaded. |
| **Data Inputs** | Data | Input parameter. The `Data` input to iterate over and upload as files in the specified S3 bucket. |
| **S3 Prefix** | String | Input parameter. Optional prefix (folder path) within the S3 bucket where files will be uploaded. |
-| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
\ No newline at end of file
+| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
+
+## Legacy Amazon components
+
+import PartialLegacy from '@site/docs/_partial-legacy.mdx';
+
+<PartialLegacy />
+
+The following Amazon components are in legacy status:
+
+<details>
+<summary>Amazon Bedrock</summary>
+
+The **Amazon Bedrock** component was deprecated in favor of the **Amazon Bedrock Converse** component, which uses the Bedrock Converse API for conversation handling.
+
+To use Amazon Bedrock models in your flows, use the [**Amazon Bedrock Converse**](#amazon-bedrock-converse) component instead.
+
+</details>
+
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-anthropic.mdx b/docs/docs/Components/bundles-anthropic.mdx
index 1e0340ec212f..0c3f3f115319 100644
--- a/docs/docs/Components/bundles-anthropic.mdx
+++ b/docs/docs/Components/bundles-anthropic.mdx
@@ -19,7 +19,7 @@ The **Anthropic** component generates text using Anthropic Chat and Language mod
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`ChatAnthropic`](https://docs.langchain.com/oss/python/integrations/chat/anthropic) configured according to the component's parameters.
-Use the **Language Model** output when you want to use an Anthropic model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Anthropic model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-azure.mdx b/docs/docs/Components/bundles-azure.mdx
index 2d5faf1d091b..3d20c8d0078f 100644
--- a/docs/docs/Components/bundles-azure.mdx
+++ b/docs/docs/Components/bundles-azure.mdx
@@ -17,7 +17,7 @@ This component generates text using [Azure OpenAI LLMs](https://learn.microsoft.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`AzureChatOpenAI`](https://docs.langchain.com/oss/python/integrations/chat/azure_chat_openai) configured according to the component's parameters.
-Use the **Language Model** output when you want to use an Azure OpenAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Azure OpenAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-baidu.mdx b/docs/docs/Components/bundles-baidu.mdx
index 9bffcc6a8d35..3dbffb20683f 100644
--- a/docs/docs/Components/bundles-baidu.mdx
+++ b/docs/docs/Components/bundles-baidu.mdx
@@ -15,6 +15,6 @@ The **Qianfan** component generates text using Qianfan's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Qianfan model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Qianfan model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models) and the [Qianfan documentation](https://github.com/baidubce/bce-qianfan-sdk).
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-cohere.mdx b/docs/docs/Components/bundles-cohere.mdx
index b01ee8248f4d..13f5cff22069 100644
--- a/docs/docs/Components/bundles-cohere.mdx
+++ b/docs/docs/Components/bundles-cohere.mdx
@@ -18,7 +18,7 @@ This component generates text using Cohere's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Cohere model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Cohere model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/_bundles-cometapi.mdx b/docs/docs/Components/bundles-cometapi.mdx
similarity index 100%
rename from docs/docs/Components/_bundles-cometapi.mdx
rename to docs/docs/Components/bundles-cometapi.mdx
diff --git a/docs/docs/Components/bundles-deepseek.mdx b/docs/docs/Components/bundles-deepseek.mdx
index 6922d4a0c965..73d9fd33c280 100644
--- a/docs/docs/Components/bundles-deepseek.mdx
+++ b/docs/docs/Components/bundles-deepseek.mdx
@@ -18,7 +18,7 @@ The **DeepSeek** component generates text using DeepSeek's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a DeepSeek model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a DeepSeek model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-docling.mdx b/docs/docs/Components/bundles-docling.mdx
index b057bbfd9eb3..6f1ac3e4105b 100644
--- a/docs/docs/Components/bundles-docling.mdx
+++ b/docs/docs/Components/bundles-docling.mdx
@@ -129,6 +129,42 @@ For more information, see the [Docling core project repository](https://github.c
| md_page_break_placeholder | String | Add this placeholder between pages in the markdown output. |
| doc_key | String | The key to use for the `DoclingDocument` column. |
+### Docling VLM pipeline with remote model {#docling-remote-vlm}
+
+The **Docling Remote VLM** component uses Docling to process input documents through a Vision Language Model (VLM) pipeline that runs a remote model.
+It supports both **IBM Cloud Watsonx** and **OpenAI-compatible** providers.
+
+This component enables document conversion, such as PDF to text or Markdown, using multimodal models hosted on remote APIs.
+
+It outputs `files`, which are processed files containing `DoclingDocument` data.
+
+For more information, see the [Docling VLM pipeline with API model example](https://docling-project.github.io/docling/examples/vlm_pipeline_api_model/).
+
+#### Docling VLM pipeline parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| files | File | The files to process. |
+| provider | String | Select which remote VLM provider to use (`IBM Cloud` or `OpenAI-Compatible`). |
+| vlm_prompt | String | Prompt text to send to the Vision-Language Model during processing. |
+
+#### IBM Cloud parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| watsonx_api_key | Secret String | IBM Cloud API key used for authentication (leave blank to load from `.env`). |
+| watsonx_project_id | String | The Watsonx project ID or deployment space ID associated with the model. |
+| url | String | The base URL of the Watsonx API, such as `https://us-south.ml.cloud.ibm.com`. |
+| model_name | String | Model name from the available Watsonx foundation models list. |
+
+#### OpenAI-Compatible parameters
+
+| Name | Type | Description |
+|------|------|-------------|
+| openai_base_url | String | Base URL for the OpenAI-compatible API, such as `https://openrouter.ai/api/`. |
+| openai_api_key | Secret String | API key for OpenAI-compatible endpoints. Leave this field blank if the key is not required. |
+| openai_model | String | Model ID for the OpenAI-compatible provider, such as `gpt-4o-mini`. |
+
## See also
* [**File** component](/components-data#file)
\ No newline at end of file
diff --git a/docs/docs/Components/bundles-groq.mdx b/docs/docs/Components/bundles-groq.mdx
index f82e590620ee..28a6fbdf2449 100644
--- a/docs/docs/Components/bundles-groq.mdx
+++ b/docs/docs/Components/bundles-groq.mdx
@@ -18,7 +18,7 @@ This component generates text using Groq's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`ChatGroq`](https://docs.langchain.com/oss/python/integrations/chat/groq) configured according to the component's parameters.
-Use the **Language Model** output when you want to use a Groq model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Groq model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-huggingface.mdx b/docs/docs/Components/bundles-huggingface.mdx
index e038bc440f9f..1a97f4637fb3 100644
--- a/docs/docs/Components/bundles-huggingface.mdx
+++ b/docs/docs/Components/bundles-huggingface.mdx
@@ -20,7 +20,7 @@ Authentication is required.
This component can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
Specifically, the **Language Model** output is an instance of [`ChatHuggingFace`](https://docs.langchain.com/oss/python/integrations/chat/huggingface) configured according to the component's parameters.
-Use the **Language Model** output when you want to use a Hugging Face model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Hugging Face model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-ibm.mdx b/docs/docs/Components/bundles-ibm.mdx
index 2042b31cc35c..28957d93f427 100644
--- a/docs/docs/Components/bundles-ibm.mdx
+++ b/docs/docs/Components/bundles-ibm.mdx
@@ -45,7 +45,7 @@ You can use the **IBM watsonx.ai** component anywhere you need a language model
The **IBM watsonx.ai** component can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
The `LanguageModel` output from the **IBM watsonx.ai** component is an instance of `[ChatWatsonx](https://docs.langchain.com/oss/python/integrations/chat/ibm_watsonx)` configured according to the [component's parameters](#ibm-watsonxai-parameters).
diff --git a/docs/docs/Components/bundles-lmstudio.mdx b/docs/docs/Components/bundles-lmstudio.mdx
index a027f4be5f48..0e5fe18c99cf 100644
--- a/docs/docs/Components/bundles-lmstudio.mdx
+++ b/docs/docs/Components/bundles-lmstudio.mdx
@@ -17,7 +17,7 @@ The **LM Studio** component generates text using LM Studio's local language mode
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an LM Studio model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an LM Studio model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-maritalk.mdx b/docs/docs/Components/bundles-maritalk.mdx
index f4d60ec4cbcb..e01499dc1912 100644
--- a/docs/docs/Components/bundles-maritalk.mdx
+++ b/docs/docs/Components/bundles-maritalk.mdx
@@ -18,7 +18,7 @@ The **MariTalk** component generates text using MariTalk LLMs.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a MariTalk model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a MariTalk model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-mistralai.mdx b/docs/docs/Components/bundles-mistralai.mdx
index b6fcd9365c76..2a829649c9b8 100644
--- a/docs/docs/Components/bundles-mistralai.mdx
+++ b/docs/docs/Components/bundles-mistralai.mdx
@@ -18,7 +18,7 @@ The **MistralAI** component generates text using MistralAI LLMs.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a MistralAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a MistralAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-novita.mdx b/docs/docs/Components/bundles-novita.mdx
index fcd43a40877c..8ffe5e79e428 100644
--- a/docs/docs/Components/bundles-novita.mdx
+++ b/docs/docs/Components/bundles-novita.mdx
@@ -16,7 +16,7 @@ This component generates text using [Novita's language models](https://novita.ai
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Novita model as the LLM for another LLM-driven component, such as a **Language Model** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Novita model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-ollama.mdx b/docs/docs/Components/bundles-ollama.mdx
index 65c016b57648..8ffff1fa034e 100644
--- a/docs/docs/Components/bundles-ollama.mdx
+++ b/docs/docs/Components/bundles-ollama.mdx
@@ -32,7 +32,7 @@ To use the **Ollama** component in a flow, connect Langflow to your locally runn
5. Connect the **Ollama** component to other components in the flow, depending on how you want to use the model.
- Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. For more information, see [Language model components](/components-models).
+ Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models).
In the following example, the flow uses `LanguageModel` output to use an Ollama model as the LLM for an [**Agent** component](/components-agents).
diff --git a/docs/docs/Components/bundles-openai.mdx b/docs/docs/Components/bundles-openai.mdx
index 6e98e92df8d6..bbf735bb8f31 100644
--- a/docs/docs/Components/bundles-openai.mdx
+++ b/docs/docs/Components/bundles-openai.mdx
@@ -20,7 +20,7 @@ It provides access to the same OpenAI models that are available in the core **La
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a specific OpenAI model configuration as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a specific OpenAI model configuration as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-openrouter.mdx b/docs/docs/Components/bundles-openrouter.mdx
index e35c1c52782f..ae67e0d0d165 100644
--- a/docs/docs/Components/bundles-openrouter.mdx
+++ b/docs/docs/Components/bundles-openrouter.mdx
@@ -18,7 +18,7 @@ This component generates text using OpenRouter's unified API for multiple AI mod
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an OpenRouter model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an OpenRouter model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-perplexity.mdx b/docs/docs/Components/bundles-perplexity.mdx
index c46fd1230f1f..309c02d35eca 100644
--- a/docs/docs/Components/bundles-perplexity.mdx
+++ b/docs/docs/Components/bundles-perplexity.mdx
@@ -18,7 +18,7 @@ This component generates text using Perplexity's language models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-sambanova.mdx b/docs/docs/Components/bundles-sambanova.mdx
index 3bfc695e072b..37baeaca6bd9 100644
--- a/docs/docs/Components/bundles-sambanova.mdx
+++ b/docs/docs/Components/bundles-sambanova.mdx
@@ -18,7 +18,7 @@ This component generates text using SambaNova LLMs.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a SambaNova model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a SambaNova model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-vertexai.mdx b/docs/docs/Components/bundles-vertexai.mdx
index deda5a04ea5b..70cd4430aa87 100644
--- a/docs/docs/Components/bundles-vertexai.mdx
+++ b/docs/docs/Components/bundles-vertexai.mdx
@@ -20,7 +20,7 @@ The **Vertex AI** component generates text using Google Vertex AI models.
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use a Vertex AI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Vertex AI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/bundles-xai.mdx b/docs/docs/Components/bundles-xai.mdx
index 896db788a093..01bc99026ad3 100644
--- a/docs/docs/Components/bundles-xai.mdx
+++ b/docs/docs/Components/bundles-xai.mdx
@@ -18,7 +18,7 @@ The **xAI** component generates text using xAI models like [Grok](https://x.ai/g
It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Use the **Language Model** output when you want to use an xAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an xAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.
For more information, see [Language model components](/components-models).
diff --git a/docs/docs/Components/components-custom-components.mdx b/docs/docs/Components/components-custom-components.mdx
index d58e30547a7b..c65574e7351e 100644
--- a/docs/docs/Components/components-custom-components.mdx
+++ b/docs/docs/Components/components-custom-components.mdx
@@ -6,128 +6,314 @@ slug: /components-custom-components
import Icon from "@site/src/components/icon";
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
+import PartialBasicComponentStructure from '../_partial-basic-component-structure.mdx';
-Custom components extend Langflow's functionality through Python classes that inherit from `Component`. This enables integration of new features, data manipulation, external services, and specialized tools.
+Create your own custom components to add any functionality you need to Langflow, from API integrations to data processing.
-In Langflow's node-based environment, each node is a "component" that performs discrete functions. Custom components are Python classes which define:
+In Langflow's node-based environment, each node is a "component" that performs discrete functions.
+Custom components in Langflow are built upon:
-* **Inputs** — Data or parameters your component requires.
-* **Outputs** — Data your component provides to downstream nodes.
-* **Logic** — How you process inputs to produce outputs.
+* The Python class that inherits from `Component`.
+* Class-level attributes that identify and describe the component.
+* [Input and output lists](#inputs-and-outputs) that determine data flow.
+* Methods that define the component's behavior and logic.
+* Internal variables for [error handling and logging](#error-handling-and-logging).
-The benefits of creating custom components include unlimited extensibility, reusability, automatic field generation in the visual editor based on inputs, and type-safe connections between nodes.
+Use the [Custom component quickstart](#quickstart) to add an example component to Langflow, and then use the reference guide that follows for more advanced component customization.
-Create custom components for performing specialized tasks, calling APIs, or adding advanced logic.
+## Custom component quickstart {#quickstart}
-Custom components in Langflow are built upon:
+Create a custom `DataFrameProcessor` component by creating a Python file, saving it in the correct folder, including an `__init__.py` file, and loading it into Langflow.
-* The Python class that inherits from `Component`.
-* Class-level attributes that identify and describe the component.
-* Input and output lists that determine data flow.
-* Internal variables for logging and advanced logic.
+### Create a Python file
-## Class-level attributes
+
-Define these attributes to control a custom component's appearance and behavior:
+### Save the custom component {#custom-component-path}
-```python
-class MyCsvReader(Component):
- display_name = "CSV Reader"
- description = "Reads CSV files"
- icon = "file-text"
- name = "CSVReader"
- documentation = "http://docs.example.com/csv_reader"
+Save the custom component in the Langflow directory where the UI will discover and load it.
+
+By default, Langflow looks for custom components in the `src/lfx/src/lfx/components` directory.
+
+When saving components in the default directory, they must be organized in a specific directory structure to be properly loaded and displayed in the visual editor.
+
+Components must be placed inside category folders, not directly in the base directory.
+
+The category folder name determines where the component appears in the Langflow **Core components** menu.
+For example, to add the example `DataFrameProcessor` component to the **Data** category, place it in the `data` subfolder:
+
+```
+src/lfx/src/lfx/components/
+ └── data/ # Category folder (determines menu location)
+ ├── __init__.py # Required - makes it a Python package
+ └── dataframe_processor.py # Your custom component file
```
-* `display_name`: A user-friendly label shown in the visual editor.
-* `description`: A brief summary shown in tooltips and printed below the component name when added to a flow.
-* `icon`: A decorative icon from Langflow's icon library, printed next to the name.
+If you're creating custom components in a different location using the `LANGFLOW_COMPONENTS_PATH` [environment variable](/environment-variables), components must be similarly organized in a specific directory structure to be displayed in the visual editor.
- Langflow uses [Lucide](https://lucide.dev/icons) for icons. To assign an icon to your component, set the icon attribute to the name of a Lucide icon as a string, such as `icon = "file-text"`. Langflow renders icons from the Lucide library automatically.
+```
+/your/custom/components/path/ # Base directory set by LANGFLOW_COMPONENTS_PATH
+ └── category_name/
+ ├── __init__.py
+ └── custom_component.py
+```
+
+You can have multiple category folders to organize components into different categories:
+```
+/app/custom_components/
+ ├── data/
+ │ ├── __init__.py
+ │ └── dataframe_processor.py
+ └── tools/
+ ├── __init__.py
+ └── custom_tool.py
+```
-* `name`: A unique internal identifier, typically the same name as the folder containing your component code.
-* `documentation`: An optional link to external documentation, such as API or product documentation.
+### Create the `__init__.py` file
+
+Each category directory **must** contain an `__init__.py` file for Langflow to properly recognize and load the components.
+This is a Python package requirement that ensures the directory is treated as a module.
-### Structure of a custom component
+To include the `DataFrameProcessor` component, create a file named `__init__.py` in your component's directory with the following content.
-A Langflow custom component is more than a class with inputs and outputs. It includes an internal structure with optional lifecycle steps, output generation, front-end interaction, and logic organization.
+```python
+from .dataframe_processor import DataFrameProcessor
-A basic component:
+__all__ = ["DataFrameProcessor"]
+```
-* Inherits from `langflow.custom.Component`.
-* Declares metadata like `display_name`, `description`, `icon`, and more.
-* Defines `inputs` and `outputs` lists.
-* Implements methods matching output specifications.
+
+**Lazy load the `DataFrameProcessor` component**
-A minimal custom component skeleton contains the following:
+Alternatively, you can load your component **lazily**, which is better for performance but a little more complex.
```python
-from langflow.custom import Component
-from langflow.template import Output
+from __future__ import annotations
+
+from typing import TYPE_CHECKING, Any
-class MyComponent(Component):
- display_name = "My Component"
- description = "A short summary."
- icon = "sparkles"
- name = "MyComponent"
+from lfx.components._importing import import_mod
- inputs = []
- outputs = []
+if TYPE_CHECKING:
+ from lfx.components.data.dataframe_processor import DataFrameProcessor
+
+_dynamic_imports = {
+ "DataFrameProcessor": "dataframe_processor",
+}
+
+__all__ = [
+ "DataFrameProcessor",
+]
+
+def __getattr__(attr_name: str) -> Any:
+ """Lazily import data components on attribute access."""
+ if attr_name not in _dynamic_imports:
+ msg = f"module '{__name__}' has no attribute '{attr_name}'"
+ raise AttributeError(msg)
+ try:
+ result = import_mod(attr_name, _dynamic_imports[attr_name], __spec__.parent)
+ except (ModuleNotFoundError, ImportError, AttributeError) as e:
+ msg = f"Could not import '{attr_name}' from '{__name__}': {e}"
+ raise AttributeError(msg) from e
+ globals()[attr_name] = result
+ return result
+
+def __dir__() -> list[str]:
+ return list(__all__)
+```
+
+For an additional example of lazy loading, see the [FAISS component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/FAISS/__init__.py).
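The lazy `__init__.py` above relies on Python's module-level `__getattr__` hook (PEP 562). The following self-contained sketch demonstrates the same pattern without Langflow; the `lazy_demo` module name and its attribute map are invented for illustration:

```python
import importlib
import sys
import types

# Build a throwaway module at runtime to stand in for a package's
# __init__.py; "lazy_demo" and its attribute map are illustrative only.
lazy = types.ModuleType("lazy_demo")
lazy._dynamic_imports = {"sqrt": "math"}  # attribute name -> providing module

def _module_getattr(attr_name):
    """Called only when attr_name is not yet in the module's namespace."""
    providers = lazy._dynamic_imports
    if attr_name not in providers:
        raise AttributeError(f"module 'lazy_demo' has no attribute '{attr_name}'")
    result = getattr(importlib.import_module(providers[attr_name]), attr_name)
    setattr(lazy, attr_name, result)  # cache so later lookups skip __getattr__
    return result

# PEP 562: attribute lookups that miss the module's __dict__ fall back
# to the module-level __getattr__ stored in that __dict__.
lazy.__getattr__ = _module_getattr
sys.modules["lazy_demo"] = lazy

import lazy_demo

print(lazy_demo.sqrt(9))  # the underlying import is deferred to first access
```

The same mechanism is why the Langflow `__init__.py` caches results in `globals()`: after the first access, the attribute lives in the module namespace and `__getattr__` is no longer consulted.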
+
+
+
+### Load your component
+
+Ensure the application builds your component.
+
+1. To rebuild the backend and frontend, run `make install_frontend && make build_frontend && make install_backend && uv run langflow run --port 7860`.
+
+2. Refresh the frontend application.
+Your new `DataFrameProcessor` component is available in the **Core components** menu under the **Data** category in the visual editor.
+
+### Docker deployment
+
+When running Langflow in Docker, mount your custom components directory and set the `LANGFLOW_COMPONENTS_PATH` environment variable in the `docker run` command to point to the custom components directory.
+
+```bash
+docker run -d \
+ --name langflow \
+ -p 7860:7860 \
+ -v ./custom_components:/app/custom_components \
+ -e LANGFLOW_COMPONENTS_PATH=/app/custom_components \
+ langflowai/langflow:latest
+```
+
+Create the same custom components directory structure as the example in [Save the custom component](#custom-component-path).
- def some_output_method(self):
- return ...
```
-### Internal Lifecycle and Execution Flow
+/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
+ └── data/
+ ├── __init__.py
+ └── dataframe_processor.py
+```
+
+## How components execute
Langflow's engine manages:
-* **Instantiation**: A component is created and internal structures are initialized.
-* **Assigning Inputs**: Values from the visual editor or connections are assigned to component fields.
-* **Validation and Setup**: Optional hooks like `_pre_run_setup`.
-* **Outputs Generation**: `run()` or `build_results()` triggers output methods.
+1. **Instantiation**: A component is created and internal structures are initialized.
+2. **Assigning Inputs**: Values from the visual editor or connections are assigned to component fields.
+3. **Validation and Setup**: Optional hooks like `_pre_run_setup`.
+4. **Outputs Generation**: `run()` or `build_results()` triggers output methods.
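The four stages above can be sketched with a tiny, Langflow-free mock; the class and attribute names here are invented for illustration and do not reflect the real engine:

```python
# Illustrative sketch of the lifecycle: instantiation, input assignment,
# the setup hook, then output generation.
class MiniComponent:
    _defaults = {"name": "world"}

    def __init__(self, **values):
        # 1. Instantiation and 2. Assigning inputs:
        # user-supplied values override defaults.
        for key, default in self._defaults.items():
            setattr(self, key, values.get(key, default))

    def _pre_run_setup(self):
        # 3. Validation and setup hook runs before the main logic.
        self.ready = True

    def run(self):
        # 4. Outputs generation calls the hook, then produces a result.
        self._pre_run_setup()
        return f"hello {self.name}"

print(MiniComponent(name="flow").run())  # → hello flow
```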
-**Optional Hooks**:
+You can customize execution by overriding these optional hooks in your custom component code.
+
+* **`_pre_run_setup()`** - Used during **Validation and Setup**.
+ Add this method inside your component class to initialize component state before execution begins:
+ ```python
+ class MyComponent(Component):
+ # ... your inputs, outputs, and other attributes ...
+
+ def _pre_run_setup(self):
+ if not hasattr(self, "_initialized"):
+ self._initialized = True
+ self.iteration = 0
+ ```
-* `initialize_data` or `_pre_run_setup` can run setup logic before the component's main execution.
-* `__call__`, `run()`, or `_run()` can be overridden to customize how the component is called or to define custom execution logic.
+* **Override `run` or `_run`** - Used during **Outputs Generation**.
+ Add this method inside your component class to customize the main execution logic:
+ ```python
+ class MyComponent(Component):
-### Inputs and outputs
+ async def _run(self):
+ # Custom execution logic here
+ # This runs instead of the default output method calls
+ pass
+ ```
-Custom component inputs are defined with properties like:
+* **Store data in `self.ctx`**.
+ Use `self.ctx` in any of your component methods to share data between method calls.
+ ```python
+ class MyComponent(Component):
-* `name`, `display_name`
-* Optional: `info`, `value`, `advanced`, `is_list`, `tool_mode`, `real_time_refresh`
+ def _pre_run_setup(self):
+ # Initialize counter in setup
+ self.ctx["processed_items"] = 0
-For example:
+ def process_data(self) -> Data:
+ # Increment counter during processing
+ self.ctx["processed_items"] += 1
+ return Data(data={"item": f"processed {self.ctx['processed_items']}"})
-* `StrInput`: simple text input.
-* `DropdownInput`: selectable options.
-* `HandleInput`: specialized connections.
+ def get_summary(self) -> Data:
+ # Access counter in different method
+ total = self.ctx["processed_items"]
+ return Data(data={"summary": f"Processed {total} items total"})
+ ```
-Custom component `Output` properties define:
+## Inputs and outputs
-* `name`, `display_name`, `method`
-* Optional: `info`
+Inputs and outputs are **class-level configurations** that define how data flows through the component, how it appears in the visual editor, and how connections to other components are validated.
-For more information, see [Custom component inputs and outputs](/components-custom-components#custom-component-inputs-and-outputs).
+### Inputs
-### Associated Methods
+Inputs are defined in a class-level `inputs` list. When Langflow loads the component, it uses this list to render component fields and [ports](/concepts-components#component-ports) in the visual editor. Users or other components provide values or connections to fill these inputs.
-Each output is linked to a method:
+An input is usually an instance of a class from `langflow.io` (such as `StrInput`, `DataInput`, or `MessageTextInput`).
-* The output method name must match the method name.
-* The method typically returns objects like Message, Data, or DataFrame.
-* The method can use inputs with `self.`.
+For example, this component has three inputs: a text field (`StrInput`), a Boolean toggle (`BoolInput`), and a dropdown selection (`DropdownInput`).
-For example:
+```python
+from langflow.io import StrInput, BoolInput, DropdownInput
+
+inputs = [
+ StrInput(name="title", display_name="Title"),
+ BoolInput(name="enabled", display_name="Enabled", value=True),
+ DropdownInput(name="mode", display_name="Mode", options=["Fast", "Safe", "Experimental"], value="Safe")
+]
+```
+
+The `StrInput` creates a single-line text field for entering text. The `name="title"` parameter means you access this value in your component methods with `self.title`, while `display_name="Title"` shows "Title" as the label in the visual editor.
+
+The `BoolInput` creates a boolean toggle that's enabled by default with `value=True`. Users can turn this on or off, and you access the current state with `self.enabled`.
+
+The `DropdownInput` provides a selection menu with three predefined options: "Fast", "Safe", and "Experimental".
+The `value="Safe"` sets "Safe" as the default selection, and you access the user's choice with `self.mode`.
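The `name`/`value` contract can be illustrated with a hedged, Langflow-free sketch; `FakeInput` and `FakeComponent` are invented stand-ins, not `langflow.io` classes:

```python
# Invented stand-ins showing how an input's `name` becomes a `self.`
# attribute, how `value` supplies the default, and how a dropdown's
# `options` constrain the value.
class FakeInput:
    def __init__(self, name, display_name, value=None, options=None):
        if options is not None and value not in options:
            raise ValueError(f"{value!r} is not one of {options}")
        self.name, self.display_name = name, display_name
        self.value, self.options = value, options

inputs = [
    FakeInput("title", "Title", value=""),
    FakeInput("enabled", "Enabled", value=True),
    FakeInput("mode", "Mode", value="Safe", options=["Fast", "Safe", "Experimental"]),
]

class FakeComponent:
    def __init__(self, inputs, **user_values):
        # Roughly what input assignment does: each input's value lands
        # on `self.<name>`, with user-supplied values overriding defaults.
        for inp in inputs:
            setattr(self, inp.name, user_values.get(inp.name, inp.value))

c = FakeComponent(inputs, title="Report")
print(c.title, c.enabled, c.mode)  # Report True Safe
```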
+
+
+**Additional parameters:**
+* **`name`** - Internal variable name (accessed with `self.`)
+* **`display_name`** - Label shown in the visual editor
+* **`value`** - Default value
+* **`info`** - Tooltip or description
+* **`required`** - Force user to provide a value
+* **`advanced`** - Move field to "Advanced" section
+* **`is_list`** - Allow multiple values
+
+**Additional input types:**
+* **Text**: `MultilineInput` (multi-line text area)
+* **Numbers**: `IntInput`, `FloatInput`
+* **Secrets**: `SecretStrInput` (hidden in UI)
+* **Data**: `DataInput`, `MessageInput`, `MessageTextInput`
+* **Files**: `FileInput`
+* **Connections**: `HandleInput`
+
+
+### Outputs
+
+Outputs are defined in a class-level `outputs` list. When Langflow renders a component, each output becomes a connector point in the visual editor. When you connect something to an output, Langflow automatically calls the corresponding method and passes the returned object to the next component.
+
+An output is usually an instance of `Output` from `langflow.io`.
+
+For example, this component has one output that returns a `DataFrame`:
+
+```python
+from langflow.io import Output
+from langflow.schema import DataFrame
+
+outputs = [
+ Output(
+ name="df_out",
+ display_name="DataFrame Output",
+ method="build_df"
+ )
+]
+
+def build_df(self) -> DataFrame:
+ # Process data and return DataFrame
+ df = DataFrame({"col1": [1, 2], "col2": [3, 4]})
+ self.status = f"Built DataFrame with {len(df)} rows."
+ return df
+```
+
+The `Output` creates a connector point in the visual editor labeled **DataFrame Output**. The `name="df_out"` parameter identifies this output, while `display_name="DataFrame Output"` shows the label in the UI. The `method="build_df"` parameter tells Langflow to call the `build_df` method when this output is connected to another component.
+
+The `build_df` method processes data and returns a `DataFrame`. The `-> DataFrame` type annotation helps Langflow validate connections and provides color-coding in the visual editor. You can also set `self.status` to show progress messages in the UI.
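The `method="build_df"` link can be sketched without Langflow; `FakeOutput` and `FakeComponent` below are invented for illustration and simply show how a `method` string resolves to a component method of the same name:

```python
# Invented stand-ins: each output stores the *name* of the method that
# produces it, and dispatch resolves that name with getattr.
class FakeOutput:
    def __init__(self, name, method):
        self.name, self.method = name, method

class FakeComponent:
    outputs = [FakeOutput(name="df_out", method="build_df")]

    def build_df(self):
        # Stand-in for returning a DataFrame.
        return {"col1": [1, 2], "col2": [3, 4]}

    def produce(self, output_name):
        out = next(o for o in self.outputs if o.name == output_name)
        return getattr(self, out.method)()  # call the linked method

print(FakeComponent().produce("df_out"))  # → {'col1': [1, 2], 'col2': [3, 4]}
```

This is why the `method` value must exactly match a method defined in the class: a mismatch means there is nothing for the lookup to resolve.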
+
+**Additional parameters:**
+* **`name`** - Internal identifier for the output
+* **`display_name`** - Label shown in the visual editor
+* **`method`** - Name of the method called to produce the output
+* **`info`** - Help text shown on hover
+
+**Additional return types:**
+* **`Message`** - Structured chat messages
+* **`Data`** - Flexible object with `.data` and optional `.text`
+* **Primitive types** - `str`, `int`, `bool` (not recommended if you need type consistency)
+
+#### Associated Methods
+
+Each output is linked to a method: the output's `method` parameter must match the name of a method defined in the class. The method typically returns objects like `Message`, `Data`, or `DataFrame`, and can access input values with `self.`.
+
+For example, the `Output` defines a connector point called `file_contents` that will call the `read_file` method when connected. The `read_file` method accesses the filename input with `self.filename`, reads the file content, sets a status message, and returns the content wrapped in a `Data` object.
```python
Output(
- display_name="File Contents",
name="file_contents",
+ display_name="File Contents",
method="read_file"
)
-#...
+
def read_file(self) -> Data:
path = self.filename
with open(path, "r") as f:
@@ -136,12 +322,13 @@ def read_file(self) -> Data:
return Data(data={"content": content})
```
-### Components with multiple outputs
+
+#### Components with multiple outputs
A component can define multiple outputs.
Each output can have a different corresponding method.
-For example:
+For example:
```python
outputs = [
Output(display_name="Processed Data", name="processed_data", method="process_data"),
@@ -149,8 +336,6 @@ outputs = [
]
```
-#### Output Grouping Behavior with `group_outputs`
-
By default, components in Langflow that produce multiple outputs only allow one output selection in the visual editor.
The component will have only one output port where the user can select the preferred output type.
@@ -168,7 +353,7 @@ This behavior is controlled by the `group_outputs` parameter:
In this example, the visual editor provides a single output port, and the user can select one of the outputs.
-Since `group_outputs=False` is the default behavior, it doesn't need to be explicitly set in the component, as shown in this example:
+Since `group_outputs=False` is the default behavior, it doesn't need to be explicitly set in the component, as shown in this example.
```python
outputs = [
@@ -188,9 +373,7 @@ outputs = [
-In this example, all outputs are available simultaneously in the visual editor:
-
-2. `group_outputs=True`
+In this example, all outputs are available simultaneously in the visual editor.
```python
outputs = [
@@ -212,200 +395,7 @@ outputs = [
-### Common internal patterns
-
-#### `_pre_run_setup()`
-
-To initialize a custom component with counters set:
-
-```python
-def _pre_run_setup(self):
- if not hasattr(self, "_initialized"):
- self._initialized = True
- self.iteration = 0
-```
-
-#### Override `run` or `_run`
-You can override `async def _run(self): ...` to define custom execution logic, although the default behavior from the base class usually covers most cases.
-
-#### Store data in `self.ctx`
-Use `self.ctx` as a shared storage for data or counters across the component's execution flow:
-
-```python
-def some_method(self):
- count = self.ctx.get("my_count", 0)
- self.ctx["my_count"] = count + 1
-```
-
-## Directory structure requirements
-
-By default, Langflow looks for custom components in the `/components` directory.
-
-If you're creating custom components in a different location using the `LANGFLOW_COMPONENTS_PATH` [environment variable](/environment-variables), components must be organized in a specific directory structure to be properly loaded and displayed in the visual editor:
-
-Each category directory **must** contain an `__init__.py` file for Langflow to properly recognize and load the components.
-This is a Python package requirement that ensures the directory is treated as a module.
-
-```
-/your/custom/components/path/ # Base directory set by LANGFLOW_COMPONENTS_PATH
- └── category_name/ # Required category subfolder that determines menu name
- ├── __init__.py # Required
- └── custom_component.py # Component file
-```
-
-Components must be placed inside category folders, not directly in the base directory.
-
-The category folder name determines where the component appears in the Langflow **Core components** menu.
-For example, to add a component to the **Helpers** category, place it in the `helpers` subfolder:
-
-```
-/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
- └── helpers/ # Displayed within the "Helpers" category
- ├── __init__.py # Required
- └── custom_component.py # Your component
-```
-
-You can have multiple category folders to organize components into different categories:
-```
-/app/custom_components/
- ├── helpers/
- │ ├── __init__.py
- │ └── helper_component.py
- └── tools/
- ├── __init__.py
- └── tool_component.py
-```
-
-This folder structure is required for Langflow to properly discover and load your custom components. Components placed directly in the base directory aren't loaded.
-
-```
-/app/custom_components/ # LANGFLOW_COMPONENTS_PATH
- └── custom_component.py # Won't be loaded - missing category folder!
-```
-
-## Custom component inputs and outputs
-
-Inputs and outputs define how data flows through the component, how it appears in the visual editor, and how connections to other components are validated.
-
-### Inputs
-
-Inputs are defined in a class-level `inputs` list. When Langflow loads the component, it uses this list to render component fields and [ports](/concepts-components#component-ports) in the visual editor. Users or other components provide values or connections to fill these inputs.
-
-An input is usually an instance of a class from `langflow.io` (such as `StrInput`, `DataInput`, or `MessageTextInput`). The most common constructor parameters are:
-
-* **`name`**: The internal variable name, accessed with `self.`.
-* **`display_name`**: The label shown to users in the visual editor.
-* **`info`** *(optional)*: A tooltip or short description.
-* **`value`** *(optional)*: The default value.
-* **`advanced`** *(optional)*: If `true`, moves the field into the "Advanced" section.
-* **`required`** *(optional)*: If `true`, forces the user to provide a value.
-* **`is_list`** *(optional)*: If `true`, allows multiple values.
-* **`input_types`** *(optional)*: Restricts allowed connection types (e.g., `["Data"]`, `["LanguageModel"]`).
-
-Here are the most commonly used input classes and their typical usage.
-
-**Text Inputs**: For simple text entries.
-* **`StrInput`** creates a single-line text field.
-* **`MultilineInput`** creates a multi-line text area.
-
-**Numeric and Boolean Inputs**: Ensures users can only enter valid numeric or Boolean data.
-* **`BoolInput`**, **`IntInput`**, and **`FloatInput`** provide fields for Boolean, integer, and float values, ensuring type consistency.
-
-**Dropdowns**: For selecting from predefined options, useful for modes or levels.
-* **`DropdownInput`**
-
-**Secrets**: A specialized input for sensitive data, ensuring input is hidden in the visual editor.
-* **`SecretStrInput`** for API keys and passwords.
-
-**Specialized Data Inputs**: Ensures type-checking and color-coded connections in the visual editor.
-* **`DataInput`** expects a `Data` object (typically with `.data` and optional `.text`).
-* **`MessageInput`** expects a `Message` object, used in chat or agent flows.
-* **`MessageTextInput`** simplifies access to the `.text` field of a `Message`.
-
-**Handle-Based Inputs**: Used to connect outputs of specific types, ensuring correct pipeline connections.
-- **`HandleInput`**
-
-**File Uploads**: Allows users to upload files directly through the visual editor or receive file paths from other components.
-- **`FileInput`**
-
-**Lists**: Set `is_list=True` to accept multiple values, ideal for batch or grouped operations.
-
-This example defines three inputs: a text field (`StrInput`), a Boolean toggle (`BoolInput`), and a dropdown selection (`DropdownInput`).
-
-```python
-from langflow.io import StrInput, BoolInput, DropdownInput
-
-inputs = [
- StrInput(name="title", display_name="Title"),
- BoolInput(name="enabled", display_name="Enabled", value=True),
- DropdownInput(name="mode", display_name="Mode", options=["Fast", "Safe", "Experimental"], value="Safe")
-]
-```
-
-### Outputs
-
-Outputs are defined in a class-level `outputs` list. When Langflow renders a component, each output becomes a connector point in the visual editor. When you connect something to an output, Langflow automatically calls the corresponding method and passes the returned object to the next component.
-
-An output is usually an instance of `Output` from `langflow.io`, with common parameters:
-
-* **`name`**: The internal variable name.
-* **`display_name`**: The label shown in the visual editor.
-* **`method`**: The name of the method called to produce the output.
-* **`info`** *(optional)*: Help text shown on hover.
-
-The method must exist in the class, and it is recommended to annotate its return type for better type checking.
-You can also set a `self.status` message inside the method to show progress or logs.
-
-**Common Return Types**:
-- **`Message`**: Structured chat messages.
-- **`Data`**: Flexible object with `.data` and optional `.text`.
-- **`DataFrame`**: Pandas-based tables (`langflow.schema.DataFrame`).
-- **Primitive types**: `str`, `int`, `bool` (not recommended if you need type/color consistency).
-
-In this example, the `DataToDataFrame` component defines its output using the outputs list. The `df_out` output is linked to the `build_df` method, so when connected to another component (node), Langflow calls this method and passes its returned `DataFrame` to the next node. This demonstrates how each output maps to a method that generates the actual output data.
-
-```python
-from langflow.custom import Component
-from langflow.io import DataInput, Output
-from langflow.schema import Data, DataFrame
-
-class DataToDataFrame(Component):
- display_name = "Data to DataFrame"
- description = "Convert multiple Data objects into a DataFrame"
- icon = "table"
- name = "DataToDataFrame"
-
- inputs = [
- DataInput(
- name="items",
- display_name="Data Items",
- info="List of Data objects to convert",
- is_list=True
- )
- ]
-
- outputs = [
- Output(
- name="df_out",
- display_name="DataFrame Output",
- method="build_df"
- )
- ]
-
- def build_df(self) -> DataFrame:
- rows = []
- for item in self.items:
- row_dict = item.data.copy() if item.data else {}
- row_dict["text"] = item.get_text() or ""
- rows.append(row_dict)
-
- df = DataFrame(rows)
- self.status = f"Built DataFrame with {len(rows)} rows."
- return df
-```
-
-
-### Tool Mode
+### Tool mode
Components that support **Tool Mode** can be used as standalone components (when _not_ in **Tool Mode**) or as tools for other components with a **Tools** input, such as **Agent** components.
@@ -422,73 +412,65 @@ inputs = [
]
```
-Langflow currently supports the following input types for **Tool Mode**:
+## Typed annotations
-* `DataInput`
-* `DataFrameInput`
-* `PromptInput`
-* `MessageTextInput`
-* `MultilineInput`
-* `DropdownInput`
+Typed annotations allow Langflow to visually guide users and maintain flow consistency.
+Always annotate your output methods with return types like `-> Data`, `-> Message`, or `-> DataFrame` to enable visual editor color-coding and validation.
+Wrap return values in `Data`, `Message`, or `DataFrame` instead of returning plain structures, and use the same types across your components to make flows predictable and easier to build.
-## Typed annotations
+Typed annotations provide color-coding, where outputs like `-> Data` or `-> Message` get distinct colors; automatic validation that blocks incompatible connections; and improved readability, so users can quickly understand the data flow between components.
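Conceptually, these annotations are ordinary Python return-type hints that tooling can read through introspection. The following standalone sketch illustrates the mechanism only; it is not Langflow's actual implementation, and `ExampleComponent` is a hypothetical class:

```python
import inspect

# Hypothetical class used only for illustration; Langflow components
# subclass langflow.custom.Component in real code.
class ExampleComponent:
    def produce_text(self) -> str:
        return "hello"

# Read the return annotation, as an editor could do to determine
# an output port's type before any code runs.
annotation = inspect.signature(ExampleComponent.produce_text).return_annotation
print(annotation)
```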
-In Langflow, **typed annotations** allow Langflow to visually guide users and maintain flow consistency.
+### Common return types
-Typed annotations provide:
+
+#### Message
-* **Color-coding**: Outputs like `-> Data` or `-> Message` get distinct colors.
-* **Validation**: Langflow blocks incompatible connections automatically.
-* **Readability**: Developers can quickly understand data flow.
-* **Development tools**: Better code suggestions and error checking in your code editor.
+For chat-style outputs. Connects to any of several `Message`-compatible inputs.
-### Common Return Types
+```python
+def produce_message(self) -> Message:
+ return Message(text="Hello! from typed method!", sender="System")
+```
-* `Message`: For chat-style outputs. Connects to any of several `Message`-compatible inputs.
+
+#### Data
- ```python
- def produce_message(self) -> Message:
- return Message(text="Hello! from typed method!", sender="System")
- ```
+For structured data like dicts or partial texts. Connects only to `DataInput` (ports that accept `Data`).
-* `Data`: For structured data like dicts or partial texts. Connects only to `DataInput` (ports that accept `Data`).
+```python
+def get_processed_data(self) -> Data:
+ processed = {"key1": "value1", "key2": 123}
+ return Data(data=processed)
+```
- ```python
- def get_processed_data(self) -> Data:
- processed = {"key1": "value1", "key2": 123}
- return Data(data=processed)
- ```
+
+#### DataFrame
-* `DataFrame`: For tabular data. Connects only to `DataFrameInput` (ports that accept `DataFrame`).
+For tabular data. Connects only to `DataFrameInput` (ports that accept `DataFrame`).
- ```python
- def build_df(self) -> DataFrame:
- pdf = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
- return DataFrame(pdf)
- ```
+```python
+def build_df(self) -> DataFrame:
+ pdf = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
+ return DataFrame(pdf)
+```
-* Primitive Types (`str`, `int`, `bool`): Returning primitives is allowed but wrapping in `Data` or `Message` is recommended for better consistency in the visual editor.
+
+#### Primitive types
- ```python
- def compute_sum(self) -> int:
- return sum(self.numbers)
- ```
+Returning primitives is allowed, but wrapping in `Data` or `Message` is recommended for better consistency in the visual editor.
-### Tips for typed annotations
+```python
+def compute_sum(self) -> int:
+ return sum(self.numbers)
+```
-When using typed annotations, consider the following best practices:
+
+
-* **Always Annotate Outputs**: Specify return types like `-> Data`, `-> Message`, or `-> DataFrame` to enable proper visual editor color-coding and validation.
-* **Wrap Raw Data**: Use `Data`, `Message`, or `DataFrame` wrappers instead of returning plain structures.
-* **Use Primitives Carefully**: Direct `str` or `int` returns are fine for simple flows, but wrapping improves flexibility.
-* **Annotate Helpers Too**: Even if internal, typing improves maintainability and clarity.
-* **Handle Edge Cases**: Prefer returning structured `Data` with error fields when needed.
-* **Stay Consistent**: Use the same types across your components to make flows predictable and easier to build.
## Enable dynamic fields
-In **Langflow**, dynamic fields allow inputs to change or appear based on user interactions. You can make an input dynamic by setting `dynamic=True`.
-Optionally, setting `real_time_refresh=True` triggers the `update_build_config` method to adjust the input's visibility or properties in real time, creating a contextual visual editor experience that only exposes relevant fields based on the user's choices.
+In **Langflow**, dynamic fields allow inputs to change or appear based on user interactions. You can make an input dynamic by setting `dynamic=True`. Optionally, setting `real_time_refresh=True` triggers the `update_build_config` method to adjust the input's visibility or properties in real time, creating a contextual visual editor experience that only exposes relevant fields based on the user's choices.
In this example, the operator field triggers updates with `real_time_refresh=True`.
The `regex_pattern` field is initially hidden and controlled with `dynamic=True`.
@@ -518,11 +500,13 @@ class RegexRouter(Component):
]
```
-### Implement `update_build_config`
+### Show or hide fields based on user selections
-When a field with `real_time_refresh=True` is modified, Langflow calls the `update_build_config` method, passing the updated field name, value, and the component's configuration to dynamically adjust the visibility or properties of other fields based on user input.
+When a user changes a field with `real_time_refresh=True`, Langflow calls your `update_build_config` method.
-This example will show or hide the `regex_pattern` field when the user selects a different operator.
+This method lets you show, hide, or modify other fields based on what the user selected.
+
+This example shows the `regex_pattern` field only when the user selects "regex" from the operator dropdown.
```python
def update_build_config(self, build_config: dict, field_value: str, field_name: str | None = None) -> dict:
@@ -534,89 +518,84 @@ def update_build_config(self, build_config: dict, field_value: str, field_name:
return build_config
```
-### Additional Dynamic Field Controls
-
-You can also modify other properties within `update_build_config`, such as:
-* `required`: Set `build_config["some_field"]["required"] = True/False`
-
-* `advanced`: Set `build_config["some_field"]["advanced"] = True`
-
-* `options`: Modify dynamic dropdown options.
-
-### Tips for Managing Dynamic Fields
-
-When working with dynamic fields, consider the following best practices to ensure a smooth user experience:
-
-* **Minimize field changes**: Hide only fields that are truly irrelevant to avoid confusing users.
-* **Test behavior**: Ensure that adding or removing fields doesn't accidentally erase user input.
-* **Preserve data**: Use `build_config["some_field"]["show"] = False` to hide fields without losing their values.
-* **Clarify logic**: Add `info` notes to explain why fields appear or disappear based on conditions.
-* **Keep it manageable**: If the dynamic logic becomes too complex, consider breaking it into smaller components, unless it serves a clear purpose in a single node.
-
+Beyond showing and hiding fields, you can modify other field properties in `update_build_config`:
+
+* **`required`**: Make fields required or optional dynamically
+ ```python
+ if field_value == "regex":
+ build_config["regex_pattern"]["required"] = True
+ else:
+ build_config["regex_pattern"]["required"] = False
+ ```
+
+* **`advanced`**: Move fields to the "Advanced" section
+ ```python
+ if field_value == "experimental":
+ build_config["regex_pattern"]["advanced"] = False # Show in main section
+ else:
+ build_config["regex_pattern"]["advanced"] = True  # Move to the Advanced section
+ ```
+
+* **`options`**: Change dropdown options based on other selections
+ ```python
+ if field_value == "regex":
+ build_config["operator"]["options"] = ["regex", "contains", "starts_with"]
+ else:
+ build_config["operator"]["options"] = ["equals", "contains", "not_equals"]
+ ```
## Error handling and logging
-In Langflow, robust error handling ensures that your components behave predictably, even when unexpected situations occur, such as invalid inputs, external API failures, or internal logic errors.
+You can raise standard Python exceptions such as `ValueError` or specialized exceptions like `ToolException` when validation fails. Langflow automatically catches these and displays appropriate error messages in the visual editor, helping users quickly identify what went wrong.
-### Error handling techniques
+```python
+def compute_result(self) -> str:
+ if not self.user_input:
+ raise ValueError("No input provided.")
+ # ...
+```
-* **Raise Exceptions**: If a critical error occurs, you can raise standard Python exceptions such as `ValueError`, or specialized exceptions like `ToolException`. Langflow will automatically catch these and display appropriate error messages in the visual editor, helping users quickly identify what went wrong.
+Alternatively, instead of stopping a flow abruptly, you can return a `Data` object containing an `"error"` field. This approach allows the flow to continue operating and enables downstream components to detect and handle the error gracefully.
- ```python
- def compute_result(self) -> str:
- if not self.user_input:
- raise ValueError("No input provided.")
+```python
+def run_model(self) -> Data:
+ try:
# ...
- ```
-
-* **Return Structured Error Data**: Instead of stopping a flow abruptly, you can return a Data object containing an "error" field. This approach allows the flow to continue operating and enables downstream components to detect and handle the error gracefully.
-
- ```python
- def run_model(self) -> Data:
- try:
- # ...
- except Exception as e:
- return Data(data={"error": str(e)})
- ```
-
-### Improve debugging and flow management
+ except Exception as e:
+ return Data(data={"error": str(e)})
+```
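Downstream components can then branch on the `"error"` field instead of crashing. A minimal sketch, using a plain dict as a stand-in for a `Data` payload:

```python
# Plain-dict stand-in for a Data payload; illustrates how a downstream
# step can detect and handle an upstream error gracefully.
def handle_result(payload: dict) -> str:
    if "error" in payload:
        return f"Upstream component failed: {payload['error']}"
    return f"OK: {payload}"

print(handle_result({"error": "timeout"}))
print(handle_result({"rows": 3}))
```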
-* **Use `self.status`**: Each component has a status field where you can store short messages about the execution result—such as success summaries, partial progress, or error notifications. These appear directly in the visual editor, making troubleshooting easier for users.
+Langflow provides several tools to help you debug and manage component execution. You can use `self.status` to display short messages about execution results directly in the visual editor, making troubleshooting easier for users.
- ```python
- def parse_data(self) -> Data:
- # ...
- self.status = f"Parsed {len(rows)} rows successfully."
- return Data(data={"rows": rows})
- ```
+```python
+def parse_data(self) -> Data:
+    # ...
+    self.status = f"Parsed {len(rows)} rows successfully."
+    return Data(data={"rows": rows})
+```
-* **Stop specific outputs with `self.stop(...)`**: You can halt individual output paths when certain conditions fail, without affecting the entire component. This is especially useful when working with components that have multiple output branches.
+You can use `self.stop()` to halt individual output paths when certain conditions fail, without stopping other outputs from the same component.
- ```python
- def some_output(self) -> Data:
- if :
- self.stop("some_output") # Tells Langflow no data flows
- return Data(data={"error": "Condition not met"})
- ```
+This example stops the output if the user input is empty, preventing the component from processing invalid data.
-* **Log events**: You can log key execution details inside components. Logs are displayed in the "Logs" or "Events" section of the component's detail view and can be accessed later through the flow's debug panel or exported files, providing a clear trace of the component's behavior for easier debugging.
+```python
+def some_output(self) -> Data:
+    if not self.user_input or len(self.user_input.strip()) == 0:
+        self.stop("some_output")
+        return Data(data={"error": "Empty input provided"})
+```
- ```python
- def process_file(self, file_path: str):
- self.log(f"Processing file {file_path}")
- # ...
- ```
+You can log key execution details inside components using `self.log()`. These logs are stored as structured data and displayed in the "Logs" or "Events" section of the component's detail view, and can be accessed later through the **Logs** button in the visual editor or exported files.
-### Tips for error handling and logging
+Component logs are distinct from Langflow's main application logging system. `self.log()` creates component-specific logs that appear in the UI, while Langflow's main logging system uses [structlog](https://www.structlog.org) for application-level logging that outputs to `langflow.log` files. For more information, see [Logs](/logging).
-To build more reliable components, consider the following best practices:
+This example logs a message when the component starts processing a file.
-* **Validate inputs early**: Catch missing or invalid inputs at the start to prevent broken logic.
-* **Summarize with `self.status`**: Use short success or error summaries to help users understand results quickly.
-* **Keep logs concise**: Focus on meaningful messages to avoid cluttering the visual editor.
-* **Return structured errors**: When appropriate, return `Data(data={"error": ...})` instead of raising exceptions to allow downstream handling.
-* **Stop outputs selectively**: Only halt specific outputs with `self.stop(...)` if necessary, to preserve correct flow behavior elsewhere.
+```python
+def process_file(self, file_path: str):
+    self.log(f"Processing file {file_path}")
+    # ...
+```
## Contribute custom components to Langflow
-See [How to Contribute](/contributing-components) to contribute your custom component to Langflow.
\ No newline at end of file
+To contribute your custom component to the Langflow project, see [Contribute components](/contributing-components).
\ No newline at end of file
diff --git a/docs/docs/Components/components-data.mdx b/docs/docs/Components/components-data.mdx
index b5b4682339c9..30537f809f29 100644
--- a/docs/docs/Components/components-data.mdx
+++ b/docs/docs/Components/components-data.mdx
@@ -12,7 +12,7 @@ import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx';
Data components bring data into your flows from various sources like files, API endpoints, and URLs.
For example:
-* **Load files**: Import data from a file or directory with the [**File** component](#file) and [**Directory** component](#directory).
+* **Load files**: Import data from a file or directory with the [**Read File** component](#file) and [**Directory** component](#directory).
* **Search the web**: Fetch data from the web with components like the [**News Search** component](#news-search), [**RSS Reader** component](#rss-reader), [**Web Search** component](#web-search), and [**URL** component](#url).
@@ -41,7 +41,7 @@ You can use these components to perform their base functions as isolated steps i
For example flows, see the following:
-* [Create a chatbot that can ingest files](/chat-with-files): Learn how to use a **File** component to load a file as context for a chatbot.
+* [Create a chatbot that can ingest files](/chat-with-files): Learn how to use a **Read File** component to load a file as context for a chatbot.
The file and user input are both passed to the LLM so you can ask questions about the file you uploaded.
* [Create a vector RAG chatbot](/chat-with-rag): Learn how to ingest files for use in Retrieval-Augmented Generation (RAG), and then set up a chatbot that can use the ingested files as context.
@@ -106,12 +106,23 @@ Outputs either a [`Data`](/data-types#data) or [`DataFrame`](/data-types#datafra
| silent_errors | BoolInput | Input parameter. If `true`, errors don't raise an exception. |
| use_multithreading | BoolInput | Input parameter. If `true`, multithreading is used. |
-## File
+## Mock Data
-The **File** component loads and parses files, converts the content into a `Data`, `DataFrame`, or `Message` object.
+The **Mock Data** component generates sample data for testing and development.
+You can select these output types:
+
+* `message_output`: A [Message (text)](/data-types#message) output with Lorem Ipsum sample text.
+* `data_output`: A [Data (JSON)](/data-types#data) object containing a JSON structure with one sample record under `records` and a `summary` section.
+* `dataframe_output`: A [DataFrame (tabular)](/data-types#dataframe) with 50 mock records, including columns such as `customer_id`, `first_name`, and `last_name`.
+
+## Read File {#file}
+
+In Langflow version 1.7.0, this component was renamed from **File** to **Read File**.
+
+The **Read File** component loads and parses files, converting the content into a `Data`, `DataFrame`, or `Message` object.
It supports multiple file types, provides parameters for parallel processing and error handling, and supports advanced parsing with the Docling library.
-You can add files to the **File** component in the visual editor or at runtime, and you can upload multiple files at once.
+You can add files to the **Read File** component in the visual editor or at runtime, and you can upload multiple files at once.
For more information about uploading files and working with files in flows, see [File management](/concepts-file-management) and [Create a chatbot that can ingest files](/chat-with-files).
### File type and size limits
@@ -122,7 +133,7 @@ To modify this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment va
Supported file types
-The following file types are supported by the **File** component.
+The following file types are supported by the **Read File** component.
Use archive and compressed formats to bundle multiple files together, or use the [**Directory** component](#directory) to load all files in a directory.
- `.bz2`
@@ -168,20 +179,20 @@ For videos, see the **Twelve Labs** and **YouTube**
-If you run the **File** component with no file selected, it throws an error, or, if **Silent Errors** is enabled, produces no output.
+If you run the **Read File** component with no file selected, it throws an error, or, if **Silent Errors** is enabled, produces no output.
@@ -596,7 +607,7 @@ The following Data components are in legacy status:
* **Load CSV**
* **Load JSON**
-Replace these components with the **File** component, which supports loading CSV and JSON files, as well as many other file types.
+Replace these components with the **Read File** component, which supports loading CSV and JSON files, as well as many other file types.
## See also
diff --git a/docs/docs/Components/components-embedding-models.mdx b/docs/docs/Components/components-embedding-models.mdx
index 844f34995061..fd1c2652f8f2 100644
--- a/docs/docs/Components/components-embedding-models.mdx
+++ b/docs/docs/Components/components-embedding-models.mdx
@@ -19,7 +19,7 @@ This flow loads a text file, splits the text into chunks, generates embeddings f

-1. Create a flow, add a **File** component, and then select a file containing text data, such as a PDF, that you can use to test the flow.
+1. Create a flow, add a **Read File** component, and then select a file containing text data, such as a PDF, that you can use to test the flow.
2. Add the **Embedding Model** core component, and then provide a valid OpenAI API key.
You can enter the API key directly or use a [global variable](/configuration-global-variables).
@@ -38,7 +38,7 @@ This component stores the generated embeddings so they can be used for similarit
5. Connect the components:
- * Connect the **File** component's **Loaded Files** output to the **Split Text** component's **Data or DataFrame** input.
+ * Connect the **Read File** component's **Loaded Files** output to the **Split Text** component's **Data or DataFrame** input.
* Connect the **Split Text** component's **Chunks** output to the vector store component's **Ingest Data** input.
* Connect the **Embedding Model** component's **Embeddings** output to the vector store component's **Embedding** input.
diff --git a/docs/docs/Components/components-logic.mdx b/docs/docs/Components/components-logic.mdx
index ce5f9f1da2f9..49ba83e18929 100644
--- a/docs/docs/Components/components-logic.mdx
+++ b/docs/docs/Components/components-logic.mdx
@@ -192,6 +192,30 @@ When you select a flow for the **Run Flow** component, it uses the target flow's
| dynamic inputs | Various | Input parameter. Additional inputs are generated based on the selected flow. |
| run_outputs | A `List` of types (`Data`, `Message`, or `DataFrame`) | Output parameter. All outputs are generated from running the flow. |
+## Smart Router {#smart-router}
+
+The **Smart Router** component is an LLM-powered variation of the [**If-Else** component](#if-else).
+Instead of string matching, the **Smart Router** uses a connected [**Language Model** component](/components-models) to categorize and route incoming messages.
+
+You can use the **Smart Router** component anywhere you would use the **If-Else** component.
+For an example, create the [If-Else component example flow](#use-the-if-else-component-in-a-flow), and then replace the **If-Else** component with a **Smart Router** component.
+Instead of a regex, use the **Routes** table to define the outputs for your messages.
+Finally, connect a **Language Model** component to provide the sorting intelligence.
+
+### Smart Router parameters
+
+
+
+| Name | Type | Description |
+|---------------------|----------|-------------------------------------------------------------------|
+| llm | [LanguageModel](/data-types#languagemodel) | Input parameter. The language model to use for categorization. Required. |
+| input_text | String | Input parameter. The primary text input for categorization. Required. |
+| routes | Table | Input parameter. Table defining categories and optional output values. Each row should have a route/category name and optionally a custom output value. Required. |
+| message | Message | Input parameter. Optional override message that replaces both the Input and Output Value for all routes when filled. Advanced. |
+| enable_else_output | Boolean | Input parameter. Include an Else output for cases that don't match any route. Default: false. |
+| custom_prompt | String | Input parameter. Additional instructions for LLM-based categorization. Use `{input_text}` for the input text and `{routes}` for the available categories. |
+| default_result | Message | Output parameter. The Else output. Only available when `enable_else_output` is `true`. Otherwise, output is produced and routed according to the `routes` parameter. |
+
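The `{input_text}` and `{routes}` placeholders in `custom_prompt` behave like standard template substitution. The following sketch shows roughly how such a prompt renders; the exact prompt Langflow constructs internally may differ, and the values here are illustrative:

```python
# Illustrative custom_prompt with the documented placeholders.
custom_prompt = "Classify the message.\nMessage: {input_text}\nCategories: {routes}"

rendered = custom_prompt.format(
    input_text="My order never arrived",
    routes="billing, shipping, other",
)
print(rendered)
```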
## Legacy Logic components
import PartialLegacy from '@site/docs/_partial-legacy.mdx';
diff --git a/docs/docs/Components/components-models.mdx b/docs/docs/Components/components-models.mdx
index 68bd501c4cd0..4107dd597ab2 100644
--- a/docs/docs/Components/components-models.mdx
+++ b/docs/docs/Components/components-models.mdx
@@ -95,7 +95,7 @@ For example, if you are using the **Language Model** core component, you could t
Some components use a language model component to perform LLM-driven actions.
Typically, these components prepare data for further processing by downstream components, rather than emitting direct chat output.
-For an example, see the [**Smart Function** component](/components-processing#smart-transform).
+For an example, see the [**Smart Transform** component](/components-processing#smart-transform).
A component must accept a `LanguageModel` input to use a language model component as a driver, and you must set the language model component's output type to `LanguageModel`.
For more information, see [Language Model output types](#language-model-output-types).
@@ -155,10 +155,10 @@ Language model components, including the core component and bundled components,
* **Model Response**: The default output type emits the model's generated response as [`Message` data](/data-types#message).
Use this output type when you want the typical LLM interaction where the LLM produces a text response based on given input.
-* **Language Model**: Change the language model component's output type to [`LanguageModel`](/data-types#languagemodel) when you need to attach an LLM to another component in your flow, such as an **Agent** or **Smart Function** component.
+* **Language Model**: Change the language model component's output type to [`LanguageModel`](/data-types#languagemodel) when you need to attach an LLM to another component in your flow, such as an **Agent** or **Smart Transform** component.
With this configuration, the language model component supports an action completed by another component, rather than a direct chat interaction.
- For an example, the **Smart Function** component uses an LLM to create a function from natural language input.
+ For an example, the **Smart Transform** component uses an LLM to create a function from natural language input.
## Additional language models
diff --git a/docs/docs/Components/components-processing.mdx b/docs/docs/Components/components-processing.mdx
index 9b2a0330e337..f3a9cf53dc9c 100644
--- a/docs/docs/Components/components-processing.mdx
+++ b/docs/docs/Components/components-processing.mdx
@@ -14,8 +14,9 @@ They have many uses, including:
* Feed instructions and context to your LLMs and agents with the [**Prompt Template** component](#prompt-template).
* Extract content from larger chunks of data with a [**Parser** component](#parser).
-* Filter data with natural language with the [**Smart Function** component](#smart-transform).
-* Save data to your local machine with the [**Save File** component](#save-file).
+* Filter data with natural language with the [**Smart Transform** component](#smart-transform).
+* Perform advanced JSON queries with the [**Data Operations** component](#data-operations) using `jq` expressions.
+* Save data to your local machine with the [**Write File** component](#save-file).
* Transform data into a different data type with the [**Type Convert** component](#type-convert) to pass it between incompatible components.
## Prompt Template
@@ -42,7 +43,7 @@ This is demonstrated in the following example.
1. Connect any language model component to a **Batch Run** component's **Language model** port.
2. Connect `DataFrame` output from another component to the **Batch Run** component's **DataFrame** input.
-For example, you could connect a **File** component with a CSV file.
+For example, you could connect a **Read File** component with a CSV file.
3. In the **Batch Run** component's **Column Name** field, enter the name of the column in the incoming `DataFrame` that contains the text to process.
For example, if you want to extract text from a `name` column in a CSV file, enter `name` in the **Column Name** field.
@@ -100,7 +101,7 @@ For this example, select the **Select Keys** operation.
:::tip
You can select only one operation.
If you need to perform multiple operations on the data, you can chain multiple **Data Operations** components together to execute each operation in sequence.
- For more complex multi-step operations, consider using a component like the **Smart Function** component.
+ For more complex multi-step operations, consider using a component like the **Smart Transform** component.
:::
3. Under **Select Keys**, add keys for `name`, `username`, and `email`.
@@ -165,6 +166,9 @@ Many parameters are conditional based on the selected **Operation** (`operation`
| append_update_data | Append or Update | Input parameter. The data to append or update the existing data with. |
| remove_keys_input | Remove Keys | Input parameter. A list of keys to remove from the data. |
| rename_keys_input | Rename Keys | Input parameter. A list of keys to rename in the data. |
+| mapped_json_display | JSON to Map | Input parameter. JSON structure to explore for path selection. Only applies to the **Path Selection** operation. For more information, see [Path Selection operation examples](#path-selection-operation-examples). |
+| selected_key | Select Path | Input parameter. The JSON path expression to extract values. Only applies to the **Path Selection** operation. For more information, see [Path Selection operation examples](#path-selection-operation-examples). |
+| query | JQ Expression | Input parameter. The [`jq`](https://jqlang.org/manual/) expression for advanced JSON filtering and transformation. Only applies to the **JQ Expression** operation. For more information, see [JQ Expression operation examples](#jq-expression-operation-examples). |
#### Available data operations
@@ -180,6 +184,54 @@ All operations act on an incoming `Data` object.
| Append or Update | `append_update_data` | Adds or updates key-value pairs. |
| Remove Keys | `remove_keys_input` | Removes specified keys from the data. |
| Rename Keys | `rename_keys_input` | Renames keys in the data. |
+| Path Selection | `mapped_json_display`, `selected_key` | Extracts values from nested JSON structures using path expressions. |
+| JQ Expression | `query` | Performs advanced JSON queries using [`jq`](https://jqlang.org/manual/) syntax for filtering, projections, and transformations. |
+
+### Path Selection operation examples
+
+Use the **Path Selection** operation to extract values from nested JSON structures with dot-notation paths.
+
+1. In the **Operations** dropdown, select **Path Selection**.
+2. In the **JSON to Map** field, enter your JSON structure.
+
+ This example uses the following JSON structure.
+ ```json
+ {
+ "user": {
+ "profile": {
+ "name": "John Doe",
+ "email": "john@example.com"
+ },
+ "settings": {
+ "theme": "dark"
+ }
+ }
+ }
+ ```
+ The **Select Path** dropdown auto-populates with available paths.
+3. In the **Select Path** dropdown, select a path.
+ You can select paths such as `.user.profile.name` to extract "John Doe", or select `.user.settings.theme` to extract "dark".
+
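Dot-notation path extraction like this can be sketched in plain Python. This is an illustration of the concept only, not Langflow's implementation:

```python
def select_path(data: dict, path: str):
    # Walk a dot-notation path such as ".user.profile.name".
    for key in path.strip(".").split("."):
        data = data[key]
    return data

doc = {
    "user": {
        "profile": {"name": "John Doe", "email": "john@example.com"},
        "settings": {"theme": "dark"},
    }
}

print(select_path(doc, ".user.profile.name"))   # John Doe
print(select_path(doc, ".user.settings.theme")) # dark
```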
+### JQ Expression operation examples
+
+Use the **JQ Expression** operation to filter and transform JSON with the [jq](https://jqlang.org/) query language.
+
+1. In the **Operations** dropdown, select **JQ Expression**.
+2. In the **JQ Expression** field, enter a `jq` filter to query against the **Data Operations** component's Data input.
+
+ For this example JSON structure, enter expressions like `.user.profile.name` to extract "John Doe", `.user.profile | {name, email}` to project fields to a new object, or `.user.profile | tostring` to convert the field to a string.
+ ```json
+ {
+ "user": {
+ "profile": {
+ "name": "John Doe",
+ "email": "john@example.com"
+ },
+ "settings": {
+ "theme": "dark"
+ }
+ }
+ }
+ ```
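To check jq filters outside Langflow, you can run them with the jq CLI, or approximate simple ones in Python. A sketch of the three example filters above (approximations only; jq's `tostring` emits compact JSON, while `json.dumps` adds spaces by default):

```python
import json

doc = {
    "user": {
        "profile": {"name": "John Doe", "email": "john@example.com"},
        "settings": {"theme": "dark"},
    }
}

# .user.profile.name -- extract a nested value
name = doc["user"]["profile"]["name"]

# .user.profile | {name, email} -- project fields into a new object
projected = {k: doc["user"]["profile"][k] for k in ("name", "email")}

# .user.profile | tostring -- serialize the field to a JSON string
as_string = json.dumps(doc["user"]["profile"])

print(name)  # John Doe
print(projected)
print(as_string)
```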
## DataFrame Operations
@@ -203,7 +255,7 @@ The only requirement is that the preceding component must create `DataFrame` out
The sixth component, **Chat Output**, is optional in this example.
It only serves as a convenient way for you to view the final output in the **Playground**, rather than inspecting the component logs.
- 
+ 
If you want to use this example to test the **DataFrame Operations** component, do the following:
@@ -211,19 +263,19 @@ The only requirement is that the preceding component must create `DataFrame` out
* **API Request**
* **Language Model**
- * **Smart Function**
+ * **Smart Transform**
* **Type Convert**
- 2. Configure the [**Smart Function** component](#smart-transform) and its dependencies:
+ 2. Configure the [**Smart Transform** component](#smart-transform) and its dependencies:
- * **API Request**: Configure the [**API Request** component](/components-data#api-request) to get JSON data from an endpoint of your choice, and then connect the **API Response** output to the **Smart Function** component's **Data** input.
+ * **API Request**: Configure the [**API Request** component](/components-data#api-request) to get JSON data from an endpoint of your choice, and then connect the **API Response** output to the **Smart Transform** component's **Data** input.
* **Language Model**: Select your preferred provider and model, and then enter a valid API key.
- Change the output to **Language Model**, and then connect the `LanguageModel` output to the **Smart Function** component's **Language Model** input.
- * **Smart Function**: In the **Instructions** field, enter natural language instructions to extract data from the API response.
+ Change the output to **Language Model**, and then connect the `LanguageModel` output to the **Smart Transform** component's **Language Model** input.
+ * **Smart Transform**: In the **Instructions** field, enter natural language instructions to extract data from the API response.
Your instructions depend on the response content and desired outcome.
For example, if the response contains a large `result` field, you might provide instructions like `explode the result field out into a Data object`.
- 3. Convert the **Smart Function** component's `Data` output to `DataFrame`:
+ 3. Convert the **Smart Transform** component's `Data` output to `DataFrame`:
1. Connect the **Filtered Data** output to the **Type Convert** component's **Data** input.
2. Set the **Type Convert** component's **Output Type** to **DataFrame**.
@@ -246,13 +298,13 @@ For example, the **Filter** operation filters the rows based on a specified colu
:::tip
You can select only one operation.
If you need to perform multiple operations on the data, you can chain multiple **DataFrame Operations** components together to execute each operation in sequence.
- For more complex multi-step operations, like dramatic schema changes or pivots, consider using an LLM-powered component, like the **Structured Output** or **Smart Function** component, as a replacement or preparation for the **DataFrame Operations** component.
+ For more complex multi-step operations, like dramatic schema changes or pivots, consider using an LLM-powered component, like the **Structured Output** or **Smart Transform** component, as a replacement or preparation for the **DataFrame Operations** component.
:::
- If you're following along with the example flow, select any operation that you want to apply to the data that was extracted by the **Smart Function** component.
+ If you're following along with the example flow, select any operation that you want to apply to the data that was extracted by the **Smart Transform** component.
To view the contents of the incoming `DataFrame`, click **Run component** on the **Type Convert** component, and then **Inspect output**.
If the `DataFrame` seems malformed, click **Inspect output** on each upstream component to determine where the error occurs, and then modify your flow's configuration as needed.
- For example, if the **Smart Function** component didn't extract the expected fields, modify your instructions or verify that the given fields are present in the **API Response** output.
+ For example, if the **Smart Transform** component didn't extract the expected fields, modify your instructions or verify that the given fields are present in the **API Response** output.
4. Configure the operation's parameters.
The specific parameters depend on the selected operation.
@@ -294,7 +346,7 @@ Provide the following parameters:
* **Column Name** (`column_name`): The name of the column to filter on.
* **Filter Value** (`filter_value`): The value to filter on.
-* **Filter Operator** (`filter_operator`): The operator to use for filtering, one of `equals` (default), `not equals`, `contains`, `starts with`, `ends with`, `greater than`, or `less than`.
+* **Filter Operator** (`filter_operator`): The operator to use for filtering, one of `equals` (default), `not equals`, `contains`, `not contains`, `starts with`, `ends with`, `greater than`, or `less than`.
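As a rough mental model only (Langflow's internal implementation may differ), the **Filter** operation behaves like applying a pandas boolean mask chosen by the operator. The string operators are sketched here; `greater than` and `less than` would compare values numerically:

```python
import pandas as pd

df = pd.DataFrame({"name": ["Alice", "Bob"], "city": ["Paris", "London"]})

# Illustrative mapping from filter_operator to a boolean mask.
ops = {
    "equals": lambda s, v: s == v,
    "not equals": lambda s, v: s != v,
    "contains": lambda s, v: s.str.contains(v, na=False),
    "not contains": lambda s, v: ~s.str.contains(v, na=False),
    "starts with": lambda s, v: s.str.startswith(v),
    "ends with": lambda s, v: s.str.endswith(v),
}

# Keep rows where column_name="city" contains filter_value="Par".
filtered = df[ops["contains"](df["city"], "Par")]
```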
@@ -550,7 +602,7 @@ There are several ways you can address these inconsistencies:
* Rectify the source data directly.
* Use other components to amend or filter anomalies before passing the data to the **Parser** component.
- There are many components you can use for this depending on your goal, such as the **Data Operations**, **Structured Output**, and **Smart Function** components.
+ There are many components you can use for this depending on your goal, such as the **Data Operations**, **Structured Output**, and **Smart Transform** components.
* Enable the **Parser** component's **Clean Data** parameter to skip empty rows or lines.
## Python Interpreter
@@ -633,16 +685,18 @@ If you don't include the package imports in the chat, the agent can still create
| python_code | Code | Input parameter. The Python code to execute. Only modules specified in Global Imports can be used. |
| results | Data | Output parameter. The output of the executed Python code, including any printed results or errors. |
-## Save File
+## Write File {#save-file}
-The **Save File** component creates a file containing data produced by another component.
-Several file formats are supported, and you can store files in [Langflow storage](/memory) or the local file system.
+In Langflow version 1.7.0, this component was renamed from **Save File** to **Write File**.
-To configure the **Save File** component and use it in a flow, do the following:
+The **Write File** component creates a file containing data produced by another component.
+Several file formats are supported, and you can store files in [Langflow storage](/memory), AWS S3, Google Drive, or the local file system.
-1. Connect [`DataFrame`](/data-types#dataframe), [`Data`](/data-types#data), or [`Message`](/data-types#message) output from another component to the **Save File** component's **Input** port.
+To configure the **Write File** component and use it in a flow, do the following:
- You can connect the same output to multiple **Save File** components if you want to create multiple files, save the data in different file formats, or save files to multiple locations.
+1. Connect [`DataFrame`](/data-types#dataframe), [`Data`](/data-types#data), or [`Message`](/data-types#message) output from another component to the **Write File** component's **Input** port.
+
+ You can connect the same output to multiple **Write File** components if you want to create multiple files, save the data in different file formats, or save files to multiple locations.
2. In **File Name**, enter a file name and an optional path.
@@ -672,12 +726,12 @@ To configure the **Save File** component and use it in a flow, do the following:
* `Message` can be saved to TXT, JSON (default), or Markdown.
:::warning Overwrites allowed
- If you have multiple **Save File** components, in one or more flows, with the same file name, path, and extension, the file contains the data from the most recent run only.
+ If you have multiple **Write File** components, in one or more flows, with the same file name, path, and extension, the file contains the data from the most recent run only.
Langflow doesn't block overwrites if a matching file already exists.
To avoid unintended overwrites, use unique file names and paths.
:::
-4. To test the **Save File** component, click **Run component**, and then click **Inspect output** to get the filepath where the file was saved.
+4. To test the **Write File** component, click **Run component**, and then click **Inspect output** to get the filepath where the file was saved.
The component's literal output is a `Message` containing the original data type, the file name and extension, and the absolute filepath to the file based on the **File Name** parameter.
For example:
@@ -693,14 +747,14 @@ To configure the **Save File** component and use it in a flow, do the following:
DataFrame saved successfully as '/Users/user.name/Desktop/my_file.csv' at /Users/user.name/Desktop/my_file.csv
```
-
5. Optional: If you want to use the saved file in a flow, you must use an API call or another component to retrieve the file from the given filepath.
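As a rough sketch of the behavior described above (not Langflow's actual implementation), the hypothetical `write_file` helper below serializes its input and reports the saved path, similar to the component's `Message` output:

```python
import json
from pathlib import Path

def write_file(data: dict, file_name: str) -> str:
    """Hypothetical sketch: serialize `data` to JSON at `file_name` and
    return a confirmation message with the absolute filepath."""
    path = Path(file_name).expanduser().resolve()
    path.write_text(json.dumps(data, indent=2), encoding="utf-8")
    return f"Data saved successfully as '{path}'"

msg = write_file({"name": "John Doe"}, "my_file.json")
```

Note that, like the component, this sketch silently overwrites any existing file at the same path.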
-## Smart Function {#smart-transform}
+## Smart Transform {#smart-transform}
-In Langflow version 1.5, this component was renamed from **Lambda Filter** to **Smart Function**.
+This component was renamed from **Lambda Filter** to **Smart Function** in Langflow version 1.5, and later renamed to **Smart Transform**.
-The **Smart Function** component uses an LLM to generate a Lambda function to filter or transform structured data based on natural language instructions.
+The **Smart Transform** component uses an LLM to generate a Lambda function to filter or transform structured data based on natural language instructions.
You must connect this component to a [language model component](/components-models), which is used to generate a function based on the natural language instructions you provide in the **Instructions** parameter.
The LLM runs the function against the data input, and then outputs the results as [`Data`](/data-types#data).
@@ -711,13 +765,13 @@ One sentence or less is preferred because end punctuation, like periods, can cau
If you need to provide more detailed instructions that aren't directly relevant to the Lambda function, you can input them in the **Language Model** component's **Input** field or through a **Prompt Template** component.
:::
-The following example uses the **API Request** endpoint to pass JSON data from the `https://jsonplaceholder.typicode.com/users` endpoint to the **Smart Function** component.
-Then, the **Smart Function** component passes the data and the instruction `extract emails` to the attached **Language Model** component.
+The following example uses the **API Request** component to pass JSON data from the `https://jsonplaceholder.typicode.com/users` endpoint to the **Smart Transform** component.
+Then, the **Smart Transform** component passes the data and the instruction `extract emails` to the attached **Language Model** component.
From there, the LLM generates a filter function that extracts email addresses from the JSON data, returning the filtered data as chat output.
-
+
-### Smart Function parameters
+### Smart Transform parameters
@@ -745,7 +799,7 @@ The **DataFrame** output returns the list of chunks as a structured [`DataFrame`
The **Split Text** component's parameters control how the text is split into chunks, specifically the `chunk_size`, `chunk_overlap`, and `separator` parameters.
-To test the chunking behavior, add a **Text Input** or **File** component with some sample data to chunk, click **Run component** on the **Split Text** component, and then click **Inspect output** to view the list of chunks and their metadata. The **text** column contains the actual text chunks created from your chunking settings.
+To test the chunking behavior, add a **Text Input** or **Read File** component with some sample data to chunk, click **Run component** on the **Split Text** component, and then click **Inspect output** to view the list of chunks and their metadata. The **text** column contains the actual text chunks created from your chunking settings.
If the chunks aren't split as you expect, adjust the parameters, rerun the component, and then inspect the new output.
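To reason about how `chunk_size` and `chunk_overlap` interact, consider this simplified sketch of fixed-size character chunking with overlap. The `split_text` helper is hypothetical; it ignores the `separator` parameter and is not the component's actual implementation:

```python
def split_text(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """Simplified fixed-size chunking: each chunk starts
    (chunk_size - chunk_overlap) characters after the previous one."""
    step = chunk_size - chunk_overlap
    return [text[i : i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("abcdefghijklmnopqrstuvwxyz", chunk_size=10, chunk_overlap=2)
# Each chunk repeats the last 2 characters of the previous chunk.
```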
@@ -780,7 +834,7 @@ For example, you can extract specific details from documents, like email message
To use the **Structured Output** component in a flow, do the following:
1. Provide an **Input Message**, which is the source material from which you want to extract structured data.
-This can come from practically any component, but it is typically a **Chat Input**, **File**, or other component that provides some unstructured or semi-structured input.
+This can come from practically any component, but it is typically a **Chat Input**, **Read File**, or other component that provides some unstructured or semi-structured input.
:::tip
Not all source material has to become structured output.
diff --git a/docs/docs/Components/components-prompts.mdx b/docs/docs/Components/components-prompts.mdx
index e8b67b034100..faa76001b7f6 100644
--- a/docs/docs/Components/components-prompts.mdx
+++ b/docs/docs/Components/components-prompts.mdx
@@ -68,7 +68,7 @@ The following steps demonstrate how to add variables to a **Prompt Template** co
* Enter fixed values directly into the fields.
You can add as many variables as you like in your template.
-For example, you could add variables for `{references}` and `{instructions}`, and then feed that information in from other components, such as **Text Input**, **URL**, or **File** components.
+For example, you could add variables for `{references}` and `{instructions}`, and then feed that information in from other components, such as **Text Input**, **URL**, or **Read File** components.
## See also
diff --git a/docs/docs/Components/concepts-components.mdx b/docs/docs/Components/concepts-components.mdx
index 1509cfc51c45..fcc39a693717 100644
--- a/docs/docs/Components/concepts-components.mdx
+++ b/docs/docs/Components/concepts-components.mdx
@@ -155,7 +155,7 @@ In the context of creating and running flows, component code does the following:
* Passes results to the next component in the flow.
All components inherit from a base `Component` class that defines the component's interface and behavior.
-For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the [`LCTextSplitterComponent`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/base/textsplitters/model.py) class.
+For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/langchain_utilities/recursive_character.py) is a child of the [`LCTextSplitterComponent`](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/base/textsplitters/model.py) class.
Each component's code includes definitions for inputs and outputs, which are represented in the workspace as [component ports](#component-ports).
For example, the `RecursiveCharacterTextSplitter` has four inputs. Each input definition specifies the input type, such as `IntInput`, as well as the encoded name, display name, description, and other parameters for that specific input.
diff --git a/docs/docs/Contributing/contributing-bundles.mdx b/docs/docs/Contributing/contributing-bundles.mdx
index 2b0f9bea1c99..183389136861 100644
--- a/docs/docs/Contributing/contributing-bundles.mdx
+++ b/docs/docs/Contributing/contributing-bundles.mdx
@@ -1,5 +1,5 @@
---
-title: Contribute bundles
+title: Contribute component bundles
slug: /contributing-bundles
---
@@ -11,24 +11,26 @@ If you want to contribute your custom components back to the Langflow project, y
Follow these steps to add components to **Bundles** in the Langflow visual editor.
This example adds a bundle named `DarthVader`.
-## Add the bundle to the backend folder
+For more information on creating custom components, see [Create custom Python components](/components-custom-components).
-1. Navigate to the backend directory in the Langflow repository and create a new folder for your bundle.
-The path for your new component is `src > backend > base > langflow > components > darth_vader`.
-You can view the [components folder](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/components) in the Langflow repository.
+## Add the bundle to the `lfx` components folder
+
+1. Navigate to the `lfx` directory in the Langflow repository and create a new folder for your bundle.
+The path for your new component is `src/lfx/src/lfx/components/darth_vader`.
+You can view the [components folder](https://github.com/langflow-ai/langflow/tree/main/src/lfx/src/lfx/components) in the Langflow repository.
2. Within the newly created `darth_vader` folder, add the following files:
-* `darth_vader_component.py` — This file contains the backend logic for the new bundle. Create multiple `.py` files for multiple components.
-* `__init__.py` — This file initializes the bundle components. You can use any existing `__init__.py` as an example to see how it should be structured.
+ * `darth_vader_component.py` — This file contains the backend logic for the new bundle. Create multiple `.py` files for multiple components.
+ * `__init__.py` — This file initializes the bundle components. You can use any existing `__init__.py` as an example to see how it should be structured.
-For an example of adding multiple components in a bundle, see the [Notion](https://github.com/langflow-ai/langflow/tree/main/src/backend/base/langflow/components/Notion) bundle.
+ For an example of adding multiple components in a bundle, see the [Notion](https://github.com/langflow-ai/langflow/tree/main/src/lfx/src/lfx/components/Notion) bundle.
## Add the bundle to the frontend folder
1. Navigate to the frontend directory in the Langflow repository to add your bundle's icon.
-The path for your new component icon is `src > frontend > src > icons > DarthVader`
+The path for your new component icon is `src/frontend/src/icons/DarthVader`
You can view the [icons folder](https://github.com/langflow-ai/langflow/tree/main/src/frontend/src/icons) in the Langflow repository.
To add your icon, create **three** files inside the `icons/darth_vader` folder.
@@ -105,12 +107,12 @@ For example:
import("@/icons/DeepSeek").then((mod) => ({ default: mod.DeepSeekIcon })),
```
-8. To add your bundle to the **Bundles** menu, edit the [`SIDEBAR_BUNDLES` array](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/utils/styleUtils.ts#L231) in `/src/frontend/src/utils/styleUtils.ts`.
+8. To add your bundle to the **Bundles** menu, edit the [`SIDEBAR_BUNDLES` array](https://github.com/langflow-ai/langflow/blob/main/src/frontend/src/utils/styleUtils.ts#L243) in `/src/frontend/src/utils/styleUtils.ts`.
Add an object to the array with the following keys:
* `display_name`: The text label shown in the Langflow visual editor
- * `name`: The name of the folder you created within the `/src/backend/base/langflow/components` directory
+ * `name`: The name of the folder you created within the `/src/lfx/src/lfx/components` directory
* `icon`: The name of the bundle's icon that you defined in the previous steps
For example:
@@ -126,7 +128,7 @@ For example:
In your component bundle, associate the icon variable with your new bundle.
In your `darth_vader_component.py` file, in the component class, include the icon that you defined in the frontend.
-The `icon` must point to the directory you created for your icons within the `src > frontend > src > icons` directory.
+The `icon` must point to the directory you created for your icons within the `src/frontend/src/icons` directory.
For example:
```
class DarthVaderAPIComponent(LCToolComponent):
diff --git a/docs/docs/Contributing/contributing-component-tests.mdx b/docs/docs/Contributing/contributing-component-tests.mdx
index b237d6d0edf0..6a9e2b0db02a 100644
--- a/docs/docs/Contributing/contributing-component-tests.mdx
+++ b/docs/docs/Contributing/contributing-component-tests.mdx
@@ -9,17 +9,17 @@ This guide outlines how to structure and implement tests for application compone
* The test file should follow the same directory structure as the component being tested, but should be placed in the corresponding unit tests folder.
- For example, if the file path for the component is `src/backend/base/langflow/components/prompts/`, then the test file should be located at `src/backend/tests/unit/components/prompts`.
+ For example, if the file path for the component is `src/lfx/src/lfx/components/data/`, then the test file should be located at `src/backend/tests/unit/components/data`.
* The test file name should use snake case and follow the pattern `test_.py`.
- For example, if the file to be tested is `PromptComponent.py`, then the test file should be named `test_prompt_component.py`.
+ For example, if the file to be tested is `FileComponent.py`, then the test file should be named `test_file_component.py`.
## File structure
* Each test file should group tests into classes by component. There should be no standalone test functions in the file, only test methods within classes.
* Class names should follow the pattern `Test`.
-For example, if the component being tested is `PromptComponent`, then the test class should be named `TestPromptComponent`.
+For example, if the component being tested is `FileComponent`, then the test class should be named `TestFileComponent`.
## Imports, inheritance, and mandatory methods
@@ -39,7 +39,7 @@ These base classes enforce mandatory methods that the component test classes mus
```python
@pytest.fixture
def component_class(self):
- return PromptComponent
+ return FileComponent
```
* `default_kwargs:` Returns a dictionary with the default arguments required to instantiate the component. For example:
@@ -47,7 +47,7 @@ These base classes enforce mandatory methods that the component test classes mus
```python
@pytest.fixture
def default_kwargs(self):
- return {"template": "Hello {name}!", "name": "John", "_session_id": "123"}
+ return {"file_path": "/tmp/test.txt", "_session_id": "123"}
```
* `file_names_mapping:` Returns a list of dictionaries representing the relationship between `version`, `module`, and `file_name` that the tested component has had over time. This can be left empty if it is an unreleased component. For example:
@@ -56,11 +56,11 @@ These base classes enforce mandatory methods that the component test classes mus
@pytest.fixture
def file_names_mapping(self):
return [
- {"version": "1.0.15", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.16", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.17", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.18", "module": "prompts", "file_name": "Prompt"},
- {"version": "1.0.19", "module": "prompts", "file_name": "Prompt"},
+ {"version": "1.0.15", "module": "data", "file_name": "File"},
+ {"version": "1.0.16", "module": "data", "file_name": "File"},
+ {"version": "1.0.17", "module": "data", "file_name": "File"},
+ {"version": "1.0.18", "module": "data", "file_name": "File"},
+ {"version": "1.0.19", "module": "data", "file_name": "File"},
]
```
@@ -101,14 +101,13 @@ Once the basic structure of the test file is defined, implement test methods for
After executing the `.to_frontend_node()` method, the resulting data is available for verification in the dictionary `frontend_node["data"]["node"]`. Assertions should be clear and cover the expected outcomes.
```python
- def test_post_code_processing(self, component_class, default_kwargs):
+ def test_file_component_processing(self, component_class, default_kwargs):
component = component_class(**default_kwargs)
frontend_node = component.to_frontend_node()
node_data = frontend_node["data"]["node"]
- assert node_data["template"]["template"]["value"] == "Hello {name}!"
- assert "name" in node_data["custom_fields"]["template"]
- assert "name" in node_data["template"]
- assert node_data["template"]["name"]["value"] == "John"
+ assert node_data["template"]["path"]["file_path"] == "/tmp/test.txt"
+ assert "path" in node_data["template"]
+ assert node_data["display_name"] == "File"
```
\ No newline at end of file
diff --git a/docs/docs/Contributing/contributing-components.mdx b/docs/docs/Contributing/contributing-components.mdx
index 1f9c84ee6960..7929ac7ce714 100644
--- a/docs/docs/Contributing/contributing-components.mdx
+++ b/docs/docs/Contributing/contributing-components.mdx
@@ -3,84 +3,23 @@ title: Contribute components
slug: /contributing-components
---
+import PartialBasicComponentStructure from '../_partial-basic-component-structure.mdx';
-New components are added as objects of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class.
+New components are added as objects of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/custom/custom_component/component.py) class.
-Dependencies are added to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml#L148) file.
+Dependencies are added to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml) file.
## Contribute an example component to Langflow
Anyone can contribute an example component. For example, to create a new data component called **DataFrame processor**, follow these steps to contribute it to Langflow.
-1. Create a Python file called `dataframe_processor.py`.
-2. Write your processor as an object of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class. You'll create a new class, `DataFrameProcessor`, that will inherit from `Component` and override the base class's methods.
+
+<PartialBasicComponentStructure />
-```python
-from typing import Any, Dict, Optional
-import pandas as pd
-from langflow.custom import Component
-
-class DataFrameProcessor(Component):
- """A component that processes pandas DataFrames with various operations."""
-```
-
-3. Define class attributes to provide information about your custom component:
-```python
-from typing import Any, Dict, Optional
-import pandas as pd
-from langflow.custom import Component
-
-class DataFrameProcessor(Component):
- """A component that processes pandas DataFrames with various operations."""
-
- display_name: str = "DataFrame Processor"
- description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
- documentation: str = "https://docs.langflow.org/components-dataframe-processor"
- icon: str = "DataframeIcon"
- priority: int = 100
- name: str = "dataframe_processor"
-```
-
- * `display_name`: A user-friendly name shown in the visual editor.
- * `description`: A brief description of what your component does.
- * `documentation`: A link to detailed documentation.
- * `icon`: An emoji or icon identifier for visual representation.
- For more information, see [Contributing bundles](/contributing-bundles#add-the-bundle-to-the-frontend-folder).
- * `priority`: An optional integer to control display order. Lower numbers appear first.
- * `name`: An optional internal identifier that defaults to class name.
-
-4. Define the component's interface by specifying its inputs, outputs, and the method that will process them. The method name must match the `method` field in your outputs list, as this is how Langflow knows which method to call to generate each output.
-This example creates a minimal custom component skeleton.
-For more information on creating your custom component, see [Create custom Python components](/components-custom-components).
-```python
-from typing import Any, Dict, Optional
-import pandas as pd
-from langflow.custom import Component
-
-class DataFrameProcessor(Component):
- """A component that processes pandas DataFrames with various operations."""
-
- display_name: str = "DataFrame Processor"
- description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
- documentation: str = "https://docs.langflow.org/components-dataframe-processor"
- icon: str = "DataframeIcon"
- priority: int = 100
- name: str = "dataframe_processor"
-
- # input and output lists
- inputs = []
- outputs = []
-
- # method
- def some_output_method(self):
- return ...
-```
-
-5. Save the `dataframe_processor.py` to the `src > backend > base > langflow > components` directory.
+5. Save the `dataframe_processor.py` to the `src/lfx/src/lfx/components` directory.
This example adds a data component, so add it to the `/data` directory.
-6. Add the component dependency to `src > backend > base > langflow > components > data > __init__.py` as `from .DataFrameProcessor import DataFrameProcessor`.
-You can view the [/data/__init__.py](https://github.com/langflow-ai/langflow/blob/dev/src/backend/base/langflow/components/data/__init__.py) in the Langflow repository.
+6. Add the component dependency to `src/lfx/src/lfx/components/data/__init__.py` as `from .DataFrameProcessor import DataFrameProcessor`.
+You can view the [/data/__init__.py](https://github.com/langflow-ai/langflow/blob/dev/src/lfx/src/lfx/components/data/__init__.py) in the Langflow repository.
7. Add any new dependencies to the [pyproject.toml](https://github.com/langflow-ai/langflow/blob/main/pyproject.toml#L20) file.
diff --git a/docs/docs/Deployment/deployment-docker.mdx b/docs/docs/Deployment/deployment-docker.mdx
index bbd92921bdb0..2ac099baf76a 100644
--- a/docs/docs/Deployment/deployment-docker.mdx
+++ b/docs/docs/Deployment/deployment-docker.mdx
@@ -163,7 +163,7 @@ FROM langflowai/langflow:latest
WORKDIR /app
# Copy your modified memory component
-COPY src/backend/base/langflow/components/helpers/memory.py /tmp/memory.py
+COPY src/lfx/src/lfx/components/helpers/memory.py /tmp/memory.py
# Find the site-packages directory where langflow is installed
RUN python -c "import site; print(site.getsitepackages()[0])" > /tmp/site_packages.txt
@@ -198,7 +198,7 @@ To use this custom Dockerfile, do the following:
In this example, Langflow expects `memory.py` to exist in the `/helpers` directory, so you create a directory in that location.
```bash
- mkdir -p src/backend/base/langflow/components/helpers
+ mkdir -p src/lfx/src/lfx/components/helpers
```
3. Place your modified `memory.py` file in the `/helpers` directory.
diff --git a/docs/docs/Develop/api-keys-and-authentication.mdx b/docs/docs/Develop/api-keys-and-authentication.mdx
index 8329fe757e6b..3558178f8c71 100644
--- a/docs/docs/Develop/api-keys-and-authentication.mdx
+++ b/docs/docs/Develop/api-keys-and-authentication.mdx
@@ -27,6 +27,7 @@ You can use Langflow API keys to interact with Langflow programmatically.
By default, most Langflow API endpoints, such as `/v1/run/$FLOW_ID`, require authentication with a Langflow API key.
+To require API key authentication for flow webhook endpoints, use the [`LANGFLOW_WEBHOOK_AUTH_ENABLE`](/webhook#require-authentication-for-webhooks) environment variable.
To configure authentication for Langflow MCP servers, see [Use Langflow as an MCP server](/mcp-server).
### Langflow API key permissions
diff --git a/docs/docs/Develop/concepts-file-management.mdx b/docs/docs/Develop/concepts-file-management.mdx
index 51fb1dc0d63a..81047b501829 100644
--- a/docs/docs/Develop/concepts-file-management.mdx
+++ b/docs/docs/Develop/concepts-file-management.mdx
@@ -46,28 +46,28 @@ To modify this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment va
## Use files in a flow
-To use files in your Langflow file management system in a flow, add a component that accepts file input to your flow, such as the **File** component.
+To use files in your Langflow file management system in a flow, add a component that accepts file input to your flow, such as the **Read File** component.
-For example, add a **File** component to your flow, click **Select files**, and then select files from the **My Files** list.
+For example, add a **Read File** component to your flow, click **Select files**, and then select files from the **My Files** list.
-This list includes all files in your server's file management system, but you can only select [file types that are supported by the **File** component](/components-data#file).
+This list includes all files in your server's file management system, but you can only select [file types that are supported by the **Read File** component](/components-data#file).
If you need another file type, you must use a different component that supports that file type, or you need to convert it to a supported type before uploading it.
-For more information about the **File** component and other data loading components, see [Data components](/components-data).
+For more information about the **Read File** component and other data loading components, see [Data components](/components-data).
### Load files at runtime
You can use preloaded files in your flows, and you can load files at runtime, if your flow accepts file input.
To enable file input in your flow, do the following:
-1. Add a [**File** component](/components-data#file) to your flow.
+1. Add a [**Read File** component](/components-data#file) to your flow.
2. Click **Share**, select **API access**, and then click **Input Schema** to add [`tweaks`](/concepts-publish#input-schema) to the request payload in the flow's automatically generated code snippets.
3. Expand the **File** section, find the **Files** row, and then enable **Expose Input** to allow the parameter to be set at runtime through the Langflow API.
4. Close the **Input Schema** pane to return to the **API access** pane.
-The payload in each code snippet now includes `tweaks` with your **File** component's ID and the `path` key that you enabled in **Input Schema**:
+The payload in each code snippet now includes `tweaks` with your **Read File** component's ID and the `path` key that you enabled in **Input Schema**:
```json
"tweaks": {
diff --git a/docs/docs/Develop/data-types.mdx b/docs/docs/Develop/data-types.mdx
index 640ac14786e8..7ae03e998f2c 100644
--- a/docs/docs/Develop/data-types.mdx
+++ b/docs/docs/Develop/data-types.mdx
@@ -140,7 +140,7 @@ For information about the underlying Python classes that produce `Embeddings`, s
The `LanguageModel` type is a specific data type that can be produced by language model components and accepted by components that use an LLM.
When you change a language model component's output type from **Model Response** to **Language Model**, the component's output port changes from a **Message** port to a **Language Model** port .
-Then, you connect the outgoing **Language Model** port to a **Language Model** input port on a compatible component, such as a **Smart Function** component.
+Then, you connect the outgoing **Language Model** port to a **Language Model** input port on a compatible component, such as a **Smart Transform** component.
For more information about using these components in flows and toggling `LanguageModel` output, see [Language model components](/components-models#language-model-output-types).
diff --git a/docs/docs/Flows/lfx.mdx b/docs/docs/Flows/lfx.mdx
new file mode 100644
index 000000000000..a91ab63295af
--- /dev/null
+++ b/docs/docs/Flows/lfx.mdx
@@ -0,0 +1,376 @@
+---
+title: Run flows with Langflow Executor (LFX)
+slug: /lfx-stateless-flows
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+The Langflow Executor (LFX) is a command-line tool that serves and runs flows statelessly from [flow JSON files](/concepts-flows-import) with minimal dependencies.
+
+Flows are run without the flow builder UI or database, and any flow dependencies are automatically added to complete the run.
+The flow graph is stored in memory at all times, so there is less overhead for loading the graph from a database.
+Running a flow with LFX is similar to running Langflow with the [`--backend-only` option](/environment-variables#server) enabled, but even more lightweight, because the full Langflow package and its dependencies don't need to be installed.
+
+Use LFX to share flows with other developers, test flows in different environments, and run flows in production applications without requiring the full Langflow UI or database setup.
+
+LFX includes three commands for working with flows:
+
+* [`lfx serve`](#serve): This command starts a FastAPI server hosting a Langflow API endpoint with your flow available at `/flows/{flow_id}/run`.
+* [`lfx run`](#run): This command executes a flow locally and returns the results to `stdout`.
+* [`lfx check`](#check): This command checks if flows have outdated components and updates them (similar to the version check in the UI).
+
+## Prerequisites
+
+- Install [Python](https://www.python.org/downloads/release/python-3100/)
+- Install [uv](https://docs.astral.sh/uv/getting-started/installation/)
+- Create or download a [flow JSON file](/concepts-flows)
+- Create an [OpenAI API key](https://platform.openai.com/api-keys)
+- Create a [Langflow API key](/api-keys-and-authentication)
+
+## Install LFX
+
+LFX can be installed in multiple ways.
+
+<Tabs>
+<TabItem value="repo" label="Clone the repository" default>
+
+1. Clone the Langflow repository:
+
+    ```bash
+    git clone https://github.com/langflow-ai/langflow
+    ```
+
+2. Change directory to `langflow/src/lfx`:
+
+    ```bash
+    cd langflow/src/lfx
+    ```
+
+3. Run LFX commands using `uv run`:
+
+    ```bash
+    uv run lfx serve simple-agent-flow.json
+    ```
+
+</TabItem>
+<TabItem value="pypi" label="Install from PyPI">
+
+1. Create and activate a virtual environment:
+
+    ```bash
+    uv venv lfx-venv
+    source lfx-venv/bin/activate
+    ```
+
+2. Install the LFX package from PyPI:
+
+    ```bash
+    uv pip install lfx
+    ```
+
+3. Run LFX commands using `uv run`:
+
+    ```bash
+    uv run lfx serve simple-agent-flow.json
+    ```
+
+</TabItem>
+<TabItem value="uvx" label="Run with uvx">
+
+Run LFX without installing it using `uvx`:
+
+```bash
+uvx lfx serve simple-agent-flow.json
+```
+
+This command downloads and runs LFX in a temporary environment without permanent installation.
+
+</TabItem>
+</Tabs>
+
+## Serve the simple agent starter flow with `lfx serve` {#serve}
+
+To serve a flow as a REST API endpoint, set a `LANGFLOW_API_KEY` and run the flow JSON.
+The API key is required for security because `lfx serve` can create a publicly accessible FastAPI server.
+To create a Langflow API key, see [API keys and authentication](/api-keys-and-authentication).
+
+This example uses the **Agent** component's built-in OpenAI model, which requires an OpenAI API key.
+If you want to use a different provider, edit the model provider, model name, and credentials accordingly.
+
+1. Set up your environment variables.
+
+    <Tabs>
+    <TabItem value="env-file" label=".env file" default>
+
+    Create a `.env` file and populate it with your flow's variables.
+    The `LANGFLOW_API_KEY` is required.
+    This example assumes the flow requires an OpenAI API key.
+
+    ```bash
+    LANGFLOW_API_KEY="sk..."
+    OPENAI_API_KEY="sk-..."
+    ```
+
+    </TabItem>
+    <TabItem value="export" label="Exported variables">
+
+    Export your variables in the same terminal session where you'll start the server.
+    You must declare your variables before the server starts for the server to pick them up.
+
+    ```bash
+    export LANGFLOW_API_KEY="sk..."
+    export OPENAI_API_KEY="sk-..."
+    ```
+
+    </TabItem>
+    </Tabs>
+
+2. Start the server with your variable values.
+
+    <Tabs>
+    <TabItem value="env-file" label=".env file" default>
+
+    This example assumes your flow file and `.env` file are in the current directory:
+
+    ```bash
+    uv run lfx serve simple-agent-flow.json --env-file .env
+    ```
+
+    If your `.env` file is in a different location, provide the full or relative path:
+
+    ```bash
+    uv run lfx serve simple-agent-flow.json --env-file /path/to/.env
+    ```
+
+    </TabItem>
+    <TabItem value="export" label="Exported variables">
+
+    If you exported your variables, the command to start the server automatically picks up the values when it starts.
+
+    ```bash
+    uv run lfx serve simple-agent-flow.json
+    ```
+
+    To export new values, stop the server, export the variables, and start the server again.
+
+    </TabItem>
+    </Tabs>
+
+3. The startup process displays a `flow_id` value in the output.
+ Copy the `flow_id` to use in the test API call in the next step.
+ In this example, the `flow_id` is `c1dab29d-3364-58ef-8fef-99311d32ee42`.
+
+ ```bash
+ ╭───────────────────────────── LFX Server ─────────────────────────────╮
+ │ 🎯 Single Flow Served Successfully! │
+ │ │
+ │ Source: /Users/mendonkissling/Downloads/simple-agent-flow.json │
+ │ Server: http://127.0.0.1:8000 │
+ │ API Key: sk-... │
+ │ │
+ │ Send POST requests to: │
+ │ http://127.0.0.1:8000/flows/c1dab29d-3364-58ef-8fef-99311d32ee42/run │
+ │ │
+ │ With headers: │
+ │ x-api-key: sk-... │
+ │ │
+ │ Or query parameter: │
+ │ ?x-api-key=sk-... │
+ │ │
+ │ Request body: │
+ │ {'input_value': 'Your input message'} │
+ ╰──────────────────────────────────────────────────────────────────────╯
+ ```
+
+4. In a new terminal, export your `flow_id` and Langflow API key values as variables.
+ ```bash
+ export LANGFLOW_API_KEY="sk..."
+ export FLOW_ID="c1dab29d-3364-58ef-8fef-99311d32ee42"
+ ```
+
+5. Test the server with an API call to the `/flows/{flow_id}/run` endpoint.
+
+ ```bash
+ curl -X POST http://localhost:8000/flows/$FLOW_ID/run \
+ -H "Content-Type: application/json" \
+ -H "x-api-key: $LANGFLOW_API_KEY" \
+ -d '{"input_value": "Hello, world!"}'
+ ```
+
+ Successful response:
+ ```json
+ {
+ "result": "Hello world! 👋\n\nHow can I help you today? If you have any questions or need assistance, just let me know!",
+ "success": true,
+ "logs": "\n\n\u001b[1m> Entering new None chain...\u001b[0m\n\u001b[32;1m\u001b[1;3mHello world! 👋\n\nHow can I help you today? If you have any questions or need assistance, just let me know!\u001b[0m\n\n\u001b[1m> Finished chain.\u001b[0m\n",
+ "type": "message",
+ "component": "Chat Output"
+ }
+ ```
+
+Your flow is now running as a lightweight API endpoint, with only the flow's required dependencies and no visual builder installed.
+Users who call your endpoint don't need to install Langflow or configure their own LLM provider keys.
+
+To make your server publicly accessible, use a [tunneling service like ngrok](/deployment-public-server), or deploy to a public cloud provider such as [DigitalOcean](/deployment-nginx-ssl).
+
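+If you call the endpoint from application code instead of curl, you can build the same request in Python. The following is a minimal sketch using only the standard library; the server address, flow ID, and API key are placeholder values from the preceding example:
+
+```python
+import json
+import urllib.request
+
+def build_run_request(base_url: str, flow_id: str, api_key: str, input_value: str) -> urllib.request.Request:
+    """Build a POST request for an LFX server's /flows/{flow_id}/run endpoint."""
+    payload = json.dumps({"input_value": input_value}).encode("utf-8")
+    return urllib.request.Request(
+        url=f"{base_url}/flows/{flow_id}/run",
+        data=payload,
+        headers={"Content-Type": "application/json", "x-api-key": api_key},
+        method="POST",
+    )
+
+req = build_run_request(
+    "http://localhost:8000",
+    "c1dab29d-3364-58ef-8fef-99311d32ee42",
+    "sk...",
+    "Hello, world!",
+)
+# To send the request: response = urllib.request.urlopen(req)
+# Then read the reply with: json.loads(response.read())["result"]
+print(req.full_url)
+```
+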
+### LFX serve options
+
+| Option | Description |
+|-----------------------------------------|-----------------------------------------------------------------------------------------------|
+| `--check-variables`/`--no-check-variables` | Validate the flow's global variables against environment variables. Default: check. |
+| `--env-file` | The path to the `.env` file. |
+| `--host`, `-h` | Host to bind the server to. Default: `127.0.0.1` (localhost only). Use `0.0.0.0` to make the server accessible from other machines. |
+| `--log-level` | Set the logging level. Options are `debug`, `info`, `warning`, `error`, or `critical`. |
+| `--port`, `-p` | Port to bind the server to. Default: `8000`. |
+| `--verbose`, `-v` | Display diagnostic output. |
+
+## Run the simple agent flow with `lfx run` {#run}
+
+The `lfx run` command runs a flow from a JSON file without serving it, and the output is sent to `stdout`.
+Input to `lfx run` can be a path to a flow JSON file, inline flow JSON passed with `--flow-json`, or flow JSON read from `stdin`.
+No Langflow API key is required.
+
+This example uses the **Agent** component's built-in OpenAI model, which requires an OpenAI API key.
+If you want to use a different provider, edit the model provider, model name, and credentials accordingly.
+
+1. Export your variables in the same terminal session where you'll run the flow.
+ ```bash
+ export OPENAI_API_KEY="sk-..."
+ ```
+
+2. Run the flow from a flow JSON file.
+ ```bash
+ uv run lfx run simple-agent-flow.json "Hello world"
+ ```
+
+    This flow expects a [Message](/data-types#message) input, which is a simple text string. Because the simple agent flow includes Calculator and URL tools, it can answer questions such as `"What is 15 multiplied by 23?"` or `"Can you fetch information from https://example.com?"`.
+
+ If your flow expects multiple structured input fields, you can pass structured JSON with the `--input-value` flag. The field names must match what your flow expects:
+ ```bash
+ uv run lfx run structured-input-flow.json \
+ --input-value '{"question": "What is the weather in Paris?", "context": "weather"}'
+ ```
+
+In addition to running flows from JSON files, `lfx run` supports other input methods, which are described in the sections below.
+
+### Run flows from stdin
+
+The `--stdin` option allows you to run flows that come from dynamic sources such as APIs or databases, or when you want to modify a flow before execution.
+The command reads the flow's JSON definition from `stdin`, validates the JSON structure, and runs the flow.
+
+This example reads a flow JSON from stdin.
+Provide the input value to the flow with the `--input-value` flag.
+```bash
+cat simple-agent-flow.json | uv run lfx run --stdin \
+ --input-value "Hello world" \
+ --format json | jq '.result'
+```
+
+This example fetches a flow JSON from a remote API endpoint and runs it:
+```bash
+curl https://api.example.com/flows/my-agent-flow | uv run lfx run --stdin \
+ --input-value "Hello world"
+```
+
+Running a flow with `stdin` allows you to modify flows created in the visual builder before execution.
+This example demonstrates changing the OpenAI model to `gpt-4o` before running the flow:
+```bash
+cat simple-agent-flow.json | jq '(.data.nodes[] | select(.data.node.template.model_name.value) | .data.node.template.model_name.value) = "gpt-4o"' | \
+ uv run lfx run --stdin \
+ --input-value "Hello world" \
+ --format json | jq '.result'
+```
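+
+You can apply the same transformation in Python instead of `jq`, which is convenient when you've already parsed the flow JSON in a script. The following is a minimal sketch; the node traversal mirrors the `jq` filter above, and the flow dictionary shown is a simplified stand-in for a real export:
+
+```python
+import json
+
+def set_model_name(flow: dict, model_name: str) -> dict:
+    """Set model_name on every node template that defines it."""
+    for node in flow.get("data", {}).get("nodes", []):
+        template = node.get("data", {}).get("node", {}).get("template", {})
+        if "model_name" in template:
+            template["model_name"]["value"] = model_name
+    return flow
+
+# Simplified stand-in for a flow exported from the visual editor.
+flow = {"data": {"nodes": [
+    {"data": {"node": {"template": {"model_name": {"value": "gpt-4o-mini"}}}}},
+    {"data": {"node": {"template": {}}}},  # a node without a model_name field
+]}}
+
+updated = set_model_name(flow, "gpt-4o")
+print(json.dumps(updated))  # pipe this output to `lfx run --stdin`
+```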
+
+### Run flows with inline JSON
+
+Instead of piping from `stdin` or reading from a JSON file, you can pass the flow JSON directly as a string argument:
+```bash
+uv run lfx run --flow-json '{"data": {"nodes": [...], "edges": [...]}}' \
+ --input-value "Hello world"
+```
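+
+When the flow definition already exists as a Python object, serializing it with `json.dumps` and passing the command as an argument list avoids shell-quoting mistakes. The following sketch only builds the command without executing it; the empty flow dictionary is a stand-in:
+
+```python
+import json
+import subprocess  # used when you actually execute the command
+
+flow = {"data": {"nodes": [], "edges": []}}  # stand-in flow definition
+
+cmd = [
+    "uv", "run", "lfx", "run",
+    "--flow-json", json.dumps(flow),
+    "--input-value", "Hello world",
+]
+
+# To execute: result = subprocess.run(cmd, capture_output=True, text=True)
+print(cmd)
+```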
+
+### LFX run options
+
+| Option | Description |
+|------------------------------------------------|--------------------------------------------------------------------------------------------------|
+| `--check-variables`/`--no-check-variables` | Validates the flow's global variables. Default: check. |
+| `--flow-json` | Loads inline JSON flow content as a string. |
+| `--format`, `-f` | Output format. Accepts `json`, `text`, `message`, or `result`. Default: `json`. |
+| `--input-value` | Input value to pass to the graph. |
+| `--stdin` | Read JSON flow content from `stdin`. |
+| `--timing` | Include detailed timing information in output. |
+| `--verbose`, `-v` | Show basic progress information and diagnostic output. |
+| `-vv` | Show detailed progress and debug information. |
+| `-vvv` | Show full debugging output including component logs. |
+
+### Use LFX run to create an application
+
+In addition to running flows from JSON files, you can use `lfx run` with Python scripts that define flows programmatically.
+This approach allows you to create flows directly in Python code without the visual builder.
+
+For a complete example of creating an agent flow programmatically using LFX components, see the [Complete Agent Example on PyPI](https://pypi.org/project/lfx/0.1.13/#complete-agent-example).
+
+## Check and update outdated flow components with `lfx check` {#check}
+
+The `lfx check` command checks if a flow JSON file contains outdated components.
+`lfx check` is similar to the [version check feature](/concepts-components#component-versions) available in the Langflow UI, but can be run from the command line.
+
+To check a flow file for outdated components, run the `lfx check` command.
+The command checks the flow for outdated components and displays information about any components that need to be updated.
+If outdated components are found, the command reports them but does not modify the flow file.
+
+ ```bash
+ uv run lfx check simple-agent-flow.json
+ ```
+
+ Result:
+ ```result
+ Built lfx @ file:///Users/mendonkissling/Documents/GitHub/langflow/src/lfx
+ Uninstalled 29 packages in 342ms
+ Installed 29 packages in 49ms
+
+ Checking flow: simple-agent-flow.json
+ Total nodes: 5
+ Outdated components: 0
+ ✅ All components are up to date!
+ ```
+
+To check and automatically apply safe updates to the flow file, include the `--update` flag with the `lfx check` command.
+
+ ```bash
+ uv run lfx check simple-agent-flow.json --update
+ ```
+
+To check and apply all updates, including breaking changes, include the `--force` flag with the `lfx check` command.
+
+ ```bash
+ uv run lfx check simple-agent-flow.json --update --force
+ ```
+
+To check multiple flow files at once, pass them as arguments to the `lfx check` command.
+
+ ```bash
+ uv run lfx check flow1.json flow2.json flow3.json
+ ```
+
+To check a flow interactively, with prompts for each component update, include the `--interactive` flag with the `lfx check` command:
+
+ ```bash
+ uv run lfx check simple-agent-flow.json --interactive
+ ```
+
+To check a flow and save the updates to a new flow file, include the `--output` flag with a file path to a `.json` file.
+
+ ```bash
+ uv run lfx check simple-agent-flow.json --update --output updated-flow.json
+ ```
+
+### LFX check options
+
+| Option | Description |
+|------------------------------------------------|--------------------------------------------------------------------------------------------------|
+| `--update` | Apply safe updates automatically without prompting. |
+| `--force` | Apply all updates including breaking changes. Use with caution and test thoroughly. |
+| `--interactive`, `-i` | Prompt for each component update individually. |
+| `--output`, `-o` | Output file path for the updated flow (defaults to input file when updates are applied). |
+| `--verbose`, `-v` | Show detailed information about component updates and changes. |
\ No newline at end of file
diff --git a/docs/docs/Flows/webhook.mdx b/docs/docs/Flows/webhook.mdx
index dc4533472007..693c0524b356 100644
--- a/docs/docs/Flows/webhook.mdx
+++ b/docs/docs/Flows/webhook.mdx
@@ -88,6 +88,12 @@ For the preceding example, the parsed payload would be a string like `ID: 12345
Typically, you won't manually trigger the **Webhook** component.
To learn about triggering flows with payloads from external applications, see the video tutorial [How to Use Webhooks in Langflow](https://www.youtube.com/watch?v=IC1CAtzFRE0).
+## Require authentication for webhooks {#require-authentication-for-webhooks}
+
+By default, webhooks run as the flow owner without authentication (`LANGFLOW_WEBHOOK_AUTH_ENABLE=False`).
+
+If you want to require API key authentication for webhooks, set `LANGFLOW_WEBHOOK_AUTH_ENABLE=True`.
+
## Troubleshoot flows with Webhook components
Use the following information to help address common issues that can occur with the **Webhook** component.
diff --git a/docs/docs/Support/release-notes.mdx b/docs/docs/Support/release-notes.mdx
index 6d529dd33b19..aab77b2bed59 100644
--- a/docs/docs/Support/release-notes.mdx
+++ b/docs/docs/Support/release-notes.mdx
@@ -47,6 +47,51 @@ To avoid the impact of potential breaking changes and test new versions, the Lan
If you made changes to your flows in the isolated installation, you might want to export and import those flows back to your upgraded primary installation so you don't have to repeat the component upgrade process.
+## 1.7.0
+
+Highlights of this release include the following changes.
+For all changes, see the [Changelog](https://github.com/langflow-ai/langflow/releases).
+
+### New features and enhancements
+
+- Langflow Executor (LFX)
+
+ Langflow Executor (LFX) is a new command-line tool for serving and running flows statelessly from JSON files without requiring the full Langflow UI or database setup. Use `lfx serve` to create lightweight REST API endpoints from your flows, or `lfx run` to execute flows locally and get results immediately. LFX automatically installs flow dependencies and runs flows with minimal overhead. For more information, see [Run flows with Langflow Executor (LFX)](/lfx-stateless-flows).
+
+- Webhook authentication
+
+  Added the `LANGFLOW_WEBHOOK_AUTH_ENABLE` environment variable for authenticating requests to the [`/webhook` endpoint](/api-flows-run#webhook-run-flow). When `LANGFLOW_WEBHOOK_AUTH_ENABLE=True`, webhook endpoints require API key authentication and validate that the authenticated user owns the flow being executed. When it is `False`, no Langflow API key is required, and all requests to the webhook endpoint are treated as being sent by the flow owner.
+
+- Changes to read/write file components
+
+  The **Save File** component was renamed to **Write File**, and it can now save to S3 and Google Drive.
+  The **File** component was renamed to **Read File**.
+  Both components support **Tool Mode**.
+
+- New integrations, bundles, and components:
+
+ New filter operator for **DataFrame Operations** component
+
+  The [**DataFrame Operations** component](/components-processing#dataframe-operations) now includes a `not contains` filter operator.
+  Use it to clean data by extracting only records that _don't_ contain specific values.
+  For example, you can filter out invalid email addresses that don't contain `@`.
+
+ New JSON operations for **Data Operations** component
+
+  The [**Data Operations** component](/components-processing#data-operations) now includes two operations for advanced JSON data manipulation.
+  The **Path Selection** operation extracts values from nested JSON structures, and the **JQ Expression** operation uses the [`jq`](https://jqlang.org/) query language to perform advanced JSON filtering, projections, and transformations.
+
+ New [**Smart Router** component](/components-logic#smart-router)
+
+ New [**Mock Data** component](/components-data#mock-data)
+
+ New [**CometAPI** bundle](/bundles-cometapi)
+
+ New [**Docling Remote VLM** component](/bundles-docling#docling-remote-vlm)
+
+### Deprecations
+
+
## 1.6.0
Highlights of this release include the following changes.
@@ -127,9 +172,9 @@ This is expected behavior.
- Advanced document parsing with built-in Docling support
- The **File** component supports advanced parsing with the Docling library.
+ The **Read File** component supports advanced parsing with the Docling library.
- To make it easier to use the [**Docling** components](/bundles-docling) and the **File** component's new advanced parsing feature, the Docling dependency is now included with Langflow for all operating systems except macOS Intel (x86_64).
+ To make it easier to use the [**Docling** components](/bundles-docling) and the **Read File** component's new advanced parsing feature, the Docling dependency is now included with Langflow for all operating systems except macOS Intel (x86_64).
For more information, see [Advanced parsing](/components-data#advanced-parsing).
diff --git a/docs/docs/Tutorials/chat-with-files.mdx b/docs/docs/Tutorials/chat-with-files.mdx
index aec642dac093..e88aa7f00c05 100644
--- a/docs/docs/Tutorials/chat-with-files.mdx
+++ b/docs/docs/Tutorials/chat-with-files.mdx
@@ -21,7 +21,7 @@ This tutorial uses an OpenAI LLM. If you want to use a different provider, you n
## Create a flow that accepts file input
-To ingest files, your flow must have a **File** component attached to a component that receives input, such as a **Prompt Template** or **Agent** component.
+To ingest files, your flow must have a **Read File** component attached to a component that receives input, such as a **Prompt Template** or **Agent** component.
The following steps modify the **Basic Prompting** template to accept file input:
@@ -47,15 +47,15 @@ To do this, edit the **Template** field, and then replace the default prompt wit
You can use any string to name your template variables.
These strings become the names of the fields (input ports) on the **Prompt Template** component.
- For this tutorial, the variables are named after the components that connect to them: **chat-input** for the **Chat Input** component and **file** for the **File** component.
+ For this tutorial, the variables are named after the components that connect to them: **chat-input** for the **Chat Input** component and **file** for the **Read File** component.
:::
-5. Add a [**File** component](/components-data#file) to the flow, and then connect the **Raw Content** output port to the **Prompt Template** component's **file** input port.
+5. Add a [**Read File** component](/components-data#file) to the flow, and then connect the **Raw Content** output port to the **Prompt Template** component's **file** input port.
To connect ports, click and drag from one port to the other.
- You can add files directly to the **File** component to pre-load input before running the flow, or you can load files at runtime. The next section of this tutorial covers runtime file uploads.
+ You can add files directly to the **Read File** component to pre-load input before running the flow, or you can load files at runtime. The next section of this tutorial covers runtime file uploads.
- At this point your flow has five components. The **Chat Input** and **File** components are connected to the **Prompt Template** component's input ports. Then, the **Prompt Template** component's output port is connected to the **Language Model** component's input port. Finally, the **Language Model** component's output port is connected to the **Chat Output** component, which returns the final response to the user.
+ At this point your flow has five components. The **Chat Input** and **Read File** components are connected to the **Prompt Template** component's input ports. Then, the **Prompt Template** component's output port is connected to the **Language Model** component's input port. Finally, the **Language Model** component's output port is connected to the **Chat Output** component, which returns the final response to the user.

@@ -77,7 +77,7 @@ For help with constructing file upload requests in Python, JavaScript, and curl,
* `LANGFLOW_SERVER_ADDRESS`: Your Langflow server's domain. The default value is `127.0.0.1:7860`. You can get this value from the code snippets on your flow's [**API access** pane](/concepts-publish#api-access).
* `FLOW_ID`: Your flow's UUID or custom endpoint name. You can get this value from the code snippets on your flow's [**API access** pane](/concepts-publish#api-access).
- * `FILE_COMPONENT_ID`: The UUID of the **File** component in your flow, such as `File-KZP68`. To find the component ID, open your flow in Langflow, click the **File** component, and then click **Controls**. The component ID is at the top of the **Controls** pane.
+ * `FILE_COMPONENT_ID`: The UUID of the **Read File** component in your flow, such as `File-KZP68`. To find the component ID, open your flow in Langflow, click the **Read File** component, and then click **Controls**. The component ID is at the top of the **Controls** pane.
* `CHAT_INPUT`: The message you want to send to the Chat Input of your flow, such as `Evaluate this resume for a job opening in my Marketing department.`
* `FILE_NAME` and `FILE_PATH`: The name and path to the local file that you want to send to your flow.
* `LANGFLOW_API_KEY`: A valid [Langflow API key](/api-keys-and-authentication).
@@ -144,7 +144,7 @@ For help with constructing file upload requests in Python, JavaScript, and curl,
The first request uploads a file, such as `fake-resume.txt`, to your Langflow server at the `/v2/files` endpoint. This request returns a file path that can be referenced in subsequent Langflow requests, such as `02791d46-812f-4988-ab1c-7c430214f8d5/fake-resume.txt`
The second request sends a chat message to the Langflow flow at the `/v1/run/` endpoint.
- The `tweaks` parameter includes the path to the uploaded file as the variable `uploaded_path`, and sends this file directly to the **File** component.
+ The `tweaks` parameter includes the path to the uploaded file as the variable `uploaded_path`, and sends this file directly to the **Read File** component.
3. Save and run the script to send the requests and test the flow.
@@ -190,9 +190,9 @@ To continue building on this tutorial, try these next steps.
### Process multiple files loaded at runtime
-To process multiple files in a single flow run, add a separate **File** component for each file you want to ingest. Then, modify your script to upload each file, retrieve each returned file path, and then pass a unique file path to each **File** component ID.
+To process multiple files in a single flow run, add a separate **Read File** component for each file you want to ingest. Then, modify your script to upload each file, retrieve each returned file path, and then pass a unique file path to each **Read File** component ID.
-For example, you can modify `tweaks` to accept multiple **File** components.
+For example, you can modify `tweaks` to accept multiple **Read File** components.
The following code is just an example; it isn't working code:
```python
@@ -211,7 +211,7 @@ def chat_with_flow(input_message, file_paths):
tweaks[component_id] = {"path": file_path}
```
-You can also use a [**Directory** component](/components-data#directory) to load all files in a directory or pass an archive file to the **File** component.
+You can also use a [**Directory** component](/components-data#directory) to load all files in a directory or pass an archive file to the **Read File** component.
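+
+Building on the example above, the pairing of component IDs to uploaded file paths can be written as a small helper. This is still illustrative code with hypothetical component IDs and file paths:
+
+```python
+def build_tweaks(component_ids, file_paths):
+    """Map one uploaded file path to each Read File component ID."""
+    if len(component_ids) != len(file_paths):
+        raise ValueError("Provide exactly one file path per component ID")
+    return {cid: {"path": path} for cid, path in zip(component_ids, file_paths)}
+
+# Hypothetical component IDs and uploaded file paths.
+tweaks = build_tweaks(
+    ["File-AAaaa", "File-BBbbb"],
+    ["uuid-1/resume.txt", "uuid-2/cover-letter.txt"],
+)
+print(tweaks)
+```
+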
### Upload external files at runtime
@@ -219,10 +219,10 @@ To upload files from another machine that isn't your local environment, your Lan
### Preload files outside the chat session
-You can use the **File** component to load files anywhere in a flow, not just in a chat session.
+You can use the **Read File** component to load files anywhere in a flow, not just in a chat session.
-In the visual editor, you can preload files to the **File** component by selecting them from your local machine or [Langflow file management](/concepts-file-management).
+In the visual editor, you can preload files to the **Read File** component by selecting them from your local machine or [Langflow file management](/concepts-file-management).
For example, you can preload an instructions file for a prompt template, or you can preload a vector store with documents that you want to query in a Retrieval Augmented Generation (RAG) flow.
-For more information about the **File** component and other data loading components, see [Data components](/components-data).
\ No newline at end of file
+For more information about the **Read File** component and other data loading components, see [Data components](/components-data).
\ No newline at end of file
diff --git a/docs/docs/Tutorials/chat-with-rag.mdx b/docs/docs/Tutorials/chat-with-rag.mdx
index a1dd7f1b00d5..4e98885abe72 100644
--- a/docs/docs/Tutorials/chat-with-rag.mdx
+++ b/docs/docs/Tutorials/chat-with-rag.mdx
@@ -40,7 +40,7 @@ This tutorial demonstrates how you can use Langflow to create a chatbot applicat
3. Optional: Replace both **Astra DB** vector store components with a **Chroma DB** or another vector store component of your choice.
This tutorial uses Chroma DB.
- The **Load Data Flow** should have **File**, **Split Text**, **Embedding Model**, vector store (such as **Chroma DB**), and **Chat Output** components:
+ The **Load Data Flow** should have **Read File**, **Split Text**, **Embedding Model**, vector store (such as **Chroma DB**), and **Chat Output** components:

@@ -62,7 +62,7 @@ In situations where many users load data or you need to load data programmatical
-1. In your RAG chatbot flow, click the **File** component, and then click **File**.
+1. In your RAG chatbot flow, click the **Read File** component, and then click **File**.
2. Select the local file you want to upload, and then click **Open**.
The file is loaded to your Langflow server.
3. To load the data into your vector database, click the vector store component, and then click **Run component** to run the selected component and all prior dependent components.
@@ -140,6 +140,7 @@ This tutorial uses JavaScript for demonstration purposes.
const readline = require('readline');
const { LangflowClient } = require('@datastax/langflow-client');
+// pragma: allowlist nextline secret
const API_KEY = 'LANGFLOW_API_KEY';
const SERVER = 'LANGFLOW_SERVER_ADDRESS';
const FLOW_ID = 'FLOW_ID';
diff --git a/docs/docs/_partial-basic-component-structure.mdx b/docs/docs/_partial-basic-component-structure.mdx
new file mode 100644
index 000000000000..9d4bb37ab36a
--- /dev/null
+++ b/docs/docs/_partial-basic-component-structure.mdx
@@ -0,0 +1,70 @@
+1. Create a Python file for your component, such as `dataframe_processor.py`.
+
+2. Write your component as an object of the [`Component`](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/custom/custom_component/component.py) class. Create a new class that inherits from `Component` and override the base class's methods.
+
+ :::tip Backwards compatibility
+   The `lfx` import path replaced `from langflow.custom import Component` in Langflow 1.7, but the original import path is still compatible and works the same way.
+ :::
+
+ ```python
+ from typing import Any, Dict, Optional
+ import pandas as pd
+ from lfx.custom.custom_component.component import Component
+
+ class DataFrameProcessor(Component):
+ """A component that processes pandas DataFrames with various operations."""
+ ```
+
+3. Define class attributes to provide information about your custom component:
+
+ ```python
+ from typing import Any, Dict, Optional
+ import pandas as pd
+ from lfx.custom.custom_component.component import Component
+
+ class DataFrameProcessor(Component):
+ """A component that processes pandas DataFrames with various operations."""
+
+ display_name: str = "DataFrame Processor"
+ description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
+ documentation: str = "https://docs.langflow.org/components-dataframe-processor"
+ icon: str = "DataframeIcon"
+ priority: int = 100
+ name: str = "dataframe_processor"
+ ```
+
+ * `display_name`: A user-friendly name shown in the visual editor.
+ * `description`: A brief description of what your component does.
+ * `documentation`: A link to detailed documentation.
+ * `icon`: An emoji or icon identifier for visual representation.
+   Langflow uses [Lucide](https://lucide.dev/icons) for icons. To assign an icon to your component, set the `icon` attribute to the name of a Lucide icon as a string, such as `icon = "file-text"`. Langflow renders icons from the Lucide library automatically.
+ For more information, see [Contributing bundles](/contributing-bundles#add-the-bundle-to-the-frontend-folder).
+ * `priority`: An optional integer to control display order. Lower numbers appear first.
+   * `name`: An optional internal identifier that defaults to the class name.
+4. Define the component's interface by specifying its inputs, outputs, and the method that will process them. The method name must match the `method` field in your outputs list, as this is how Langflow knows which method to call to generate each output.
+
+ This example creates a minimal custom component skeleton.
+
+ ```python
+ from typing import Any, Dict, Optional
+ import pandas as pd
+ from lfx.custom.custom_component.component import Component
+
+ class DataFrameProcessor(Component):
+ """A component that processes pandas DataFrames with various operations."""
+
+ display_name: str = "DataFrame Processor"
+ description: str = "Process and transform pandas DataFrames with various operations like filtering, sorting, and aggregation."
+ documentation: str = "https://docs.langflow.org/components-dataframe-processor"
+ icon: str = "DataframeIcon"
+ priority: int = 100
+ name: str = "dataframe_processor"
+
+ # input and output lists
+ inputs = []
+ outputs = []
+
+ # method
+ def some_output_method(self):
+ return ...
+ ```
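
To see how the `outputs` list and its `method` field fit together, here is a self-contained sketch of the wiring. It uses minimal stand-in `Component` and `Output` classes rather than the real `lfx` implementations, so the names and behavior here are illustrative only; the point is that each declared output names the method Langflow calls to produce it:

```python
class Output:
    """Stand-in: declares one component output and the method that produces it."""
    def __init__(self, display_name: str, name: str, method: str):
        self.display_name = display_name
        self.name = name
        self.method = method

class Component:
    """Stand-in base class: resolves a declared output to its method by name."""
    inputs: list = []
    outputs: list = []

    def run_output(self, output_name: str):
        for output in self.outputs:
            if output.name == output_name:
                # Call the method whose name is stored in the output's `method` field.
                return getattr(self, output.method)()
        raise ValueError(f"No output named {output_name!r}")

class DataFrameProcessor(Component):
    """A component skeleton whose single output is wired to process_rows."""
    display_name = "DataFrame Processor"
    name = "dataframe_processor"

    outputs = [
        Output(display_name="Processed Rows", name="processed", method="process_rows"),
    ]

    def process_rows(self):
        # A real component would read its inputs here; this sketch returns fixed data.
        return [{"value": 1}, {"value": 2}]

processor = DataFrameProcessor()
print(processor.run_output("processed"))
```

Because the output's `method` field is `"process_rows"`, requesting the `processed` output dispatches to that method; renaming the method without updating the `method` field breaks the lookup, which is why the two must match.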
diff --git a/docs/docs/_partial-vector-rag-flow.mdx b/docs/docs/_partial-vector-rag-flow.mdx
index 95efab2c822e..16af219302f3 100644
--- a/docs/docs/_partial-vector-rag-flow.mdx
+++ b/docs/docs/_partial-vector-rag-flow.mdx
@@ -48,7 +48,7 @@ For example, if your embedding model has a token limit of 512, then the **Chunk
5. In the **Language Model** component, enter your OpenAI API key, or select a different provider and model to use for the chat portion of the flow.
6. Run the **Load Data** subflow to populate your vector store.
-In the **File** component, select one or more files, and then click **Run component** on the vector store component in the **Load Data** subflow.
+In the **Read File** component, select one or more files, and then click **Run component** on the vector store component in the **Load Data** subflow.
The **Load Data** subflow loads files from your local machine, chunks them, generates embeddings for the chunks, and then stores the chunks and their embeddings in the vector database.
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 395c5d50c123..85518d8fc06d 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -71,6 +71,11 @@ module.exports = {
id: "Flows/concepts-flows-import",
label: "Import and export flows"
},
+ {
+ type: "doc",
+ id: "Flows/lfx",
+ label: "Run flows with Langflow Executor (LFX)"
+ },
],
},
{
@@ -305,6 +310,7 @@ module.exports = {
"Components/bundles-clickhouse",
"Components/bundles-cloudflare",
"Components/bundles-cohere",
+ "Components/bundles-cometapi",
"Components/bundles-couchbase",
"Components/bundles-datastax",
"Components/bundles-deepseek",
@@ -425,9 +431,9 @@ module.exports = {
"Contributing/contributing-community",
"Contributing/contributing-how-to-contribute",
"Contributing/contributing-components",
+ "Contributing/contributing-bundles",
"Contributing/contributing-component-tests",
"Contributing/contributing-templates",
- "Contributing/contributing-bundles",
],
},
{
diff --git a/docs/static/img/agent-component.png b/docs/static/img/agent-component.png
index 795c2c7756e6..cc99d74a1bf1 100644
Binary files a/docs/static/img/agent-component.png and b/docs/static/img/agent-component.png differ
diff --git a/docs/static/img/agent-example-add-chat.png b/docs/static/img/agent-example-add-chat.png
index 3e5439f2d46a..5387fab24fc4 100644
Binary files a/docs/static/img/agent-example-add-chat.png and b/docs/static/img/agent-example-add-chat.png differ
diff --git a/docs/static/img/agent-example-add-tools.png b/docs/static/img/agent-example-add-tools.png
index 3293dd51e799..a76ef843ae21 100644
Binary files a/docs/static/img/agent-example-add-tools.png and b/docs/static/img/agent-example-add-tools.png differ
diff --git a/docs/static/img/agent-example-agent-as-tool.png b/docs/static/img/agent-example-agent-as-tool.png
index 0be4a3796756..34ba81319940 100644
Binary files a/docs/static/img/agent-example-agent-as-tool.png and b/docs/static/img/agent-example-agent-as-tool.png differ
diff --git a/docs/static/img/agent-example-run-flow-as-tool.png b/docs/static/img/agent-example-run-flow-as-tool.png
index 1578e2b2b8e7..52b53be30017 100644
Binary files a/docs/static/img/agent-example-run-flow-as-tool.png and b/docs/static/img/agent-example-run-flow-as-tool.png differ
diff --git a/docs/static/img/api-pane.png b/docs/static/img/api-pane.png
index 63542e9f2393..8ea721ed615c 100644
Binary files a/docs/static/img/api-pane.png and b/docs/static/img/api-pane.png differ
diff --git a/docs/static/img/component-astra-db-json-tool.png b/docs/static/img/component-astra-db-json-tool.png
index 48bdfa4c5612..26c60280d69e 100644
Binary files a/docs/static/img/component-astra-db-json-tool.png and b/docs/static/img/component-astra-db-json-tool.png differ
diff --git a/docs/static/img/component-data-operations-select-key.png b/docs/static/img/component-data-operations-select-key.png
index d391a47addeb..1c964885a16c 100644
Binary files a/docs/static/img/component-data-operations-select-key.png and b/docs/static/img/component-data-operations-select-key.png differ
diff --git a/docs/static/img/component-groq.png b/docs/static/img/component-groq.png
index b6aeea14ed9d..490cad5d88ff 100644
Binary files a/docs/static/img/component-groq.png and b/docs/static/img/component-groq.png differ
diff --git a/docs/static/img/component-ollama-embeddings-chromadb.png b/docs/static/img/component-ollama-embeddings-chromadb.png
index c4765f7eaa43..b2cb95e51aa3 100644
Binary files a/docs/static/img/component-ollama-embeddings-chromadb.png and b/docs/static/img/component-ollama-embeddings-chromadb.png differ
diff --git a/docs/static/img/component-ollama-model.png b/docs/static/img/component-ollama-model.png
index d19c5bc54302..b2cb95e51aa3 100644
Binary files a/docs/static/img/component-ollama-model.png and b/docs/static/img/component-ollama-model.png differ
diff --git a/docs/static/img/connect-data-components-to-agent.png b/docs/static/img/connect-data-components-to-agent.png
index 72dbc89d4974..ff9bb150fe52 100644
Binary files a/docs/static/img/connect-data-components-to-agent.png and b/docs/static/img/connect-data-components-to-agent.png differ
diff --git a/docs/static/img/ds-lf-docs.png b/docs/static/img/ds-lf-docs.png
deleted file mode 100644
index 46fc70429c86..000000000000
Binary files a/docs/static/img/ds-lf-docs.png and /dev/null differ
diff --git a/docs/static/img/ds-lf-zoom.png b/docs/static/img/ds-lf-zoom.png
deleted file mode 100644
index 53f78b4c616b..000000000000
Binary files a/docs/static/img/ds-lf-zoom.png and /dev/null differ
diff --git a/docs/static/img/hero.png b/docs/static/img/hero.png
deleted file mode 100644
index 3118ea3ecb43..000000000000
Binary files a/docs/static/img/hero.png and /dev/null differ
diff --git a/docs/static/img/integrations.png b/docs/static/img/integrations.png
deleted file mode 100644
index 8ded830775ea..000000000000
Binary files a/docs/static/img/integrations.png and /dev/null differ
diff --git a/docs/static/img/my-projects.png b/docs/static/img/my-projects.png
index 7c64d6002cd8..ddcee08614de 100644
Binary files a/docs/static/img/my-projects.png and b/docs/static/img/my-projects.png differ
diff --git a/docs/static/img/playground-response.png b/docs/static/img/playground-response.png
deleted file mode 100644
index 15f7cc4cfb57..000000000000
Binary files a/docs/static/img/playground-response.png and /dev/null differ
diff --git a/docs/static/img/prompt-component-with-multiple-inputs.png b/docs/static/img/prompt-component-with-multiple-inputs.png
index ca736500038c..29b47ae6d41a 100644
Binary files a/docs/static/img/prompt-component-with-multiple-inputs.png and b/docs/static/img/prompt-component-with-multiple-inputs.png differ
diff --git a/docs/static/img/prompt-component.png b/docs/static/img/prompt-component.png
index d56c3cd58bff..29b47ae6d41a 100644
Binary files a/docs/static/img/prompt-component.png and b/docs/static/img/prompt-component.png differ
diff --git a/docs/static/img/quickstart-simple-agent-flow.png b/docs/static/img/quickstart-simple-agent-flow.png
index 8f855dbb33b4..2e5fc57006c5 100644
Binary files a/docs/static/img/quickstart-simple-agent-flow.png and b/docs/static/img/quickstart-simple-agent-flow.png differ
diff --git a/docs/static/img/workspace-basic-prompting.png b/docs/static/img/workspace-basic-prompting.png
index c13ab8a26840..d9b4325ada2d 100644
Binary files a/docs/static/img/workspace-basic-prompting.png and b/docs/static/img/workspace-basic-prompting.png differ
diff --git a/docs/static/img/workspace.png b/docs/static/img/workspace.png
index 0d13ca2e82f3..eab12987d1f2 100644
Binary files a/docs/static/img/workspace.png and b/docs/static/img/workspace.png differ