Changes from all commits · 36 commits
dcb26ee  langflow-webhook-auth-enable (mendonk, Oct 2, 2025)
60ca49e  add-not-contains-filter-operator (mendonk, Oct 2, 2025)
8b8cf22  does-not-contains-operator (mendonk, Oct 2, 2025)
6ba6126  less-redundant-explanation (mendonk, Oct 2, 2025)
282165c  docs: add jq and path selection to data operations (#10083) (mendonk, Oct 2, 2025)
c64f9ef  Merge branch 'main' into docs-1.7-release (mendonk, Oct 2, 2025)
e8685bf  Merge branch 'main' into docs-1.7-release (mendonk, Oct 6, 2025)
f6d3a8b  smart transform historical names (aimurphy, Oct 6, 2025)
2643550  Merge branch 'main' into docs-1.7-release (aimurphy, Oct 6, 2025)
cea2d8b  change back to smart transform (aimurphy, Oct 6, 2025)
24ba1b6  jq expression capitalization/package name (aimurphy, Oct 6, 2025)
04afb78  small edit for clarity of not contains operator (aimurphy, Oct 6, 2025)
26897ab  Merge branch 'main' into docs-1.7-release (aimurphy, Oct 6, 2025)
9d35b93  read/write file component name changes (aimurphy, Oct 6, 2025)
71becc9  docs: add smart router component (#10097) (mendonk, Oct 7, 2025)
9374bf0  Merge branch 'main' into docs-1.7-release (mendonk, Oct 7, 2025)
10d293c  Merge branch 'main' into docs-1.7-release (mendonk, Oct 7, 2025)
47fc582  docs: screenshot audit (#10166) (mendonk, Oct 9, 2025)
d86cba8  docs: component paths updates for lfx (#10130) (mendonk, Oct 14, 2025)
5499647  docs: auto-add projects as MCP servers (#10096) (mendonk, Oct 14, 2025)
2f171e0  Merge branch 'main' into docs-1.7-release (mendonk, Oct 15, 2025)
845020d  Merge branch 'main' into docs-1.7-release (mendonk, Oct 15, 2025)
45bc439  docs: amazon bedrock converse (#10289) (mendonk, Oct 15, 2025)
653f2c8  docs 1.7 release: add mock data component (#10288) (mendonk, Oct 15, 2025)
4e467c5  Merge branch 'main' into docs-1.7-release (mendonk, Oct 17, 2025)
28a384c  Merge branch 'main' into docs-1.7-release (mendonk, Oct 28, 2025)
5e7eb83  docs: update custom component docs (#10323) (mendonk, Oct 28, 2025)
055f992  Merge branch 'main' into docs-1.7-release (mendonk, Oct 29, 2025)
9250a5f  docs: add cometapi back for 1.7 release (#10445) (mendonk, Oct 29, 2025)
83dc0fc  Merge branch 'main' into docs-1.7-release (mendonk, Nov 3, 2025)
e329cba  docs: add back docling remote vlm for release 1.7 (#10489) (mendonk, Nov 3, 2025)
514ef3c  Merge branch 'main' into docs-1.7-release (mendonk, Nov 3, 2025)
642edd3  Merge branch 'main' into docs-1.7-release (mendonk, Nov 4, 2025)
8c89672  add-content (mendonk, Nov 4, 2025)
091d0d8  add-entry-to-release-notes (mendonk, Nov 4, 2025)
4e3730c  lfx-check-usage (mendonk, Nov 17, 2025)
docs/docs/API-Reference/api-files.mdx (10 changes: 5 additions & 5 deletions)

@@ -235,7 +235,7 @@
 :::

 This endpoint uploads files to your Langflow server's file management system.
-To use an uploaded file in a flow, send the file path to a flow with a [**File** component](/components-data#file).
+To use an uploaded file in a flow, send the file path to a flow with a [**Read File** component](/components-data#file).

 The default file limit is 1024 MB. To configure this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables).

@@ -265,10 +265,10 @@
 }
 ```

-2. To use this file in your flow, add a **File** component to your flow.
+2. To use this file in your flow, add a **Read File** component to your flow.
 This component loads files into flows from your local machine or Langflow file management.

-3. Run the flow, passing the `path` to the `File` component in the `tweaks` object:
+3. Run the flow, passing the `path` to the `Read-File` component in the `tweaks` object:

 ```text
 curl --request POST \
@@ -280,7 +280,7 @@
     "output_type": "chat",
     "input_type": "text",
     "tweaks": {
-      "File-1olS3": {
+      "Read-File-1olS3": {
         "path": [
           "07e5b864-e367-4f52-b647-a48035ae7e5e/3a290013-fe1e-4d3d-a454-cacae81288f3.pdf"
         ]
@@ -289,7 +289,7 @@
   }'
 ```

-To get the `File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.
+To get the `Read-File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor.

 If the file path is valid, the flow runs successfully.
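
As context for the upload limit referenced in this file, here is a minimal sketch of raising `LANGFLOW_MAX_FILE_SIZE_UPLOAD` before starting the server. The `2048` value and the `uv run langflow run` start command are illustrative assumptions, not part of this diff:

```bash
# Raise the file upload limit from the default 1024 MB to 2048 MB.
# The variable is read once at server startup.
export LANGFLOW_MAX_FILE_SIZE_UPLOAD=2048

# Start Langflow with the new limit applied (assumes a uv-based install).
uv run langflow run
```
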
docs/docs/API-Reference/api-monitor.mdx (4 changes: 2 additions & 2 deletions)

@@ -18,9 +18,9 @@

 The Vertex build endpoints (`/monitor/builds`) are exclusively for **Playground** functionality.

-When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L143). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component.
+When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L130). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component.

-The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built.
+The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built.

 ### Get Vertex builds
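
For reference, a hedged sketch of calling the Vertex build monitoring endpoint this file describes. The `GET /v1/monitor/builds` path follows this page's conventions, but the host, flow ID, and API key are placeholder assumptions:

```bash
# Sketch: list the Vertex builds recorded for one flow's Playground runs.
# FLOW_ID and LANGFLOW_API_KEY are placeholders you must supply.
curl --request GET \
  --url "http://localhost:7860/api/v1/monitor/builds?flow_id=$FLOW_ID" \
  --header "x-api-key: $LANGFLOW_API_KEY"
```
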
docs/docs/Agents/mcp-server.mdx (19 changes: 16 additions & 3 deletions)

@@ -25,10 +25,20 @@

 ## Serve flows as MCP tools {#select-flows-to-serve}

-Each [Langflow project](/concepts-flows#projects) has an MCP server that exposes the project's flows as tools for use by MCP clients.
+When you create a [Langflow project](/concepts-flows#projects), Langflow automatically adds the project to your MCP server's configuration and makes the project's flows available as MCP tools.

-By default, all flows in a project are exposed as tools on the project's MCP server.
-You can change the exposed flows and tool metadata by managing the MCP server settings:
+If your Langflow server has authentication enabled (`AUTO_LOGIN=false`), the project's MCP server is automatically configured with API key authentication, and a new API key is generated specifically for accessing the new project's flows.
+For more information, see [MCP server authentication](#authentication).
+
+
+### Prevent automatic MCP server configuration for Langflow projects
+
+To disable automatic MCP server configuration for new projects, set the `LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS` environment variable to `false`.
+For more information, see [MCP server environment variables](#mcp-server-environment-variables).
+
+### Selectively enable and disable MCP servers for Langflow projects
+
+With or without automatic MCP server configuration enabled, you can selectively enable and disable the projects that are exposed as MCP tools:

 1. Click the **MCP Server** tab on the [**Projects** page](/concepts-flows#projects), or, when editing a flow, click **Share**, and then select **MCP Server**.

@@ -207,6 +217,8 @@

 Each [Langflow project](/concepts-flows#projects) has its own MCP server with its own MCP server authentication settings.

+When you create a new project, Langflow automatically configures authentication for the project's MCP server based on your Langflow server's authentication settings. If authentication is enabled (`AUTO_LOGIN=false`), the project is automatically configured with API key authentication, and a new API key is generated for accessing the project's flows.
+
 To configure authentication for a Langflow MCP server, go to the **Projects** page in Langflow, click the **MCP Server** tab, click <Icon name="Fingerprint" aria-hidden="true"/> **Edit Auth**, and then select your preferred authentication method:

 <Tabs groupId="auth-type">

@@ -287,6 +299,7 @@
 | `LANGFLOW_MCP_SERVER_ENABLE_PROGRESS_NOTIFICATIONS` | Boolean | `False` | If `true`, Langflow MCP servers send progress notifications. |
 | `LANGFLOW_MCP_SERVER_TIMEOUT` | Integer | `20` | The number of seconds to wait before an MCP server operation expires due to poor connectivity or long-running requests. |
 | `LANGFLOW_MCP_MAX_SESSIONS_PER_SERVER` | Integer | `10` | Maximum number of MCP sessions to keep per unique server. |
+| `LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS` | Boolean | `True` | Whether to automatically add newly created projects to the user's MCP servers configuration. If `false`, projects must be manually added to MCP servers. |

 {/* The anchor on this section (deploy-your-server-externally) is currently a link target in the Langflow UI. Do not change. */}
 ### Deploy your Langflow MCP server externally {#deploy-your-server-externally}
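
The MCP server environment variables listed above can be combined at startup. A minimal sketch, assuming a local deployment; the specific values shown are illustrative:

```bash
# Opt new projects out of automatic MCP server registration.
export LANGFLOW_ADD_PROJECTS_TO_MCP_SERVERS=false

# Allow slower MCP server operations before timing out (seconds).
export LANGFLOW_MCP_SERVER_TIMEOUT=30

# Start Langflow with these MCP behaviors applied.
uv run langflow run
```
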
docs/docs/Components/bundles-aiml.mdx (2 changes: 1 addition & 1 deletion)

@@ -13,7 +13,7 @@
 ## AI/ML API text generation

 This component creates a `ChatOpenAI` model instance using the AI/ML API.
-The output is exclusively a **Language Model** ([`LanguageModel`](/data-types#languagemodel)) that you can connect to another LLM-driven component, such as a **Smart Function** component.
+The output is exclusively a **Language Model** ([`LanguageModel`](/data-types#languagemodel)) that you can connect to another LLM-driven component, such as a **Smart Transform** component.

 For more information, see the [AI/ML API Langflow integration documentation](https://docs.aimlapi.com/integrations/langflow) and [Language model components](/components-models).
docs/docs/Components/bundles-amazon.mdx (51 changes: 37 additions & 14 deletions)

@@ -10,34 +10,39 @@

 This page describes the components that are available in the **Amazon** bundle.

-## Amazon Bedrock
+## Amazon Bedrock Converse

-This component generates text using [Amazon Bedrock LLMs](https://docs.aws.amazon.com/bedrock).
+This component generates text using [Amazon Bedrock LLMs](https://docs.aws.amazon.com/bedrock) with the Bedrock Converse API.

 It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
-Specifically, the **Language Model** output is an instance of [`ChatBedrock`](https://docs.langchain.com/oss/python/integrations/chat/bedrock) configured according to the component's parameters.
+Specifically, the **Language Model** output is an instance of [`ChatBedrockConverse`](https://docs.langchain.com/oss/python/integrations/chat/bedrock) configured according to the component's parameters.

-Use the **Language Model** output when you want to use an Amazon Bedrock model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Amazon Bedrock model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.

 For more information, see [Language model components](/components-models).

-### Amazon Bedrock parameters
+### Amazon Bedrock Converse parameters

 <PartialParams />

 | Name | Type | Description |
 |------|------|-------------|
-| input | String | Input parameter. The input string for text generation. |
+| input_value | String | Input parameter. The input string for text generation. |
 | system_message | String | Input parameter. A system message to pass to the model. |
 | stream | Boolean | Input parameter. Whether to stream the response. Only works in chat. Default: `false`. |
-| model_id | String | Input parameter. The Amazon Bedrock model to use. |
-| aws_access_key_id | SecretString | Input parameter. AWS Access Key for authentication. |
-| aws_secret_access_key | SecretString | Input parameter. AWS Secret Key for authentication. |
-| aws_session_token | SecretString | Input parameter. The session key for your AWS account. |
-| credentials_profile_name | String | Input parameter. Name of the AWS credentials profile to use. |
+| model_id | String | Input parameter. The Amazon Bedrock model to use. |
+| aws_access_key_id | SecretString | Input parameter. AWS Access Key for authentication. Required. |
+| aws_secret_access_key | SecretString | Input parameter. AWS Secret Key for authentication. Required. |
+| aws_session_token | SecretString | Input parameter. The session key for your AWS account. Only needed for temporary credentials. |
+| credentials_profile_name | String | Input parameter. Name of the AWS credentials profile to use. If not provided, the default profile is used. |
 | region_name | String | Input parameter. AWS region where your Bedrock resources are located. Default: `us-east-1`. |
 | model_kwargs | Dictionary | Input parameter. Additional keyword arguments to pass to the model. |
 | endpoint_url | String | Input parameter. Custom endpoint URL for a Bedrock service. |
+| temperature | Float | Input parameter. Controls randomness in output. Higher values make output more random. Default: `0.7`. |
+| max_tokens | Integer | Input parameter. Maximum number of tokens to generate. Default: `4096`. |
+| top_p | Float | Input parameter. Nucleus sampling parameter. Controls diversity of output. Default: `0.9`. |
+| top_k | Integer | Input parameter. Limits the number of highest probability vocabulary tokens to consider. Note: Not all models support top_k. Default: `250`. |
+| disable_streaming | Boolean | Input parameter. If `true`, disables streaming responses. Useful for batch processing. Default: `false`. |
+| additional_model_fields | Dictionary | Input parameter. Additional model-specific parameters for fine-tuning behavior. |

 ## Amazon Bedrock Embeddings

@@ -62,7 +67,7 @@
 ## S3 Bucket Uploader

 The **S3 Bucket Uploader** component uploads files to an Amazon S3 bucket.
-It is designed to process `Data` input from a **File** or **Directory** component.
+It is designed to process `Data` input from a **Read File** or **Directory** component.
 If you upload `Data` from other components, test the results before running the flow in production.

 Requires the `boto3` package, which is included in your Langflow installation.

@@ -81,4 +86,22 @@
 | **Strategy for file upload** | String | Input parameter. The file upload strategy. **Store Data** (default) iterates over `Data` inputs, logs the file path and text content, and uploads each file to the specified S3 bucket if both file path and text content are available. **Store Original File** iterates through the list of data inputs, retrieves the file path from each data item, uploads the file to the specified S3 bucket if the file path is available, and logs the file path being uploaded. |
 | **Data Inputs** | Data | Input parameter. The `Data` input to iterate over and upload as files in the specified S3 bucket. |
 | **S3 Prefix** | String | Input parameter. Optional prefix (folder path) within the S3 bucket where files will be uploaded. |
-| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
+| **Strip Path** | Boolean | Input parameter. Whether to strip the file path when uploading. Default: `false`. |
+
+## Legacy Amazon components
+
+import PartialLegacy from '@site/docs/_partial-legacy.mdx';
+
+<PartialLegacy />
+
+The following Amazon components are in legacy status:
+
+<details>
+<summary>Amazon Bedrock</summary>
+
+The **Amazon Bedrock** component was deprecated in favor of the **Amazon Bedrock Converse** component, which uses the Bedrock Converse API for conversation handling.
+
+To use Amazon Bedrock models in your flows, use the [**Amazon Bedrock Converse**](#amazon-bedrock-converse) component instead.
+
+</details>
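
Because the new **Amazon Bedrock Converse** parameters are regular component inputs, they can be overridden through the same `tweaks` pattern shown in the api-files.mdx changes above. A hedged sketch; the component ID `AmazonBedrockConverse-AbC12`, the model ID, and all values are hypothetical:

```bash
# Sketch: override Amazon Bedrock Converse inputs for a single run.
# Read the real component ID from the visual editor or the Read flow endpoint.
curl --request POST \
  --url "http://localhost:7860/api/v1/run/$FLOW_ID" \
  --header "Content-Type: application/json" \
  --data '{
    "input_value": "Summarize this document.",
    "tweaks": {
      "AmazonBedrockConverse-AbC12": {
        "model_id": "anthropic.claude-3-5-sonnet-20240620-v1:0",
        "region_name": "us-east-1",
        "temperature": 0.2,
        "max_tokens": 1024
      }
    }
  }'
```
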
docs/docs/Components/bundles-anthropic.mdx (2 changes: 1 addition & 1 deletion)

@@ -19,7 +19,7 @@
 It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
 Specifically, the **Language Model** output is an instance of [`ChatAnthropic`](https://docs.langchain.com/oss/python/integrations/chat/anthropic) configured according to the component's parameters.

-Use the **Language Model** output when you want to use an Anthropic model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Anthropic model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.

 For more information, see [Language model components](/components-models).
docs/docs/Components/bundles-azure.mdx (2 changes: 1 addition & 1 deletion)

@@ -17,7 +17,7 @@
 It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).
 Specifically, the **Language Model** output is an instance of [`AzureChatOpenAI`](https://docs.langchain.com/oss/python/integrations/chat/azure_chat_openai) configured according to the component's parameters.

-Use the **Language Model** output when you want to use an Azure OpenAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use an Azure OpenAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.

 For more information, see [Language model components](/components-models).
docs/docs/Components/bundles-baidu.mdx (2 changes: 1 addition & 1 deletion)

@@ -15,6 +15,6 @@

 It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).

-Use the **Language Model** output when you want to use a Qianfan model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Qianfan model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.

 For more information, see [Language model components](/components-models) and the [Qianfan documentation](https://github.com/baidubce/bce-qianfan-sdk).
docs/docs/Components/bundles-cohere.mdx (2 changes: 1 addition & 1 deletion)

@@ -18,7 +18,7 @@
 This component generates text using Cohere's language models.

 It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).

-Use the **Language Model** output when you want to use a Cohere model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a Cohere model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.

 For more information, see [Language model components](/components-models).
docs/docs/Components/bundles-deepseek.mdx (2 changes: 1 addition & 1 deletion)

@@ -18,7 +18,7 @@

 It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)).

-Use the **Language Model** output when you want to use a DeepSeek model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component.
+Use the **Language Model** output when you want to use a DeepSeek model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component.

 For more information, see [Language model components](/components-models).