diff --git a/docs/docs/API-Reference/api-files.mdx b/docs/docs/API-Reference/api-files.mdx index 48b8abfb4856..5c1a04a6ac97 100644 --- a/docs/docs/API-Reference/api-files.mdx +++ b/docs/docs/API-Reference/api-files.mdx @@ -5,6 +5,9 @@ slug: /api-files Use the `/files` endpoints to move files between your local machine and Langflow. +All `/files` endpoints (both `/v1/files` and `/v2/files`) require authentication with a Langflow API key. +You can only access files that belong to your own user account, even as a superuser. + ## Differences between `/v1/files` and `/v2/files` There are two versions of the `/files` endpoints. @@ -235,7 +238,7 @@ To send image files to your flows through the API, see [Upload image files (v1)] ::: This endpoint uploads files to your Langflow server's file management system. -To use an uploaded file in a flow, send the file path to a flow with a [**File** component](/components-data#file). +To use an uploaded file in a flow, send the file path to a flow with a [**Read File** component](/read-file). The default file limit is 1024 MB. To configure this value, change the `LANGFLOW_MAX_FILE_SIZE_UPLOAD` [environment variable](/environment-variables). @@ -265,10 +268,10 @@ The default file limit is 1024 MB. To configure this value, change the `LANGFLOW } ``` -2. To use this file in your flow, add a **File** component to your flow. +2. To use this file in your flow, add a **Read File** component to your flow. This component loads files into flows from your local machine or Langflow file management. -3. Run the flow, passing the `path` to the `File` component in the `tweaks` object: +3. 
Run the flow, passing the `path` to the `Read-File` component in the `tweaks` object: ```text curl --request POST \ @@ -280,7 +283,7 @@ This component loads files into flows from your local machine or Langflow file m "output_type": "chat", "input_type": "text", "tweaks": { - "File-1olS3": { + "Read-File-1olS3": { "path": [ "07e5b864-e367-4f52-b647-a48035ae7e5e/3a290013-fe1e-4d3d-a454-cacae81288f3.pdf" ] @@ -289,7 +292,7 @@ This component loads files into flows from your local machine or Langflow file m }' ``` - To get the `File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor. + To get the `Read-File` component's ID, call the [Read flow](/api-flows#read-flow) endpoint or inspect the component in the visual editor. If the file path is valid, the flow runs successfully. diff --git a/docs/docs/API-Reference/api-flows-run.mdx b/docs/docs/API-Reference/api-flows-run.mdx index 2cc6646aa507..fbc78631beb1 100644 --- a/docs/docs/API-Reference/api-flows-run.mdx +++ b/docs/docs/API-Reference/api-flows-run.mdx @@ -175,7 +175,7 @@ curl -X POST \ Use the `/webhook` endpoint to start a flow by sending an HTTP `POST` request. :::tip -After you add a [**Webhook** component](/components-data#webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook curl** tab to get an automatically generated `POST /webhook` request for your flow. +After you add a [**Webhook** component](/webhook) to a flow, open the [**API access** pane](/concepts-publish), and then click the **Webhook curl** tab to get an automatically generated `POST /webhook` request for your flow. For more information, see [Trigger flows with webhooks](/webhook). 
::: diff --git a/docs/docs/API-Reference/api-monitor.mdx b/docs/docs/API-Reference/api-monitor.mdx index b0b62136db35..0f42edc6a17e 100644 --- a/docs/docs/API-Reference/api-monitor.mdx +++ b/docs/docs/API-Reference/api-monitor.mdx @@ -18,9 +18,9 @@ For more information, see the following: The Vertex build endpoints (`/monitor/builds`) are exclusively for **Playground** functionality. -When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L143). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component. +When you run a flow in the **Playground**, Langflow calls the `/build/$FLOW_ID/flow` endpoint in [chat.py](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/api/v1/chat.py#L130). This call retrieves the flow data, builds a graph, and executes the graph. As each component (or node) is executed, the `build_vertex` function calls `build_and_run`, which may call the individual components' `def_build` method, if it exists. If a component doesn't have a `def_build` function, the build still returns a component. -The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/backend/base/langflow/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. 
When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built. +The `build` function allows components to execute logic at runtime. For example, the [**Recursive Character Text Splitter** component](https://github.com/langflow-ai/langflow/blob/main/src/lfx/src/lfx/components/langchain_utilities/recursive_character.py) is a child of the `LCTextSplitterComponent` class. When text needs to be processed, the parent class's `build` method is called, which creates a `RecursiveCharacterTextSplitter` object and uses it to split the text according to the defined parameters. The split text is then passed on to the next component. This all occurs when the component is built. ### Get Vertex builds diff --git a/docs/docs/API-Reference/api-reference-api-examples.mdx b/docs/docs/API-Reference/api-reference-api-examples.mdx index 83b79c583280..718fd993ece0 100644 --- a/docs/docs/API-Reference/api-reference-api-examples.mdx +++ b/docs/docs/API-Reference/api-reference-api-examples.mdx @@ -173,12 +173,14 @@ curl -X GET \ ### Get configuration -Returns configuration details for your Langflow deployment: +Returns configuration details for your Langflow deployment. +Requires a [Langflow API key](/api-keys-and-authentication). ```bash curl -X GET \ "$LANGFLOW_SERVER_URL/api/v1/config" \ - -H "accept: application/json" + -H "accept: application/json" \ + -H "x-api-key: $LANGFLOW_API_KEY" ```
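+
+If setting a request header is inconvenient for your client, the API key can also be supplied in the query string. A sketch of the same request using the query-parameter form, assuming your deployment accepts `x-api-key` as a query parameter:
+
+```bash
+curl -X GET \
+  "$LANGFLOW_SERVER_URL/api/v1/config?x-api-key=$LANGFLOW_API_KEY" \
+  -H "accept: application/json"
+```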
@@ -313,6 +315,7 @@ Other endpoints are helpful for specific use cases, such as administration and f * POST `/v1/custom_component`: Build a custom component from code and return its node. * POST `/v1/custom_component/update`: Update an existing custom component's build config and outputs. * POST `/v1/validate/code`: Validate a Python code snippet for a custom component. + * POST `/v1/validate/prompt`: Validate a prompt payload. @@ -364,14 +367,22 @@ The following endpoints are most often used when contributing to the Langflow co * MCP servers: The following endpoints are for managing Langflow MCP servers and MCP server connections. They aren't typically called directly; instead, they are used to drive internal functionality in the Langflow frontend and when running flows that call MCP servers. - * HEAD `/v1/mcp/sse`: Health check for MCP SSE. - * GET `/v1/mcp/sse`: Open SSE stream for MCP server events. - * POST `/v1/mcp/`: Post messages to the MCP server. +  Langflow MCP servers support both streamable HTTP and SSE transport. + * HEAD `/v1/mcp/streamable`: Health check for streamable HTTP MCP. + * GET `/v1/mcp/streamable`: Open streamable HTTP connection for MCP server. + * POST `/v1/mcp/streamable`: Post messages to the MCP server via streamable HTTP. + * DELETE `/v1/mcp/streamable`: Close streamable HTTP connection. + * HEAD `/v1/mcp/sse` (LEGACY): Health check for MCP SSE. + * GET `/v1/mcp/sse` (LEGACY): Open SSE stream for MCP server events. + * POST `/v1/mcp/` (LEGACY): Post messages to the MCP server. * GET `/v1/mcp/project/{project_id}`: List MCP-enabled tools and project auth settings. - * HEAD `/v1/mcp/project/{project_id}/sse`: Health check for project SSE. - * GET `/v1/mcp/project/{project_id}/sse`: Open project-scoped MCP SSE. - * POST `/v1/mcp/project/{project_id}`: Post messages to project MCP server. - * POST `/v1/mcp/project/{project_id}/` (trailing slash): Same as above.
+ * HEAD `/v1/mcp/project/{project_id}/streamable`: Health check for project streamable HTTP MCP. + * GET `/v1/mcp/project/{project_id}/streamable`: Open project-scoped streamable HTTP connection. + * POST `/v1/mcp/project/{project_id}/streamable`: Post messages to project MCP server via streamable HTTP. + * DELETE `/v1/mcp/project/{project_id}/streamable`: Close project streamable HTTP connection. + * HEAD `/v1/mcp/project/{project_id}/sse` (LEGACY): Health check for project SSE. + * GET `/v1/mcp/project/{project_id}/sse` (LEGACY): Open project-scoped MCP SSE. + * POST `/v1/mcp/project/{project_id}` (LEGACY): Post messages to project MCP server. * PATCH `/v1/mcp/project/{project_id}`: Update MCP settings for flows and project auth settings. * POST `/v1/mcp/project/{project_id}/install`: Install MCP client config for Cursor/Windsurf/Claude (local only). * GET `/v1/mcp/project/{project_id}/installed`: Check which clients have MCP config installed. @@ -381,6 +392,7 @@ They aren't typically called directly; instead, they are used to drive internal * POST `/v1/custom_component`: Build a custom component from code and return its node. * POST `/v1/custom_component/update`: Update an existing custom component's build config and outputs. * POST `/v1/validate/code`: Validate a Python code snippet for a custom component. + * POST `/v1/validate/prompt`: Validate a prompt payload. diff --git a/docs/docs/Agents/agents-tools.mdx b/docs/docs/Agents/agents-tools.mdx index 7007acd1208c..d3c269ab2751 100644 --- a/docs/docs/Agents/agents-tools.mdx +++ b/docs/docs/Agents/agents-tools.mdx @@ -198,7 +198,7 @@ inputs = [ ## Use flows as tools -An agent can use your other flows as tools with the [**Run Flow** component](/components-logic#run-flow). +An agent can use your other flows as tools with the [**Run Flow** component](/run-flow). 1. Add a **Run Flow** component to your flow. 2. Select the flow you want the agent to use as a tool. 
diff --git a/docs/docs/Agents/agents.mdx b/docs/docs/Agents/agents.mdx index a3796f7b406f..bf8cfdf1f7c6 100644 --- a/docs/docs/Agents/agents.mdx +++ b/docs/docs/Agents/agents.mdx @@ -32,7 +32,7 @@ For more information, see [Agent component parameters](#agent-component-paramete 4. Enter a valid credential for your selected model provider. Make sure that the credential has permission to call the selected model. -5. Add [**Chat Input** and **Chat Output** components](/components-io) to your flow, and then connect them to the **Agent** component. +5. Add [**Chat Input** and **Chat Output** components](/chat-input-and-output) to your flow, and then connect them to the **Agent** component. At this point, you have created a basic LLM-based chat flow that you can test in the @@ -338,9 +353,9 @@ The default address is `http://localhost:6274`. - - **Transport Type**: Select **SSE**. + - **Transport Type**: Select **Streamable HTTP**. - - **URL**: Enter the Langflow MCP server's `sse` endpoint. For example: + - **URL**: Enter the Langflow MCP server's endpoint. For example: ```bash - http://localhost:7860/api/v1/mcp/project/d359cbd4-6fa2-4002-9d53-fa05c645319c/sse + http://localhost:7860/api/v1/mcp/project/d359cbd4-6fa2-4002-9d53-fa05c645319c/streamable ``` diff --git a/docs/docs/Components/api-request.mdx b/docs/docs/Components/api-request.mdx new file mode 100644 index 000000000000..c14eed47a251 --- /dev/null +++ b/docs/docs/Components/api-request.mdx @@ -0,0 +1,40 @@ +--- +title: API Request +slug: /api-request +--- + +import Icon from "@site/src/components/icon"; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; +import PartialParams from '@site/docs/_partial-hidden-params.mdx'; +import PartialDevModeWindows from '@site/docs/_partial-dev-mode-windows.mdx'; + +The **API Request** component constructs and sends HTTP requests using URLs or curl commands: + +* **URL mode**: Enter one or more comma-separated URLs, and then select the method for the request to each URL.
+* **curl mode**: Enter the curl command to execute. + +You can enable additional request options and fields in the component's parameters. + +Returns a [`Data` object](/data-types#data) containing the response. + +For provider-specific API components, see
diff --git a/docs/docs/Components/bundles-groq.mdx b/docs/docs/Components/bundles-groq.mdx index f82e590620ee..28a6fbdf2449 100644 --- a/docs/docs/Components/bundles-groq.mdx +++ b/docs/docs/Components/bundles-groq.mdx @@ -18,7 +18,7 @@ This component generates text using Groq's language models. It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Specifically, the **Language Model** output is an instance of [`ChatGroq`](https://docs.langchain.com/oss/python/integrations/chat/groq) configured according to the component's parameters. -Use the **Language Model** output when you want to use a Groq model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use a Groq model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-huggingface.mdx b/docs/docs/Components/bundles-huggingface.mdx index e038bc440f9f..1a97f4637fb3 100644 --- a/docs/docs/Components/bundles-huggingface.mdx +++ b/docs/docs/Components/bundles-huggingface.mdx @@ -20,7 +20,7 @@ Authentication is required. This component can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Specifically, the **Language Model** output is an instance of [`ChatHuggingFace`](https://docs.langchain.com/oss/python/integrations/chat/huggingface) configured according to the component's parameters. -Use the **Language Model** output when you want to use a Hugging Face model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. 
+Use the **Language Model** output when you want to use a Hugging Face model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-ibm.mdx b/docs/docs/Components/bundles-ibm.mdx index 2042b31cc35c..28957d93f427 100644 --- a/docs/docs/Components/bundles-ibm.mdx +++ b/docs/docs/Components/bundles-ibm.mdx @@ -45,7 +45,7 @@ You can use the **IBM watsonx.ai** component anywhere you need a language model The **IBM watsonx.ai** component can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use an IBM watsonx.ai model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). -The `LanguageModel` output from the **IBM watsonx.ai** component is an instance of `[ChatWatsonx](https://docs.langchain.com/oss/python/integrations/chat/ibm_watsonx)` configured according to the [component's parameters](#ibm-watsonxai-parameters). +The `LanguageModel` output from the **IBM watsonx.ai** component is an instance of [`ChatWatsonx`](https://docs.langchain.com/oss/python/integrations/chat/ibm_watsonx) configured according to the [component's parameters](#ibm-watsonxai-parameters). diff --git a/docs/docs/Components/bundles-langchain.mdx b/docs/docs/Components/bundles-langchain.mdx index 588609abbfc3..596c5e3dee28 100644 --- a/docs/docs/Components/bundles-langchain.mdx +++ b/docs/docs/Components/bundles-langchain.mdx @@ -106,7 +106,7 @@ For more information, see the [LangChain SQL agent documentation](https://docs.l The LangChain **SQL Database** component establishes a connection to an SQL database.
-This component is different from the [**SQL Database** core component](/components-data#sql-database), which executes SQL queries on SQLAlchemy-compatible databases. +This component is different from the [**SQL Database** core component](/sql-database), which executes SQL queries on SQLAlchemy-compatible databases. ## Text Splitters @@ -183,4 +183,4 @@ The following LangChain components are in legacy status: * **Vector Store Info/Agent** * **VectorStoreRouterAgent** -To replace these components, consider other components in the **LangChain** bundle or general Langflow components, such as the [**Agent** component](/components-agents) or the [**SQL Database** component](/components-data#sql-database). \ No newline at end of file +To replace these components, consider other components in the **LangChain** bundle or general Langflow components, such as the [**Agent** component](/components-agents) or the [**SQL Database** component](/sql-database). \ No newline at end of file diff --git a/docs/docs/Components/bundles-lmstudio.mdx b/docs/docs/Components/bundles-lmstudio.mdx index a027f4be5f48..0e5fe18c99cf 100644 --- a/docs/docs/Components/bundles-lmstudio.mdx +++ b/docs/docs/Components/bundles-lmstudio.mdx @@ -17,7 +17,7 @@ The **LM Studio** component generates text using LM Studio's local language mode It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use an LM Studio model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use an LM Studio model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). 
diff --git a/docs/docs/Components/bundles-maritalk.mdx b/docs/docs/Components/bundles-maritalk.mdx index f4d60ec4cbcb..e01499dc1912 100644 --- a/docs/docs/Components/bundles-maritalk.mdx +++ b/docs/docs/Components/bundles-maritalk.mdx @@ -18,7 +18,7 @@ The **MariTalk** component generates text using MariTalk LLMs. It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use a MariTalk model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use a MariTalk model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-mem0.mdx b/docs/docs/Components/bundles-mem0.mdx index 7ed3b994594f..3e946b0771d8 100644 --- a/docs/docs/Components/bundles-mem0.mdx +++ b/docs/docs/Components/bundles-mem0.mdx @@ -34,6 +34,6 @@ The **Mem0 Chat Memory** component retrieves and stores chat messages using Mem0 The **Mem0 Chat Memory** component can output either **Mem0 Memory** ([`Memory`](/data-types#memory)) or **Search Results** ([`Data`](/data-types#data)). You can select the output type near the component's output port. -Use **Mem0 Chat Memory** for memory storage and retrieval operations with the [**Message History** component](/components-helpers#message-history). +Use **Mem0 Chat Memory** for memory storage and retrieval operations with the [**Message History** component](/message-history). Use the **Search Results** output to retrieve specific memories based on a search query. 
\ No newline at end of file diff --git a/docs/docs/Components/bundles-mistralai.mdx b/docs/docs/Components/bundles-mistralai.mdx index b6fcd9365c76..2a829649c9b8 100644 --- a/docs/docs/Components/bundles-mistralai.mdx +++ b/docs/docs/Components/bundles-mistralai.mdx @@ -18,7 +18,7 @@ The **MistralAI** component generates text using MistralAI LLMs. It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use a MistralAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use a MistralAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-novita.mdx b/docs/docs/Components/bundles-novita.mdx index fcd43a40877c..8ffe5e79e428 100644 --- a/docs/docs/Components/bundles-novita.mdx +++ b/docs/docs/Components/bundles-novita.mdx @@ -16,7 +16,7 @@ This component generates text using [Novita's language models](https://novita.ai It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use a Novita model as the LLM for another LLM-driven component, such as a **Language Model** or **Smart Function** component. +Use the **Language Model** output when you want to use a Novita model as the LLM for another LLM-driven component, such as a **Language Model** or **Smart Transform** component. For more information, see [Language model components](/components-models). 
diff --git a/docs/docs/Components/bundles-nvidia.mdx b/docs/docs/Components/bundles-nvidia.mdx index 54a4b037e501..682f2af25c2e 100644 --- a/docs/docs/Components/bundles-nvidia.mdx +++ b/docs/docs/Components/bundles-nvidia.mdx @@ -71,7 +71,7 @@ For more information about using embedding model components in flows, see [Embed :::tip Tokenization considerations Be aware of your embedding model's chunk size limit. Tokenization errors can occur if your text chunks are too large. -For more information, see [Tokenization errors due to chunk size](/components-processing#chunk-size). +For more information, see [Tokenization errors due to chunk size](/split-text#chunk-size). ::: ## NVIDIA Rerank @@ -153,7 +153,7 @@ For more information, see the [NV-Ingest documentation](https://nvidia.github.io | extract_infographics | Extract Infographics | Extract infographics from document. Default: `false`. | | text_depth | Text Depth | The level at which text is extracted. Options: 'document', 'page', 'block', 'line', 'span'. Default: `page`. | | split_text | Split Text | Split text into smaller chunks. Default: `true`. | -| chunk_size | Chunk Size | The number of tokens per chunk. Default: `500`. Make sure the chunk size is compatible with your embedding model. For more information, see [Tokenization errors due to chunk size](/components-processing#chunk-size). | +| chunk_size | Chunk Size | The number of tokens per chunk. Default: `500`. Make sure the chunk size is compatible with your embedding model. For more information, see [Tokenization errors due to chunk size](/split-text#chunk-size). | | chunk_overlap | Chunk Overlap | Number of tokens to overlap from previous chunk. Default: `150`. | | filter_images | Filter Images | Filter images (see advanced options for filtering criteria). Default: `false`. | | min_image_size | Minimum Image Size Filter | Minimum image width/length in pixels. Default: `128`. 
| diff --git a/docs/docs/Components/bundles-ollama.mdx b/docs/docs/Components/bundles-ollama.mdx index 65c016b57648..8ffff1fa034e 100644 --- a/docs/docs/Components/bundles-ollama.mdx +++ b/docs/docs/Components/bundles-ollama.mdx @@ -32,7 +32,7 @@ To use the **Ollama** component in a flow, connect Langflow to your locally runn 5. Connect the **Ollama** component to other components in the flow, depending on how you want to use the model. - Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. For more information, see [Language model components](/components-models). + Language model components can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). Use the **Language Model** output when you want to use an Ollama model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). In the following example, the flow uses `LanguageModel` output to use an Ollama model as the LLM for an [**Agent** component](/components-agents). diff --git a/docs/docs/Components/bundles-openai.mdx b/docs/docs/Components/bundles-openai.mdx index 6e98e92df8d6..bbf735bb8f31 100644 --- a/docs/docs/Components/bundles-openai.mdx +++ b/docs/docs/Components/bundles-openai.mdx @@ -20,7 +20,7 @@ It provides access to the same OpenAI models that are available in the core **La It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). 
-Use the **Language Model** output when you want to use a specific OpenAI model configuration as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use a specific OpenAI model configuration as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-openrouter.mdx b/docs/docs/Components/bundles-openrouter.mdx index e35c1c52782f..ae67e0d0d165 100644 --- a/docs/docs/Components/bundles-openrouter.mdx +++ b/docs/docs/Components/bundles-openrouter.mdx @@ -18,7 +18,7 @@ This component generates text using OpenRouter's unified API for multiple AI mod It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use an OpenRouter model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use an OpenRouter model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-perplexity.mdx b/docs/docs/Components/bundles-perplexity.mdx index c46fd1230f1f..309c02d35eca 100644 --- a/docs/docs/Components/bundles-perplexity.mdx +++ b/docs/docs/Components/bundles-perplexity.mdx @@ -18,7 +18,7 @@ This component generates text using Perplexity's language models. It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). 
-Use the **Language Model** output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use a Perplexity model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-redis.mdx b/docs/docs/Components/bundles-redis.mdx index 880a6a384a41..e8cd279a0f48 100644 --- a/docs/docs/Components/bundles-redis.mdx +++ b/docs/docs/Components/bundles-redis.mdx @@ -16,7 +16,7 @@ The **Redis Chat Memory** component retrieves and stores chat messages using Red Chat memories are passed between memory storage components as the [`Memory`](/data-types#memory) data type. -For more information about using external chat memory in flows, see the [**Message History** component](/components-helpers#message-history). +For more information about using external chat memory in flows, see the [**Message History** component](/message-history). ### Redis Chat Memory parameters diff --git a/docs/docs/Components/bundles-sambanova.mdx b/docs/docs/Components/bundles-sambanova.mdx index 3bfc695e072b..37baeaca6bd9 100644 --- a/docs/docs/Components/bundles-sambanova.mdx +++ b/docs/docs/Components/bundles-sambanova.mdx @@ -18,7 +18,7 @@ This component generates text using SambaNova LLMs. It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use a SambaNova model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use a SambaNova model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. 
For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-searchapi.mdx b/docs/docs/Components/bundles-searchapi.mdx index 6ddf1981d13c..1984f1c4baf9 100644 --- a/docs/docs/Components/bundles-searchapi.mdx +++ b/docs/docs/Components/bundles-searchapi.mdx @@ -33,7 +33,7 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe). ## See also -* [**Web Search** component](/components-data#web-search) +* [**Web Search** component](/web-search) * [**Google** bundle](/bundles-google) * [**Bing** bundle](/bundles-bing) * [**DuckDuckGo** bundle](/bundles-duckduckgo) \ No newline at end of file diff --git a/docs/docs/Components/bundles-serper.mdx b/docs/docs/Components/bundles-serper.mdx index 7a3d343ca636..b3899ecc93f0 100644 --- a/docs/docs/Components/bundles-serper.mdx +++ b/docs/docs/Components/bundles-serper.mdx @@ -27,7 +27,7 @@ It returns a list of search results as a [`DataFrame`](/data-types#dataframe). ## See also -* [**Web Search** component](/components-data#web-search) +* [**Web Search** component](/web-search) * [**Google** bundle](/bundles-google) * [**Bing** bundle](/bundles-bing) * [**DuckDuckGo** bundle](/bundles-duckduckgo) \ No newline at end of file diff --git a/docs/docs/Components/bundles-vertexai.mdx b/docs/docs/Components/bundles-vertexai.mdx index deda5a04ea5b..70cd4430aa87 100644 --- a/docs/docs/Components/bundles-vertexai.mdx +++ b/docs/docs/Components/bundles-vertexai.mdx @@ -20,7 +20,7 @@ The **Vertex AI** component generates text using Google Vertex AI models. It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use a Vertex AI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. 
+Use the **Language Model** output when you want to use a Vertex AI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). diff --git a/docs/docs/Components/bundles-wikipedia.mdx b/docs/docs/Components/bundles-wikipedia.mdx index 7a0b04828607..dd6a56f5a5fb 100644 --- a/docs/docs/Components/bundles-wikipedia.mdx +++ b/docs/docs/Components/bundles-wikipedia.mdx @@ -40,4 +40,4 @@ This component searches and retrieves information from Wikipedia with the [WikiM ## See also -* [**API Request** component](/components-data#api-request) \ No newline at end of file +* [**API Request** component](/api-request) \ No newline at end of file diff --git a/docs/docs/Components/bundles-xai.mdx b/docs/docs/Components/bundles-xai.mdx index 896db788a093..01bc99026ad3 100644 --- a/docs/docs/Components/bundles-xai.mdx +++ b/docs/docs/Components/bundles-xai.mdx @@ -18,7 +18,7 @@ The **xAI** component generates text using xAI models like [Grok](https://x.ai/g It can output either a **Model Response** ([`Message`](/data-types#message)) or a **Language Model** ([`LanguageModel`](/data-types#languagemodel)). -Use the **Language Model** output when you want to use an xAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Function** component. +Use the **Language Model** output when you want to use an xAI model as the LLM for another LLM-driven component, such as an **Agent** or **Smart Transform** component. For more information, see [Language model components](/components-models). 
diff --git a/docs/docs/Components/calculator.mdx b/docs/docs/Components/calculator.mdx new file mode 100644 index 000000000000..fe77b152bb5e --- /dev/null +++ b/docs/docs/Components/calculator.mdx @@ -0,0 +1,20 @@ +--- +title: Calculator +slug: /calculator +--- + +import Icon from "@site/src/components/icon"; +import Tabs from '@theme/Tabs'; +import TabItem from '@theme/TabItem'; + +The **Calculator** component performs basic arithmetic operations on mathematical expressions. +It supports addition, subtraction, multiplication, division, and exponentiation. + +For an example of using this component in a flow, see the [**Python Interpreter** component](/python-interpreter). + +## Calculator parameters + +| Name | Type | Description | +|------|------|-------------| +| expression | String | Input parameter. The arithmetic expression to evaluate, such as `4*4*(33/22)+12-20`. | +| result | Data | Output parameter. The calculation result as a [`Data` object](/data-types) containing the evaluated expression. | \ No newline at end of file diff --git a/docs/docs/Components/components-io.mdx b/docs/docs/Components/chat-input-and-output.mdx similarity index 73% rename from docs/docs/Components/components-io.mdx rename to docs/docs/Components/chat-input-and-output.mdx index 87afab3348e3..60d763c412da 100644 --- a/docs/docs/Components/components-io.mdx +++ b/docs/docs/Components/chat-input-and-output.mdx @@ -1,21 +1,11 @@ --- -title: Input / Output -slug: /components-io +title: Chat Input and Output +slug: /chat-input-and-output --- import Icon from "@site/src/components/icon"; import PartialParams from '@site/docs/_partial-hidden-params.mdx'; -Input and output components define where data enters and exits your flow, but they don't have identical functionality. - -Specifically, **Chat Input and Output** components are designed to facilitate conversational interactions where messages are exchanged in a cumulative dialogue.
-The data handled by these components includes the message text plus additional metadata like senders, session IDs, and timestamps. - -In contrast, **Text Input and Output** components are designed for simple string input and output that doesn't require the additional context and metadata associated with chat messages. -The data handled by these components is pared down to basic text strings. - -## Chat Input and Output {#chat-io} - :::warning **Chat Input and Output** components are required to chat with your flow in the **Playground**. For more information, see [Test flows in the Playground](/concepts-playground). @@ -23,14 +13,14 @@ For more information, see [Test flows in the Playground](/concepts-playground). **Chat Input and Output** components are designed to handle conversational interactions in Langflow. -### Chat Input +## Chat Input The **Chat Input** component accepts text and file input, such as a chat message or a file. This data is passed to other components as [`Message` data](/data-types) containing the provided input as well as associated chat metadata, such as the sender, session ID, timestamp, and file attachments. Initial input should _not_ be provided as a complete `Message` object because the **Chat Input** component constructs the `Message` object that is then passed to other components in the flow. -#### Chat Input parameters +### Chat Input parameters @@ -71,7 +61,7 @@ message = await Message.create( -### Chat Output +## Chat Output The **Chat Output** component ingests `Message`, `Data`, or `DataFrame` data from other components, transforms it into `Message` data if needed, and then emits the final output as a chat message. For information about these data types, see [Use Langflow data types](/data-types). 
@@ -83,7 +73,7 @@ When using the Langflow API, the API response includes the **Chat Output** `Mess Langflow API responses can be extremely verbose, so your applications must include code to extract relevant data from the response to return to the user. For an example, see the [Langflow quickstart](/get-started-quickstart). -#### Chat Output parameters +### Chat Output parameters @@ -102,7 +92,7 @@ For an example, see the [Langflow quickstart](/get-started-quickstart). For information about the resulting `Message` object, including input parameters that are directly mapped to `Message` attributes, see [`Message` data](/data-types#message). -### Use Chat Input and Output components in a flow +## Use Chat Input and Output components in a flow To use the **Chat Input** and **Chat Output** components in a flow, connect them to components that accept or emit [`Message` data](/data-types#message). @@ -153,34 +143,4 @@ curl --request POST \ }' ``` -For more information, see [Trigger flows with the Langflow API](/concepts-publish). - -## Text Input and Output {#text-io} - -:::warning -**Text Input and Output** components aren't supported in the **Playground**. -Because the data isn't formatted as a chat message, the data doesn't appear in the **Playground**, and you can't chat with your flow in the **Playground**. - -If you want to chat with a flow in the **Playground**, you must use the [**Chat Input and Output** components](#chat-io). -::: - -**Text Input and Output** components are designed for flows that ingest or emit simple text strings. -These components don't support full conversational interactions. - -Passing chat-like metadata to a **Text Input and Output** component doesn't change the component's behavior; the result is still a simple text string. - -### Text Input - -The **Text Input** component accepts a text string input that is passed to other components as [`Message` data](/data-types) containing only the provided input text string in the `text` attribute. 
- -It accepts only **Text** (`input_value`), which is the text supplied as input to the component. -This can be entered directly into the component or passed as `Message` data from other components. - -Initial input _shouldn't_ be provided as a complete `Message` object because the **Text Input** component constructs the `Message` object that is then passed to other components in the flow. - -### Text Output - -The **Text Output** component ingests [`Message` data](/data-types#message) from other components, emitting only the `text` attribute in a simplified `Message` object. - -It accepts only **Text** (`input_value`), which is the text to be ingested and output as a string. -This can be entered directly into the component or passed as `Message` data from other components. \ No newline at end of file +For more information, see [Trigger flows with the Langflow API](/concepts-publish). \ No newline at end of file diff --git a/docs/docs/Components/components-agents.mdx b/docs/docs/Components/components-agents.mdx index bb3c850433f8..50897adb59b9 100644 --- a/docs/docs/Components/components-agents.mdx +++ b/docs/docs/Components/components-agents.mdx @@ -5,14 +5,14 @@ slug: /components-agents import PartialAgentsWork from '@site/docs/_partial-agents-work.mdx'; -Langflow's **Agent** and **MCP Tools** components are critical for building agent flows. -These components define the behavior and capabilities of AI agents in your flows. +Langflow's **Agent** component is critical for building agent flows. +This component defines the behavior and capabilities of AI agents in your flows. ## Examples of agent flows -For examples of flows using the **Agent** and **MCP Tools** components, see the following: +For examples of flows using the **Agent** component, see the following: * [Langflow quickstart](/get-started-quickstart): Start with the **Simple Agent** template, modify its tools, and then learn how to use an agent flow in an application. 
@@ -21,7 +21,7 @@ For examples of flows using the **Agent** and **MCP Tools** components, see the * [Use an agent as a tool](/agents-tools#use-an-agent-as-a-tool): Create a multi-agent flow. -* [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server): Use the **Agent** and **MCP Tools** components to implement the Model Context Protocol (MCP) in your flows. +* [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server): Use the **Agent** and [**MCP Tools** component](/mcp-tools) to implement the Model Context Protocol (MCP) in your flows. ## Agent component {#agent-component} @@ -29,30 +29,14 @@ The **Agent** component is the primary agent actor in your agent flows. This component uses an LLM integration to respond to input, such as a chat message or file upload. The agent can use the tools already available in the base LLM as well as additional tools that you connect to the **Agent** component's **Tools** port. -You can connect any Langflow component as a tool, including other **Agent** components and MCP servers through the [**MCP Tools** component](#mcp-connection). +You can connect any Langflow component as a tool, including other **Agent** components and MCP servers through the [**MCP Tools** component](/mcp-tools). For more information about using this component, see [Use Langflow agents](/agents). -## MCP Tools component {#mcp-connection} - -The **MCP Tools** component connects to a Model Context Protocol (MCP) server and exposes the MCP server's functions as tools for Langflow agents to use to respond to input. - -In addition to publicly available MCP servers and your own custom-built MCP servers, you can connect Langflow MCP servers, which allow your agent to use your Langflow flows as tools. -To do this, use the **MCP Tools** component's [SSE mode](/mcp-client#mcp-sse-mode) to connect to your Langflow project's MCP server at the `/api/v1/mcp/sse` endpoint. 
- -For more information, see [Use Langflow as an MCP client](/mcp-client) and [Use Langflow as an MCP server](/mcp-server). - -
-Earlier versions of the MCP Tools component - -* In Langflow version 1.5, the **MCP Connection** component was renamed to the **MCP Tools** component. -* In Langflow version 1.3, the **MCP Tools (stdio)** and **MCP Tools (SSE)** components were removed and replaced by the unified **MCP Connection** component, which was later renamed to **MCP Tools**. - -
- ## See also -* [**Message History** component](/components-helpers#message-history) +* [**MCP Tools** component](/mcp-tools) +* [**Message History** component](/message-history) * [Store chat memory](/memory#store-chat-memory) * [Bundles](/components-bundle-components) * [Legacy LangChain components](/bundles-langchain#legacy-langchain-components) \ No newline at end of file diff --git a/docs/docs/Components/components-bundles.mdx b/docs/docs/Components/components-bundles.mdx index a394ec839ec6..5ff036e9b5bb 100644 --- a/docs/docs/Components/components-bundles.mdx +++ b/docs/docs/Components/components-bundles.mdx @@ -224,7 +224,7 @@ The following parameters are available in **Retrieve** mode: Zep Chat Memory The **Zep Chat Memory** component is a legacy component. -Replace this component with the [**Message History** component](/components-helpers#message-history). +Replace this component with the [**Message History** component](/message-history). This component creates a `ZepChatMessageHistory` instance, enabling storage and retrieval of chat messages using Zep, a memory server for LLMs. diff --git a/docs/docs/Components/components-custom-components.mdx b/docs/docs/Components/components-custom-components.mdx index d58e30547a7b..25ac6ab09204 100644 --- a/docs/docs/Components/components-custom-components.mdx +++ b/docs/docs/Components/components-custom-components.mdx @@ -6,128 +6,300 @@ slug: /components-custom-components import Icon from "@site/src/components/icon"; import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; +import PartialBasicComponentStructure from '../_partial-basic-component-structure.mdx'; -Custom components extend Langflow's functionality through Python classes that inherit from `Component`. This enables integration of new features, data manipulation, external services, and specialized tools. +Create your own custom components to add any functionality you need to Langflow, from API integrations to data processing. 
-In Langflow's node-based environment, each node is a "component" that performs discrete functions. Custom components are Python classes which define: +In Langflow's node-based environment, each node is a "component" that performs discrete functions. +Custom components in Langflow are built upon: -* **Inputs** — Data or parameters your component requires. -* **Outputs** — Data your component provides to downstream nodes. -* **Logic** — How you process inputs to produce outputs. +* The Python class that inherits from `Component`. +* Class-level attributes that identify and describe the component. +* [Input and output lists](#inputs-and-outputs) that determine data flow. +* Methods that define the component's behavior and logic. +* Internal variables for [Error handling and logging](#error-handling-and-logging) -The benefits of creating custom components include unlimited extensibility, reusability, automatic field generation in the visual editor based on inputs, and type-safe connections between nodes. +Use the [Custom component quickstart](#quickstart) to add an example component to Langflow, and then use the reference guide that follows for more advanced component customization. -Create custom components for performing specialized tasks, calling APIs, or adding advanced logic. +## Custom component quickstart {#quickstart} -Custom components in Langflow are built upon: +Create a custom `DataFrameProcessor` component by creating a Python file, saving it in the correct folder, including an `__init__.py` file, and loading it into Langflow. -* The Python class that inherits from `Component`. -* Class-level attributes that identify and describe the component. -* Input and output lists that determine data flow. -* Internal variables for logging and advanced logic. 
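The quickstart steps above — a Python file, the right folder, and an `__init__.py` — can be sketched in a few lines. The `custom_components` base path and `processors` category name are hypothetical, and the component source written here is an untested illustration of the inherit-from-`Component` pattern using import paths from Langflow's custom component docs, not a verified implementation:

```python
from pathlib import Path

# Create a category folder under a custom components directory; the
# folder name ("processors") is a hypothetical category for this sketch.
category = Path("custom_components") / "processors"
category.mkdir(parents=True, exist_ok=True)

# Each category folder needs an __init__.py so it can be imported.
(category / "__init__.py").touch()

# Write a minimal component file. The class body is a placeholder sketch
# only; it is written as text and never imported here.
component_src = '''from langflow.custom import Component
from langflow.io import MessageTextInput, Output
from langflow.schema import Data


class DataFrameProcessor(Component):
    display_name = "DataFrame Processor"
    description = "Example custom component."

    inputs = [MessageTextInput(name="input_value", display_name="Input")]
    outputs = [Output(name="result", display_name="Result", method="process")]

    def process(self) -> Data:
        return Data(data={"text": self.input_value})
'''
(category / "data_frame_processor.py").write_text(component_src)
print(sorted(p.name for p in category.iterdir()))
```

The category folder name is what groups the component in the visual editor, so choose it to match where you want the component to appear.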
+### Create a Python file -## Class-level attributes + -Define these attributes to control a custom component's appearance and behavior: +### Save the custom component {#custom-component-path} -```python -class MyCsvReader(Component): - display_name = "CSV Reader" - description = "Reads CSV files" - icon = "file-text" - name = "CSVReader" - documentation = "http://docs.example.com/csv_reader" +Save the custom component in the Langflow directory where the UI will discover and load it. + +By default, Langflow looks for custom components in the `src/lfx/src/lfx/components` directory. + +When saving components in the default directory, components must be organized in a specific directory structure to be properly loaded and displayed in the visual editor. + +Components must be placed inside category folders, not directly in the base directory. + +The category folder name determines where the component appears in the Langflow