Fix: Corrects 404 error with MCP when a reverse proxy adds a base path #9804
base: main
Conversation
* fix: Avoid namespace collision for Astra
* [autofix.ci] apply automated fixes
* Update Vector Store RAG.json
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…-ai#9569) fix: revert to stable composio version
* fix: Knowledge base component refactor
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* Update styleUtils.ts
* Update ingestion.py
* [autofix.ci] apply automated fixes
* Fix ingestion of df
* [autofix.ci] apply automated fixes
* Update Knowledge Ingestion.json
* Fix one failing test
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes
* Revert composio versions for CI
* Revert "Revert composio versions for CI"

This reverts commit 9bcb694.

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Edwin Jose <[email protected]>
Co-authored-by: Carlos Coelho <[email protected]>
fix .env load on windows script Co-authored-by: Ítalo Johnny <[email protected]>
…ent (langflow-ai#9564) Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…fault (langflow-ai#9550)

* Use custom voice assistant on chat input
* Changed mcp composer to default disabled

---------

Co-authored-by: Carlos Coelho <[email protected]>
…flow-ai#9571) fix: Use newest file component in RAG
* refactor: clean up imports and improve code readability in AIML and FlowSidebar components

  - Organized import statements in aiml.py and index.tsx for better structure.
  - Enhanced formatting in aiml.py for the update_build_config method.
  - Updated nodeIconToDisplayIconMap in styleUtils.ts for consistency in AIML label.
  - Removed unnecessary console log in FlowSidebarComponent for cleaner code.

* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Carlos Coelho <[email protected]>
…low-ai#9572) fix: Allow updates to file component in templates
…ngflow-ai#9575) fix filtering so legacy components aren't shown by default
…langflow-ai#9589)

* Changed Name to Slug, added Close button
* Updated data test id
* Tested closing the sidebar
* fixed test
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
* Added stick to bottom dependency
* Removed scroll direction dependency
* Added scroll to bottom action in voice assistant and chat input
* Made messages occupy full width
* Changed chat view to use StickToBottom instead of previous scroll handling mechanism
* Delete unused chat scroll anchor
* Set initial as instant
* Update session name styling
Deleted google serper api core
…ow-ai#9587)

* Added onDelete prop to sidebarDraggableComponent
* Removed unused props
* Added handleDeleteMcpServer
* Add tests for on delete functionality, fixed linting errors
* Format
* Add test on mcp-server to test adding and deleting mcp server from sidebar
* Adds data test id to delete select item
* Changed data test id to not be mcp related
* Added delete confirmation modal to mcp sidebar group
* Changed test to contain modal
…t, change zoom out logic in canvasControls (langflow-ai#9595)

* Fix zoom out to 0.6 instead of 1.0
* remove min zoom on canvascontrolsdropdown, since it's enforced by reactflow
* Changed min zoom to 0.25 and max zoom to 2.0
* Added tests for zoom in and zoom out in canvas controls dropdown
…ow-ai#9594)

* Changed node icon to not have icon color
* Added portion of test that checks if color is right for mcp component
* Refactor nodeIcon
* removed lucideIcon check for performance
* Changed the test to evaluate color from computed style
…r mcp-projects logic (langflow-ai#9599)

* Add new available field to installed mcps
* Disable auto install field when program not present
* Refactor logic and get back the Available field for the installed
* Added tooltip
* Fixed linting
…ai#9617)

* fix: Properly allow no OCR engine
* [autofix.ci] apply automated fixes
* Set default to easyocr
* Update docling_inline.py
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…-ai#9644) Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…ray on File component (langflow-ai#9639)

* Changed file type order
* Changed starter projects that had the file component
* order tooltip types alphabetically
* changed order of text_file_types
* Removed duplicate types
* Changed starter projects that used past types
* changed test
* Fixed data test id
* Changed test to expect correct types
…9685)

* refactor: improve code structure and add NodeDrawer component

  - Refactored import statements for better organization in agent.py and dropdownComponent.
  - Enhanced the AgentComponent's description and memory_inputs formatting for clarity.
  - Introduced a new NodeDrawer component for improved UI handling in the dropdown.
  - Updated Dropdown component to integrate NodeDrawer functionality, allowing for side panel interactions.

* refactor: simplify NodeDrawer component and enhance Dropdown integration

  - Removed unnecessary props from NodeDrawer, streamlining its interface.
  - Updated the Dropdown component to improve the integration of NodeDrawer, ensuring better handling of side panel interactions.
  - Refactored the NodeDrawer's structure for improved readability and maintainability.

* fix
* refactor: enhance Dropdown and input components with externalOptions support

  - Updated Dropdown and related components to incorporate externalOptions for improved flexibility.
  - Refactored input classes to maintain consistent formatting and readability.
  - Removed deprecated dialogInputs functionality in favor of the new externalOptions structure.

* fix: reorganize imports after cherry-pick resolution
* refactor: enhance Dropdown component with loading state and source options

  - Introduced a loading state to the Dropdown component to indicate when a response is awaited.
  - Updated the logic to utilize sourceOptions instead of dialogInputs for better clarity and maintainability.
  - Refactored the rendering of options and associated UI elements to improve user experience.

* refactor: improve Dropdown component structure and styling

  - Cleaned up import statements for better organization.
  - Enhanced the loading state display and adjusted the layout for better user experience.
  - Updated styling for CommandItem components to ensure consistent padding and font weight.
  - Refactored option rendering logic for improved clarity and maintainability.

* refactor: reorganize imports and adjust Dropdown component behavior

  - Moved import statements for better clarity and organization.
  - Commented out the setOpen function call to modify Dropdown behavior when dialog inputs are present.

* refactor: enhance Dropdown component functionality and logging

  - Removed unnecessary console log for source options.
  - Introduced handleSourceOptions function to streamline value handling and state management.
  - Updated onSelect logic to utilize handleSourceOptions for improved clarity and functionality.

* refactor: enhance Dropdown component with flow store integration

  - Added useFlowStore to manage node state within the Dropdown component.
  - Introduced a new handleSourceOptions function to streamline value handling and API interaction.
  - Updated onSelect logic to ensure proper value handling when selecting options.

* refactor: Update agent component to support custom model connections

  - Changed the agent component's dropdown input to allow selection of "connect_other_models" for custom model integration.
  - Enhanced the dropdown options and metadata for better user guidance.
  - Updated the build configuration to reflect these changes and ensure proper input handling.

* refactor: Reorganize imports and enhance dropdown component logic

  - Moved and re-imported necessary dependencies for clarity.
  - Updated dropdown rendering logic to improve handling of selected values and loading states.
  - Ensured compatibility with agent component requirements by refining option checks.

* small fix and revert
* refactor: Clean up imports and improve dropdown component styling

  - Removed duplicate imports for PopoverAnchor and Fuse.
  - Simplified class names in the dropdown component for better readability.
  - Adjusted layout properties for improved visual consistency.

* refactor: Enhance dropdown component functionality and clean up imports

  - Reorganized imports for better clarity and removed duplicates.
  - Implemented a new feature to handle "connect_other_models" option, improving the dropdown's interaction with flow store and types store.
  - Added logic to manage input types and display compatible handles, enhancing user experience.
  - Updated utility functions for better integration with the dropdown component.

* style: format options_metadata in agent component
* refactor: Update import statements in starter project JSON files and adjust proxy settings in frontend configuration

  - Refactored import statements in multiple starter project JSON files to improve readability by breaking long lines.
  - Changed proxy settings from "http://localhost:7860" to "http://127.0.0.1:7860" in frontend configuration files for consistency and to avoid potential issues with localhost resolution.

* [autofix.ci] apply automated fixes
* revert and fix
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* fixed dropdown
* [autofix.ci] apply automated fixes
* kb clean up
* [autofix.ci] apply automated fixes (attempt 2/3)
* update to templates with display name change

---------

Co-authored-by: Edwin Jose <[email protected]>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
* encrypt oauth auth settings at rest
* [autofix.ci] apply automated fixes
* Fix rebase changes and add env to env server config
* Correctly unmask secretstr before encryption
* update mcp-composer args
* [autofix.ci] apply automated fixes
* ruff
* ruff
* ruff
* [autofix.ci] apply automated fixes
* ruff
* catch invalidtoken error
* ruff
* [autofix.ci] apply automated fixes
* ruff
* [autofix.ci] apply automated fixes
* ruff
* ruff
* [autofix.ci] apply automated fixes
* ruff
* [autofix.ci] apply automated fixes
* fix test
* Add initial mcp composer service and startup
* remove token url
* Register server on project creation
* WARN: fall back to superuser on no auth params, to allow mcp-composer to connect. also fixes race condition in server creation
* update sse url args
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* Add langflow api keys to the server configs
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* add port searching
* [autofix.ci] apply automated fixes
* Fix for dead servers - use devnull on subprocess to avoid pipe from filling up
* uvlock
* [autofix.ci] apply automated fixes
* Update composer startup behavior re: auth settings
* [autofix.ci] apply automated fixes
* fix some auth logic, add dynamic fetch of new url
* Clean up sse-url parameters
* [autofix.ci] apply automated fixes
* Only call composer url when composer is enabled
* [autofix.ci] apply automated fixes
* improve shutdown
* starter projects update
* [autofix.ci] apply automated fixes
* update logging git push
* revert hack to auth mcp composer
* [autofix.ci] apply automated fixes
* Fix 500 on composer-url query
* [autofix.ci] apply automated fixes
* Update feature flag; update api key addition to aut-install
* [autofix.ci] apply automated fixes
* Fix composer url and re-add auth
* Changed needs_api_key logic
* Refactor use-get-composer-url
* remove python fallback for now, then pipe stderr to pipe
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* Changed api key logic to allow connection if not api key and auto login is off
* fix oauth addition to cmd
* restart server when auth values change
* Restart server on oauth values changes
* [autofix.ci] apply automated fixes
* Changed project port to be the same as OAuth port
* Changed endpoint to provide port_available
* add is_port_available prop
* Added port_available to request
* Edit mutation to not have linting errors
* Added port not available state to authentication
* [autofix.ci] apply automated fixes
* Added port and host to get composer url
* Invalidate project composer url queries
* Changed to display port and host that is not running
* Cleanup old port finding and some mypy fixes
* Add print, remove unused env var
* Use mcp-composer directly in client and a lot of fixes
* changed starter projects
* refactor mcp_projects to use always IP generated for WSL
* changed to check args -4 too on installed servers
* changed to just check if sse url is in args
* added member servers in gitignore
* add check for ff
* Handle secret request response cycle securely and add better logging
* Use async logger
* Add decorator to check if composer is enabled in settings
* more logging changes
* Much better handling of existing oauth servers when the flag is disabled on restart
* Reset oauth projects to apikey or none when composer flag is disabled
* fix url for api key auth
* Fix auth check; set project auth to api key when auto login disabled
* Ruff, comments, cleanup
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* Consolidate the auth handling since its used in two endpoints
* [autofix.ci] apply automated fixes
* Ruff
* [autofix.ci] apply automated fixes
* last ruff
* Update FE env var naming and dont unnecessarily decrypt auth settings at times
* update feature flag usage - remove mcp composer
* [autofix.ci] apply automated fixes
* Update timeout methods to have more reliable startup
* more feature flag changes
* Attempt to extract helpful user messages
* [autofix.ci] apply automated fixes
* Added loading on mcp server tab auth
* Changed to load on start too
* cleanup mcp composer on project deletion
* [autofix.ci] apply automated fixes
* remove nested retry mech
* Ruff
* lint
* Fix unit tests
* [autofix.ci] apply automated fixes
* ruff
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Edwin Jose <[email protected]>
Co-authored-by: Lucas Oliveira <[email protected]>
Co-authored-by: Mike Fortman <[email protected]>
Co-authored-by: Carlos Coelho <[email protected]>
Co-authored-by: Carlos Coelho <[email protected]> Co-authored-by: Deon Sanchez <[email protected]>
* Use separate conditional router flag to check if-else branch execution
* clean comments
* [autofix.ci] apply automated fixes
* Ruff

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
* Disable keys when isLocked
* Disabled adding and deleting nodes when flow is locked
…ai#9542)

* Refactor superuser credential handling for security
* [autofix.ci] apply automated fixes
* refactor: enhance superuser credential handling in setup process (langflow-ai#9574)
* [autofix.ci] apply automated fixes
* refactor: streamline superuser flow tests and enhance error handling (langflow-ai#9577)
* [autofix.ci] apply automated fixes
* None superuser is not allowed, hence for a valid string resetting it to ""
* use secret str for password everywhere
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Jordan Frazier <[email protected]>
fix: update version to 1.6.0 in package.json and package-lock.json
* removed left auto from log canvas controls
* made initial position be fetched from event for notes
* added relative class and put shadow box outside of the div that contains reactflow

---------

Co-authored-by: Carlos Coelho <[email protected]>
* Add stop propagation to chat input buttons
* Made entire area focus chat when clicked
fix: Restore file component description
…i#9750)

* 📝 (projects.py): add logic to separate flows and components from a single query result for better organization and readability
* 🐛 (projects.py): fix logic to correctly exclude project flows from the list of excluded flows
* ✨ (test_projects.py): add tests to ensure project renaming preserves associated flows and components
* 📝 (projects.py): remove unnecessary comment to improve code readability and maintainability
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…lization errors (langflow-ai#9596) fix: Filter out None values from headers in URLComponent
Added enable_knowledge_bases feature flag everywhere it's been used
…or 1.6 (langflow-ai#9713)

* sidebar fixes
* [autofix.ci] apply automated fixes
* refactor: update FlowMenu and CanvasControlsDropdown styles, enhance MemoizedCanvasControls with flow lock status
* feat: add 'Sticky Notes' functionality to sidebar and enhance note handling

  - Introduced a new 'add_note' section in the sidebar for adding sticky notes.
  - Implemented event listeners to manage the add-note flow, allowing for better integration with the sidebar.
  - Updated styles and structure in various components to accommodate the new feature.
  - Refactored existing components for improved organization and readability.

* fix: adjust button height in FlowSidebarComponent for improved UI consistency
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…#9753)

* fix(docling): improve error handling for missing OCR dependencies

  - Add DoclingDependencyError custom exception
  - Detect specific OCR engine installation issues
  - Provide clear installation instructions to users
  - Suggest OCR disable as alternative solution
  - Fail fast when dependencies are missing

  Fixes issue where users received unclear error messages when OCR engines like ocrmac, easyocr, tesserocr, or rapidocr were not properly installed.

* fix: prevent missing clean_data attribute error
* chore: update starter_projects
* [autofix.ci] apply automated fixes
* refactor(docling): update dependency error messages to use uv and suggest complete install

  Address code review feedback by using 'uv pip install' and offering langflow[docling] as alternative

* refactor(docling): simplify worker error handling per code review

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
* Added span to buttons to not remove casing
* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…-ai#9730) * fix * [autofix.ci] apply automated fixes --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
…gflow-ai#9538)

* docs: update support documentation to reflect rebranding to IBM Elite Support for Langflow
* remove-info-tab
* Apply suggestions from code review

---------

Co-authored-by: April I. Murphy <[email protected]>
Important

Review skipped: auto incremental reviews are disabled on this repository. Please check the settings in the CodeRabbit UI.

Walkthrough

This PR introduces MCP Composer lifecycle/orchestration and auth handling across project APIs, renames/adjusts env flags and Windows scripts for env-file usage, updates docs/branding, bumps version to 1.6.0, revises model/provider constants, refactors knowledge base components, adjusts routing/graph build behavior, improves Docling dependency handling, removes deprecated components/tools, and streamlines chat/agent components.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant API as Projects API
    participant Auth as handle_auth_settings_update
    participant DB as DB Session
    participant Comp as MCPComposerService
    User->>API: PATCH /projects/{id} (auth_settings changed)
    API->>DB: Load existing project (flows, components)
    API->>Auth: handle_auth_settings_update(existing_project, new_auth_settings)
    Auth-->>API: {should_start_composer, should_stop_composer, should_handle_composer}
    alt should_start_composer
        API->>API: register_project_with_composer(project) (background)
    else should_stop_composer
        API->>Comp: stop(project_id)
    end
    API-->>User: 200 Updated project
```

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant API as MCP Projects API
    participant Sec as verify_project_auth[_conditional]
    participant Comp as MCP Composer
    participant Cfg as get_config_path/URL
    Client->>API: GET /{project_id}/composer-url
    API->>Sec: verify_project_access/auth
    Sec-->>API: User/Project OK
    API->>API: should_use_mcp_composer(project)
    alt Composer enabled & OAuth
        API->>Comp: get_or_start_mcp_composer(auth_config, project)
        API->>Cfg: get_composer_sse_url(project)
        Cfg-->>API: sse_url
        API-->>Client: 200 {enabled:true, sse_url}
    else
        API-->>Client: 200 {enabled:false}
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120–180 minutes

Possibly related PRs
Pre-merge checks (2 passed, 1 warning)

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✨ Finishing Touches
🧪 Generate unit tests
Actionable comments posted: 25
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (19)
docs/docs/Integrations/Notion/Meeting_Notes_Agent.json (2)
1081-1104: Add “gpt-4.1” to model options
Default value "gpt-4.1" isn't listed in `options`; include it to prevent UI/validation errors (and optionally add "gpt-4.1-mini"/"gpt-4.1-nano" if supported).
1866-1889: Add missing "gpt-4.1" to the options for the second OpenAI node

```diff
 "options": [
+  "gpt-4.1",
   "gpt-4o-mini",
   "gpt-4o",
   "gpt-4-turbo",
   "gpt-4-turbo-preview",
   "gpt-4",
   "gpt-3.5-turbo",
   "gpt-3.5-turbo-0125"
 ],
```
src/backend/base/langflow/components/docling/docling_remote.py (1)
171-189: Result ordering bug: processed_data can misalign with file_list

You append Nones during the first loop but push actual results later without placing them at original indices. rollup_data likely expects index-aligned lists.
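As a standalone illustration of the index-aligned pattern (the `square_or_none` helper is hypothetical, not Langflow code), results are written back by position rather than appended, so skipped inputs keep their slots:

```python
from concurrent.futures import ThreadPoolExecutor


def square_or_none(values):
    """Collect results at their original indices, skipping None inputs."""
    results = [None] * len(values)  # pre-size so indices stay aligned
    with ThreadPoolExecutor() as executor:
        futures = [
            (i, executor.submit(lambda v=v: v * v))
            for i, v in enumerate(values)
            if v is not None
        ]
        for i, future in futures:
            results[i] = future.result()  # write back by index, not append
    return results


print(square_or_none([2, None, 3]))  # [4, None, 9]
```

Appending inside the completion loop would instead produce `[4, 9]`, shifting every result after the first skipped input.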
Apply:
```diff
-        processed_data: list[Data | None] = []
+        processed_data: list[Data | None] = [None] * len(file_list)
@@
-        futures: list[tuple[int, Future]] = []
-        for i, file in enumerate(file_list):
-            if file.path is None:
-                processed_data.append(None)
-                continue
-
-            futures.append((i, executor.submit(_convert_document, client, file.path, docling_options)))
+        futures: list[tuple[int, Future]] = []
+        for i, file in enumerate(file_list):
+            if not file.path:
+                continue  # already None at index i
+            futures.append((i, executor.submit(_convert_document, client, file.path, docling_options)))
@@
-        for _index, future in futures:
+        for _index, future in futures:
             try:
                 result_data = future.result()
-                processed_data.append(result_data)
+                processed_data[_index] = result_data
```
src/backend/base/langflow/base/data/docling_utils.py (1)
123-131: Surface a consistent error_type for missing Docling

The Inline component currently checks error_type to raise ImportError. Add error_type for ModuleNotFoundError.
Apply:
```diff
-        except ModuleNotFoundError:
+        except ModuleNotFoundError:
             msg = (
                 "Docling is an optional dependency of Langflow. "
                 "Install with `uv pip install 'langflow[docling]'` "
                 "or refer to the documentation"
             )
-            queue.put({"error": msg})
+            queue.put({"error": msg, "error_type": "import_error", "original_exception": "ModuleNotFoundError"})
             return
```
src/backend/base/langflow/components/input_output/chat_output.py (1)
21-29: The `SseServerTransport` instantiations in both `mcp.py` and `mcp_projects.py` still use the old paths (`"/api/v1/mcp/"` and `"./"` respectively), and no updates for the reverse-proxy basePath fix are present. You'll need to:

- Update `src/backend/base/langflow/api/v1/mcp.py:66` to use the new SSE endpoint, e.g. `SseServerTransport("/api/v1/mcp/project/{project_id}/sse")`.
- Update `src/backend/base/langflow/api/v1/mcp_projects.py:189` to initialize transports with the correct path, not `"./"`.
- Add or update unit tests to cover SSE URL resolution and ensure the fix isn't regressed.
src/backend/base/langflow/components/milvus/milvus.py (1)
43-46: Typo in user-facing label: "Consistencey Level" → "Consistency Level"

Visible UI text; fix to avoid confusing users.

```diff
-            display_name="Consistencey Level",
+            display_name="Consistency Level",
```
src/backend/base/langflow/components/olivya/olivya.py (3)
18-25: Do not use a plain text input for API keys

The API key is currently a MessageTextInput, which risks accidental exposure in the UI and traces. Switch to SecretStrInput.

Apply this diff within the inputs block:

```diff
-        MessageTextInput(
+        SecretStrInput(
             name="api_key",
             display_name="Olivya API Key",
             info="Your API key for authentication",
             value="",
             required=True,
-        ),
+            password=True,
+        ),
```

Also add SecretStrInput to the imports:

```diff
-from langflow.io import MessageTextInput, Output
+from langflow.io import MessageTextInput, Output, SecretStrInput
```
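For illustration, the masking pattern a secret input type typically relies on (mirroring pydantic's `SecretStr`; the `Secret` class below is a hypothetical sketch, not the Langflow implementation) keeps the raw value out of reprs and logs:

```python
class Secret:
    """Wrap a sensitive string so accidental repr/log output is masked."""

    def __init__(self, value: str):
        self._value = value

    def get_secret_value(self) -> str:
        # Callers must opt in explicitly to read the raw value
        return self._value

    def __repr__(self) -> str:
        return "Secret('**********')"

    __str__ = __repr__


key = Secret("sk-123456")
print(key)                     # Secret('**********')
print(key.get_secret_value())  # sk-123456
```

The point is that f-strings, `%s` logging, and tracebacks all go through `__str__`/`__repr__`, so the plaintext only escapes via the explicit getter.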
86-101: Sanitize logs and use http_err.response safely
- Current logs include full payload and full response objects, which can leak PII (phone numbers, messages).
- Prefer http_err.response for status/body to avoid relying on an outer-scope variable.
Apply this diff to reduce leakage and harden error handling:
```diff
-            await logger.ainfo("Sending POST request with payload: %s", payload)
+            await logger.ainfo("Sending Olivya call request")
@@
-                response_data = response.json()
-                await logger.ainfo("Request successful: %s", response_data)
+                response_data = response.json()
+                await logger.ainfo("Request successful")
@@
-        except httpx.HTTPStatusError as http_err:
-            await logger.aexception("HTTP error occurred")
-            response_data = {"error": f"HTTP error occurred: {http_err}", "response_text": response.text}
+        except httpx.HTTPStatusError as http_err:
+            await logger.aexception("HTTP error occurred")
+            response_text = http_err.response.text if getattr(http_err, "response", None) else None
+            response_data = {"error": f"HTTP error occurred: {http_err}", "response_text": response_text}
@@
-        except json.JSONDecodeError as json_err:
+        except json.JSONDecodeError as json_err:
             await logger.aexception("Response parsing failed")
-            response_data = {"error": f"Response parsing failed: {json_err}", "raw_response": response.text}
+            raw_text = response.text if "response" in locals() and response is not None else None
+            response_data = {"error": f"Response parsing failed: {json_err}", "raw_response": raw_text}
```

Also applies to: 102-114
11-17: Replace absolute SSE base-path in mcp.py
In `src/backend/base/langflow/api/v1/mcp.py` (line 66), change `sse = SseServerTransport("/api/v1/mcp/")` to `sse = SseServerTransport("./")` so the transport uses the relative base path, as in mcp_projects.
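Why the relative path matters behind a reverse proxy can be seen with plain RFC 3986 reference resolution (the `example.com` host and `/langflow` prefix below are illustrative only):

```python
from urllib.parse import urljoin

# SSE stream URL as seen by a client behind a proxy that adds /langflow
base = "https://example.com/langflow/api/v1/mcp/project/123/sse"

# Absolute message path: resolution discards the proxy's base path -> 404
print(urljoin(base, "/api/v1/mcp/"))
# https://example.com/api/v1/mcp/

# Relative path: resolution stays under the proxied prefix
print(urljoin(base, "./"))
# https://example.com/langflow/api/v1/mcp/project/123/
```

Because the client resolves the advertised message endpoint against the URL it actually connected to, a relative path survives whatever prefix the proxy adds, while an absolute one silently escapes it.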
src/backend/base/langflow/components/firecrawl/firecrawl_extract_api.py (1)
111-115: Possible AttributeError: optional flags referenced but inputs are commented out

`ignore_sitemap`, `include_subdomains`, and `show_sources` are used in params but the inputs are commented out, so these attributes may not exist.
Apply:
```diff
-            "ignoreSitemap": self.ignore_sitemap,
-            "includeSubdomains": self.include_subdomains,
-            "showSources": self.show_sources,
+            "ignoreSitemap": getattr(self, "ignore_sitemap", False),
+            "includeSubdomains": getattr(self, "include_subdomains", False),
+            "showSources": getattr(self, "show_sources", False),
```
src/backend/base/langflow/components/data/csv_to_data.py (1)
63-69: Harden path handling for csv_path to avoid unsafe filesystem access.

Use `resolve_path` for consistency with `csv_file` and to mitigate path traversal/unsafe reads.

```diff
-        elif self.csv_path:
-            file_path = Path(self.csv_path)
+        elif self.csv_path:
+            resolved_path = self.resolve_path(self.csv_path)
+            file_path = Path(resolved_path)
             if file_path.suffix.lower() != ".csv":
                 self.status = "The provided file must be a CSV file."
             else:
                 with file_path.open(newline="", encoding="utf-8") as csvfile:
                     csv_data = csvfile.read()
```
src/backend/base/langflow/components/processing/filter_data_values.py (1)
11-13: Fix typo in user-facing description

"comparision" → "comparison".

```diff
-        " and comparison operator. Check advanced options to select match comparision."
+        " and comparison operator. Check advanced options to select match comparison."
```
src/backend/base/langflow/components/google/google_generative_ai_embeddings.py (1)
44-47: Constructor calls wrong super — base init not invoked

`super(GoogleGenerativeAIEmbeddings, self).__init__` skips the immediate parent and likely hits `object.__init__`, breaking config. Use zero-arg `super()`.

Apply:

```diff
 class HotaGoogleGenerativeAIEmbeddings(GoogleGenerativeAIEmbeddings):
     def __init__(self, *args, **kwargs) -> None:
-        super(GoogleGenerativeAIEmbeddings, self).__init__(*args, **kwargs)
+        super().__init__(*args, **kwargs)
```
src/backend/base/langflow/components/logic/flow_tool.py (1)
46-50: Bug: missing await yields coroutine in UI options.

get_flow_names is async; update_build_config assigns a coroutine to options.
Apply:
```diff
     async def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None):
         if field_name == "flow_name":
-            build_config["flow_name"]["options"] = self.get_flow_names()
+            build_config["flow_name"]["options"] = await self.get_flow_names()
         return build_config
```
src/backend/base/langflow/components/redis/redis_chat.py (1)
34-43: Bug: URL uses unescaped credentials and ignores quoted password.

You URL-encode password but then use self.password in the URL, and username isn't encoded. Also include auth only when present.
Apply:
```diff
     def build_message_history(self) -> Memory:
         kwargs = {}
         password: str | None = self.password
         if self.key_prefix:
             kwargs["key_prefix"] = self.key_prefix
-        if password:
-            password = parse.quote_plus(password)
-
-        url = f"redis://{self.username}:{self.password}@{self.host}:{self.port}/{self.database}"
+        user = self.username or ""
+        user_enc = parse.quote_plus(user) if user else ""
+        pwd_enc = parse.quote_plus(password) if password else ""
+        auth = f"{user_enc}:{pwd_enc}@" if (user_enc or pwd_enc) else ""
+        url = f"redis://{auth}{self.host}:{self.port}/{self.database}"
         return RedisChatMessageHistory(session_id=self.session_id, url=url, **kwargs)
```
src/backend/base/langflow/components/tools/calculator.py (1)
45-63: Fix: support ast.Constant for numbers (Python 3.8+).

Leaf numeric nodes are ast.Constant; current logic raises Unsupported operation.
Apply:
```diff
     def _eval_expr(self, node):
-        if isinstance(node, ast.Num):
-            return node.n
+        if isinstance(node, ast.Num):  # < Py3.8
+            return node.n
+        if isinstance(node, ast.Constant):  # Py3.8+
+            if isinstance(node.value, (int, float)):
+                return node.value
+            msg = f"Unsupported constant type: {type(node.value).__name__}"
+            raise TypeError(msg)
```

Optionally, consider bounding exponent sizes to avoid pathological inputs.
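A minimal self-contained evaluator showing the `ast.Constant` handling in context (the `safe_eval` helper is a hypothetical sketch, not the component's actual code):

```python
import ast
import operator

# Whitelisted operators; anything else is rejected
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.USub: operator.neg,
}


def safe_eval(expr: str) -> float:
    """Evaluate a basic arithmetic expression without calling eval()."""

    def _eval(node):
        # Python 3.8+ parses numeric literals as ast.Constant, not ast.Num
        if isinstance(node, ast.Constant):
            if isinstance(node.value, (int, float)):
                return node.value
            raise TypeError(f"Unsupported constant: {node.value!r}")
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise TypeError(f"Unsupported operation: {type(node).__name__}")

    return _eval(ast.parse(expr, mode="eval").body)


print(safe_eval("2 + 3 * 4"))     # 14
print(safe_eval("-(1 + 2) / 3"))  # -1.0
```

Without the `ast.Constant` branch, every literal in the tree falls through to the final `raise` on Python 3.8+, which is exactly the "Unsupported operation" failure the comment describes.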
src/backend/base/langflow/components/datastax/astra_vectorize.py (1)
75-81: Duplicate input name 'authentication' will collide and break value binding. Two `DictInput` entries share the same name, leading to unpredictable template behavior and lost values.
Apply:
```diff
 DictInput(
-    name="authentication",
-    display_name="Authentication parameters",
-    is_list=True,
-    advanced=True,
-),
+    name="authentication",
+    display_name="Authentication parameters",
+    is_list=True,
+    advanced=True,
+),
@@
-DictInput(
-    name="authentication",
-    display_name="Authentication Parameters",
-    is_list=True,
-    advanced=True,
-),
+# Removed duplicate 'authentication' input
```

Also applies to: 90-96
src/backend/base/langflow/components/data/file.py (1)
292-300: Map the OCR engine UI value to None correctly before the subprocess call. Ensures "None" disables OCR cleanly and avoids unnecessary factory imports.
```diff
+# Normalize OCR engine value from UI ("None"/"", None => None)
+_ocr = None
+if self.ocr_engine is not None:
+    _s = str(self.ocr_engine).strip().lower()
+    if _s not in {"", "none", "null"}:
+        _ocr = str(self.ocr_engine)
 args: dict[str, Any] = {
     "file_path": file_path,
     "markdown": bool(self.markdown),
     "image_mode": str(self.IMAGE_MODE),
     "md_image_placeholder": str(self.md_image_placeholder),
     "md_page_break_placeholder": str(self.md_page_break_placeholder),
     "pipeline": str(self.pipeline),
-    "ocr_engine": str(self.ocr_engine) if self.ocr_engine and self.ocr_engine is not None else None,
+    "ocr_engine": _ocr,
 }
```

src/backend/base/langflow/api/v1/mcp_projects.py (1)
973-998: Server removal by SSE URL: fix the matching logic (composer). See the earlier refactor suggestion to match on containment, not the last arg.
```diff
-if args and args[-1] == sse_url:
+if args and sse_url in args:
     servers_to_remove.append(server_name)
```
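The difference between last-arg equality and containment can be seen with a toy config (server names, commands, and URLs below are hypothetical):

```python
def find_servers_to_remove(servers: dict[str, list[str]], sse_url: str) -> list[str]:
    """Match on containment so extra trailing args don't hide the URL."""
    return [name for name, args in servers.items() if args and sse_url in args]


servers = {
    "proj-a": ["mcp-proxy", "http://host/api/v1/mcp/project/a/sse"],
    "proj-b": ["mcp-proxy", "http://host/api/v1/mcp/project/b/sse", "--debug"],
}

# Last-arg equality would miss proj-b because "--debug" comes after the URL.
print(find_servers_to_remove(servers, "http://host/api/v1/mcp/project/b/sse"))  # ['proj-b']
```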
🧹 Nitpick comments (55)
src/backend/base/langflow/components/huggingface/huggingface_inference_api.py (3)
25-28: Use brand-consistent spacing: "Hugging Face API Key". The label currently reads "HuggingFace API Key". Prefer "Hugging Face" (with a space) to match the brand and other display names in this file.
```diff
-display_name="HuggingFace API Key",
+display_name="Hugging Face API Key",
```
88-98: Heuristic misclassifies non-HF remote endpoints as "local", skipping the API key.

The `is_local_url` check treats any URL not containing "huggingface.co" as local, allowing no API key for arbitrary remote endpoints. This can lead to unexpected auth failures or unintended unauthenticated calls.

```diff
-is_local_url = (
-    api_url.startswith(("http://localhost", "http://127.0.0.1", "http://0.0.0.0", "http://docker"))
-    or "huggingface.co" not in api_url.lower()
-)
+is_local_url = api_url.startswith(
+    ("http://localhost", "http://127.0.0.1", "http://0.0.0.0")
+)
```

Optionally, add an explicit allowlist/env flag for trusted local gateways instead of string-contains checks.
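A sketch of the allowlist approach suggested above, assuming hostname-based matching via `urllib.parse` (the helper name and host set are hypothetical, not the component's code):

```python
from urllib.parse import urlparse

# Hostnames treated as local; extend via an explicit allowlist rather than substring checks.
_LOCAL_HOSTS = {"localhost", "127.0.0.1", "0.0.0.0", "::1"}


def is_local_url(api_url: str) -> bool:
    """Parse the URL and compare its hostname against the allowlist."""
    host = urlparse(api_url).hostname or ""
    return host.lower() in _LOCAL_HOSTS


print(is_local_url("http://localhost:8080/v1"))              # True
print(is_local_url("https://my-gateway.example.com"))        # False
print(is_local_url("https://api-inference.huggingface.co"))  # False
```

Parsing the hostname avoids false positives such as `https://evil.example/?q=localhost`, which a substring check would misclassify.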
80-84: Type hint mismatch for api_key.
`create_huggingface_embeddings` annotates `api_key: SecretStr`, but callers pass a plain `str` when an API key is provided. Harmonize the type to `str` (or consistently use `SecretStr`) to avoid confusion.

```diff
-def create_huggingface_embeddings(
-    self, api_key: SecretStr, api_url: str, model_name: str
+def create_huggingface_embeddings(
+    self, api_key: str, api_url: str, model_name: str
 ) -> HuggingFaceInferenceAPIEmbeddings:
```

src/backend/base/langflow/components/huggingface/huggingface.py (1)
107-109: Fix spacing and wording: "Hugging Face Hub API Token". The current label "HuggingFace HubAPI Token" has a missing space and the awkward "HubAPI". Recommend the clearer, brand-consistent label below.
```diff
-SecretStrInput(
-    name="huggingfacehub_api_token", display_name="HuggingFace HubAPI Token", password=True, required=True
-),
+SecretStrInput(
+    name="huggingfacehub_api_token",
+    display_name="Hugging Face Hub API Token",
+    password=True,
+    required=True,
+),
```

If you want to align with HF docs even more closely, consider "Hugging Face Access Token".
src/backend/base/langflow/components/agentql/agentql_api.py (3)
126-147: Handle network errors/timeouts from httpx. Only `HTTPStatusError` is caught. Connection errors, DNS failures, and timeouts will surface as uncaught `RequestError`, yielding noisy traces to users.
Apply:
```diff
 try:
     response = httpx.post(endpoint, headers=headers, json=payload, timeout=self.timeout)
     response.raise_for_status()
-    json = response.json()
-    data = Data(result=json["data"], metadata=json["metadata"])
+    resp_json = response.json()
+    try:
+        data = Data(result=resp_json["data"], metadata=resp_json.get("metadata", {}))
+    except (KeyError, TypeError):
+        self.status = f"Malformed response from AgentQL: {resp_json!r}"
+        logger.error(self.status)
+        raise ValueError(self.status)
 except httpx.HTTPStatusError as e:
     response = e.response
     if response.status_code == httpx.codes.UNAUTHORIZED:
         self.status = "Please, provide a valid API Key. You can create one at https://dev.agentql.com."
     else:
         try:
             error_json = response.json()
             logger.error(
                 f"Failure response: '{response.status_code} {response.reason_phrase}' with body: {error_json}"
             )
-            msg = error_json["error_info"] if "error_info" in error_json else error_json["detail"]
+            msg = error_json.get("error_info") or error_json.get("detail") or str(error_json)
         except (ValueError, TypeError):
             msg = f"HTTP {e}."
         self.status = msg
     raise ValueError(self.status) from e
+except httpx.RequestError as e:
+    self.status = f"Network error: {e.__class__.__name__}: {e}"
+    logger.error(self.status)
+    raise ValueError(self.status) from e
```
54-59: Constrain the timeout input to sane bounds. Prevent extreme values via `RangeSpec` to avoid very long hangs or zero/negative timeouts.
```diff
 IntInput(
     name="timeout",
     display_name="Timeout",
     info="Seconds to wait for a request.",
     value=900,
-    advanced=True,
+    range_spec=RangeSpec(min=1, max=1800, step_type="int"),
+    advanced=True,
 ),
```
130-132: Nit: avoid shadowing the "json" identifier. Use a more descriptive name for the parsed body.
```diff
-json = response.json()
-data = Data(result=json["data"], metadata=json["metadata"])
+resp_json = response.json()
+data = Data(result=resp_json["data"], metadata=resp_json.get("metadata", {}))
```

src/backend/base/langflow/components/aiml/aiml.py (3)
47-49: Docstring/default mismatch: base URL text vs. code default. The info text says the default is https://api.aimlapi.com, but the code defaults to https://api.aimlapi.com/v2. Align them.
```diff
 StrInput(
     name="aiml_api_base",
     display_name="AI/ML API Base",
     advanced=True,
-    info="The base URL of the API. Defaults to https://api.aimlapi.com . "
-    "You can change this to use other APIs like JinaChat, LocalAI and Prem.",
+    info=(
+        "The base URL of the API. Defaults to https://api.aimlapi.com/v2. "
+        "Change this to target compatible providers (e.g., JinaChat, LocalAI, Prem)."
+    ),
 ),
@@
-aiml_api_base = self.aiml_api_base or "https://api.aimlapi.com/v2"
+aiml_api_base = self.aiml_api_base or "https://api.aimlapi.com/v2"
```

Also applies to: 80-81
84-88: Make the "o1" model check more precise. A substring match can misfire; prefer a boundary-aware check.
```diff
-if "o1" in model_name:
+# Match 'o1' as a token or hyphen-delimited segment (e.g., 'o1', 'o1-mini', 'o1-preview')
+import re
+if re.search(r'(^|[-_:])o1($|[-_:])', model_name):
     temperature = 1
```
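A quick check of the boundary-aware pattern (hypothetical helper wrapping the same regex as the diff above):

```python
import re

# 'o1' must be a whole token or a segment delimited by -, _, or :.
_O1_PATTERN = re.compile(r"(^|[-_:])o1($|[-_:])")


def is_o1_model(model_name: str) -> bool:
    """True only when 'o1' appears as a standalone model-name segment."""
    return bool(_O1_PATTERN.search(model_name))


print(is_o1_model("o1"))         # True
print(is_o1_model("o1-mini"))    # True
print(is_o1_model("gpt-4o1x"))   # False: 'o1' is embedded in a larger token
```

A bare substring check (`"o1" in model_name`) would return True for the last case, which is the misfire the comment warns about.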
98-115: Harden exception handling for OpenAI v1+ and annotate the return type. `openai.error.*` is legacy; the new SDK exposes exceptions at the top level. Also, the return type is Optional.
```diff
-def _get_exception_message(self, e: Exception):
+def _get_exception_message(self, e: Exception) -> str | None:
     """Get a message from an OpenAI exception.
@@
-    try:
-        from openai.error import BadRequestError
-    except ImportError:
-        return None
-    if isinstance(e, BadRequestError):
-        message = e.json_body.get("error", {}).get("message", "")
-        if message:
-            return message
+    try:
+        # OpenAI <=0.x
+        from openai.error import BadRequestError as _LegacyBadRequestError  # type: ignore
+    except Exception:
+        _LegacyBadRequestError = None
+    try:
+        # OpenAI >=1.x
+        from openai import BadRequestError as _BadRequestError  # type: ignore
+    except Exception:
+        _BadRequestError = None
+
+    if (_LegacyBadRequestError and isinstance(e, _LegacyBadRequestError)) or (
+        _BadRequestError and isinstance(e, _BadRequestError)
+    ):
+        # Safely extract message across versions
+        message = getattr(e, "message", None)
+        if not message:
+            message = getattr(getattr(e, "json_body", {}), "get", lambda *_: None)("error", {}).get("message", "")
+        return message or None
     return None
```

docs/docs/Integrations/Cleanlab/eval_and_remediate_cleanlab.json (1)
895-895: Default OpenAI model set to gpt-4.1; consider 4.1-mini for cost/latency. If this flow targets quick eval cycles, 4.1-mini can reduce latency and cost without major quality loss. Up to you.
```diff
-"value": "gpt-4.1"
+"value": "gpt-4.1-mini"
```

src/backend/base/langflow/base/agents/events.py (4)
37-37: Nit: return the string directly (drop the redundant f-string). No interpolation is needed.
Apply:
```diff
-return f"{final_input}"
+return final_input
```

Also, confirm no tests/UI logic assert on the previous "Input:" prefix.
88-121: Deduplicate `_extract_output_text` logic (two identical dict branches). Simplifies control flow and avoids repeated checks.
Apply:
```diff
@@
-def _extract_output_text(output: str | list) -> str:
-    if isinstance(output, str):
-        return output
-    if isinstance(output, list) and len(output) == 0:
-        return ""
-    if not isinstance(output, list) or len(output) != 1:
-        msg = f"Output is not a string or list of dictionaries with 'text' key: {output}"
-        raise TypeError(msg)
-
-    item = output[0]
-    if isinstance(item, str):
-        return item
-    if isinstance(item, dict):
-        if "text" in item:
-            return item["text"]
-        # If the item's type is "tool_use", return an empty string.
-        # This likely indicates that "tool_use" outputs are not meant to be displayed as text.
-        if item.get("type") == "tool_use":
-            return ""
-    if isinstance(item, dict):
-        if "text" in item:
-            return item["text"]
-        # If the item's type is "tool_use", return an empty string.
-        # This likely indicates that "tool_use" outputs are not meant to be displayed as text.
-        if item.get("type") == "tool_use":
-            return ""
-    # This is a workaround to deal with function calling by Anthropic
-    # since the same data comes in the tool_output we don't need to stream it here
-    # although it would be nice to
-    if "partial_json" in item:
-        return ""
-    msg = f"Output is not a string or list of dictionaries with 'text' key: {output}"
-    raise TypeError(msg)
+def _extract_output_text(output: str | list) -> str:
+    if isinstance(output, str):
+        return output
+    if isinstance(output, list) and len(output) == 0:
+        return ""
+    if not isinstance(output, list) or len(output) != 1:
+        msg = f"Output is not a string or list of dictionaries with 'text' key: {output}"
+        raise TypeError(msg)
+
+    item = output[0]
+    if isinstance(item, str):
+        return item
+    if isinstance(item, dict):
+        if "text" in item:
+            return item["text"]
+        if item.get("type") == "tool_use":
+            return ""
+        if "partial_json" in item:
+            return ""
+    msg = f"Output is not a string or list of dictionaries with 'text' key: {output}"
+    raise TypeError(msg)
```
279-287: Protocol type mismatch: `tool_blocks_map` should hold `ToolContent`, not `ContentBlock`. Aligns with handler implementations and usage.
Apply:
```diff
 class ToolEventHandler(Protocol):
     async def __call__(
         self,
         event: dict[str, Any],
         agent_message: Message,
-        tool_blocks_map: dict[str, ContentBlock],
+        tool_blocks_map: dict[str, ToolContent],
         send_message_method: SendMessageFunctionType,
         start_time: float,
     ) -> tuple[Message, float]: ...
```
199-229: Persist updated tool output immediately. After updating the `ToolContent`, send the message so the UI reflects the tool result without waiting for later events.
Apply:
```diff
 if updated_tool_content:
     updated_tool_content.duration = duration
     updated_tool_content.header = {"title": f"Executed **{updated_tool_content.name}**", "icon": "Hammer"}
     updated_tool_content.output = event["data"].get("output")
     # Update the map reference
     tool_blocks_map[tool_key] = updated_tool_content
-
-return agent_message, new_start_time
+    # Persist changes
+    agent_message = await send_message_method(message=agent_message)
+    new_start_time = perf_counter()
+
+return agent_message, new_start_time
```

src/backend/base/langflow/components/docling/docling_remote.py (2)
139-146: Retry loop should back off and cap attempts correctly
- Off-by-one on retry threshold (> allows 6 attempts when MAX_500_RETRIES=5).
- Add exponential backoff to avoid hammering the server.
Apply:
```diff
 if retry_status_start <= response.status_code < retry_status_end:
     http_failures += 1
-    if http_failures > self.MAX_500_RETRIES:
+    if http_failures >= self.MAX_500_RETRIES:
         self.log(f"The status requests got a http response {response.status_code} too many times.")
         return None
+    # simple exponential backoff with cap
+    time.sleep(min(2 ** http_failures, 30))
     continue
```
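The capped exponential schedule can be previewed in isolation (hypothetical helper; the 30-second cap mirrors the diff above):

```python
def backoff_delays(max_retries: int, cap: float = 30.0) -> list[float]:
    """Delay before each retry: 2**attempt seconds, capped (attempt starts at 1)."""
    return [min(2.0 ** attempt, cap) for attempt in range(1, max_retries + 1)]


print(backoff_delays(5))  # [2.0, 4.0, 8.0, 16.0, 30.0]
```

With `MAX_500_RETRIES = 5`, the sixth doubling (32s) is clamped to the cap, so total worst-case wait stays bounded at 60 seconds instead of growing geometrically.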
173-175: Consider explicit HTTP timeouts. Relying on httpx defaults can cause long hangs. Set a client-level timeout.
Apply:
```diff
-httpx.Client(headers=self.api_headers) as client,
+httpx.Client(headers=self.api_headers, timeout=httpx.Timeout(30.0)) as client,
```

src/backend/base/langflow/base/data/docling_utils.py (2)
13-20: Good: dedicated exception for dependency issues. Clear message and fields. Consider exporting it in the module's `__all__` if you plan to import it elsewhere.
214-242: Include an actionable `install_command` in the `dependency_error` payload. `docling_inline` tries to read `install_command`; currently it's missing, falling back to a generic message. Provide simple per-dependency commands.
Apply:
```diff
 except (OSError, ValueError, RuntimeError) as file_error:
     error_msg = str(file_error)
@@
-    if dependency_name:
-        queue.put(
-            {
-                "error": error_msg,
-                "error_type": "dependency_error",
-                "dependency_name": dependency_name,
-                "original_exception": type(file_error).__name__,
-            }
-        )
+    if dependency_name:
+        install_commands = {
+            "ocrmac": "uv pip install ocrmac",
+            "easyocr": "uv pip install easyocr",
+            "tesserocr": "uv pip install tesserocr",
+            "rapidocr": "uv pip install rapidocr",
+        }
+        queue.put(
+            {
+                "error": error_msg,
+                "error_type": "dependency_error",
+                "dependency_name": dependency_name,
+                "install_command": install_commands.get(dependency_name, "uv pip install 'langflow[docling]'"),
+                "original_exception": type(file_error).__name__,
+            }
+        )
     return
```

src/backend/base/langflow/base/models/anthropic_constants.py (1)
38-40: Do you want to hide deprecated models from `TOOL_CALLING_SUPPORTED_ANTHROPIC_MODELS` too? If this list feeds any UI or user-facing selectors, consider excluding deprecated models here as well.
Apply if desired:
```diff
-TOOL_CALLING_SUPPORTED_ANTHROPIC_MODELS = [
-    metadata["name"] for metadata in ANTHROPIC_MODELS_DETAILED if metadata.get("tool_calling", False)
-]
+TOOL_CALLING_SUPPORTED_ANTHROPIC_MODELS = [
+    metadata["name"]
+    for metadata in ANTHROPIC_MODELS_DETAILED
+    if metadata.get("tool_calling", False) and not metadata.get("deprecated", False)
+]
```

src/backend/base/langflow/components/input_output/chat_output.py (2)
165-170: Avoid mutating component state in `convert_to_string`; use a local flag and apply it consistently. This writes `self.clean_data` at runtime and only honors it for list inputs. Prefer a local variable and pass it to `safe_convert` for both list and single values.
```diff
 def convert_to_string(self) -> str | Generator[Any, None, None]:
     """Convert input data to string with proper error handling."""
     self._validate_input()
-    if isinstance(self.input_value, list):
-        self.clean_data = self.clean_data if hasattr(self, "clean_data") else False
-        return "\n".join([safe_convert(item, clean_data=self.clean_data) for item in self.input_value])
+    clean = getattr(self, "clean_data", False)
+    if isinstance(self.input_value, list):
+        return "\n".join([safe_convert(item, clean_data=clean) for item in self.input_value])
     if isinstance(self.input_value, Generator):
         return self.input_value
-    return safe_convert(self.input_value)
+    return safe_convert(self.input_value, clean_data=clean)
```
103-104: Unused variable from `get_properties_from_source_component`; mark it as intentionally unused. `icon` isn't used anymore; bind it to `_icon` to signal intent and avoid future lint warnings.
```diff
-source, icon, display_name, source_id = self.get_properties_from_source_component()
+source, _icon, display_name, source_id = self.get_properties_from_source_component()
```

src/backend/base/langflow/components/firecrawl/firecrawl_map_api.py (1)
21-21: Secret label consistency: LGTM; minor copy nit. The label change looks good. Small grammar tweak in the help text for polish.
```diff
-info="The API key to use Firecrawl API.",
+info="The API key to use the Firecrawl API.",
```

Also applies to: 24-25
pyproject.toml (1)
35-36: Duplicate/conflicting constraints for certifi. You have both a ranged and a pinned spec; keep just one to avoid confusing resolvers.
```diff
-"certifi>=2023.11.17,<2025.0.0",
-"certifi==2024.8.30",
+"certifi==2024.8.30",
```

src/backend/base/langflow/components/ibm/watsonx_embeddings.py (1)
110-113: Don't overwrite an existing model selection on refresh. The current logic resets `model_name.value` to the first option even when a value already exists; usually we only set a default when it's empty.
Apply:
```diff
-if build_config.model_name.value:
-    build_config.model_name.value = models[0]
+if not build_config.model_name.value:
+    build_config.model_name.value = models[0]
```

src/backend/base/langflow/components/ibm/watsonx.py (1)
162-166: Preserve the existing model selection when updating options. Avoid resetting the user's chosen model when refreshing the list.
Apply:
```diff
-if build_config.model_name.value:
-    build_config.model_name.value = models[0]
+if not build_config.model_name.value:
+    build_config.model_name.value = models[0]
```

src/backend/base/langflow/components/clickhouse/clickhouse.py (1)
29-30: Brand capitalization consistency. Use "ClickHouse Password" to match "ClickHouse" used elsewhere in this file.
Apply:
```diff
-SecretStrInput(name="password", display_name="Clickhouse Password", required=True),
+SecretStrInput(name="password", display_name="ClickHouse Password", required=True),
```

src/backend/base/langflow/components/datastax/astra_db.py (1)
35-36: Nit: adjust copy to match the chat memory domain (not vectors). Replace "where the vectors will be stored" with "where chat messages will be stored" to avoid confusion.
```diff
-info="The name of the collection within Astra DB where the vectors will be stored.",
+info="The name of the collection within Astra DB where chat messages will be stored.",
```

src/backend/base/langflow/components/processing/select_data.py (1)
41-44: Set status before raising on an out-of-range index.

Minor UX/observability improvement: mirror other components by updating `self.status` with the error before raising.

```diff
 if selected_index < 0 or selected_index >= len(self.data_list):
     msg = f"Selected index {selected_index} is out of range."
+    self.status = msg
     raise ValueError(msg)
```

src/backend/base/langflow/components/processing/extract_key.py (1)
13-13: Typo in public component name "ExtractaKey"

The public identifier is likely intended to be `ExtractKey`. This impacts UX and any programmatic references.

```diff
-name = "ExtractaKey"
+name = "ExtractKey"
```

src/backend/base/langflow/components/confluence/confluence.py (1)
31-35: Label tweak LGTM; minor copy nit for accuracy. The "Confluence API Key" label looks good. Consider updating the info text to Atlassian's terminology ("API token") for accuracy.
Apply this diff if you agree:
```diff
 SecretStrInput(
     name="api_key",
     display_name="Confluence API Key",
     required=True,
-    info="Atlassian Key. Create at: https://id.atlassian.com/manage-profile/security/api-tokens",
+    info="Atlassian API token. Create at: https://id.atlassian.com/manage-profile/security/api-tokens",
 ),
```

src/backend/base/langflow/components/embeddings/similarity.py (1)
15-15: Replacement metadata addition looks good; consider hardening the similarity calc. Mapping to ["datastax.AstraDB"] is fine. While here, consider guarding cosine against zero-norm vectors and adding a fallback for unknown metrics.
Apply this diff to harden compute_similarity:
```diff
@@
 if similarity_metric == "Cosine Similarity":
-    score = np.dot(embedding_1, embedding_2) / (np.linalg.norm(embedding_1) * np.linalg.norm(embedding_2))
-    similarity_score = {"cosine_similarity": score}
+    denom = np.linalg.norm(embedding_1) * np.linalg.norm(embedding_2)
+    if denom == 0:
+        similarity_score = {"error": "Cannot compute cosine similarity with zero-norm vector."}
+    else:
+        score = float(np.dot(embedding_1, embedding_2) / denom)
+        similarity_score = {"cosine_similarity": score}
@@
 elif similarity_metric == "Euclidean Distance":
-    score = np.linalg.norm(embedding_1 - embedding_2)
+    score = float(np.linalg.norm(embedding_1 - embedding_2))
     similarity_score = {"euclidean_distance": score}
@@
 elif similarity_metric == "Manhattan Distance":
-    score = np.sum(np.abs(embedding_1 - embedding_2))
+    score = float(np.sum(np.abs(embedding_1 - embedding_2)))
     similarity_score = {"manhattan_distance": score}
+else:
+    raise ValueError(f"Unknown similarity metric: {similarity_metric}")
```

src/backend/base/langflow/components/data/url.py (1)
215-215: Also filter empty/whitespace header values and empty keys. Filtering only None still allows "", " ", and empty keys, which some servers/proxies reject. Tighten the filter and strip keys.
Apply:
```diff
-headers_dict = {header["key"]: header["value"] for header in self.headers if header["value"] is not None}
+headers_dict = {
+    h["key"].strip(): h["value"]
+    for h in (self.headers or [])
+    if isinstance(h, dict)
+    and h.get("key")
+    and h["key"].strip()
+    and h.get("value") is not None
+    and (not isinstance(h["value"], str) or h["value"].strip())
+}
```

src/backend/base/langflow/components/google/google_generative_ai_embeddings.py (1)
73-76: Nit: avoid `.format` on a constant without placeholders

`MAX_DIMENSION_ERROR` has no `{}`; `.format(...)` is a no-op. Either add the dimension to the message or drop `.format`.

Example:

```diff
-error_msg = MAX_DIMENSION_ERROR.format(output_dimensionality)
+error_msg = f"Output dimensionality cannot exceed {MAX_DIMENSION}. Google's embedding models only support dimensions up to {MAX_DIMENSION}."
```

scripts/windows/build_and_run.ps1 (1)
90-90: Correct uv flag placement: LGTM; quote the path for spaces

Switching to `uv run --env-file …` is correct. Quote `$envPath` to handle spaces.

```diff
-& uv run --env-file $envPath langflow run
+& uv run --env-file "$envPath" langflow run
```

src/backend/base/langflow/components/processing/parse_json_data.py (1)
87-91: Nit: logger usage should format the message, not pass multiple args. The current calls won't render objects as expected with standard logging.
Apply:
```diff
-logger.info("to_filter: ", to_filter)
+logger.info("to_filter: %s", to_filter)
 ...
-logger.info("results: ", results)
+logger.info("results: %s", results)
```

src/backend/base/langflow/components/datastax/__init__.py (1)
22-33: Optional: keep the dynamic import map alphabetized

For maintainability, consider alphabetizing `_dynamic_imports` keys (and the matching `__all__`) per the components `__init__` guideline.

src/backend/base/langflow/api/build.py (1)
342-347: Nit: avoid shadowing timedelta

The variable `timedelta` shadows the type name; rename it to `elapsed` for clarity.

```diff
-timedelta = time.perf_counter() - start_time
-duration = format_elapsed_time(timedelta)
-result_data_response.duration = duration
-result_data_response.timedelta = timedelta
-vertex.add_build_time(timedelta)
+elapsed = time.perf_counter() - start_time
+duration = format_elapsed_time(elapsed)
+result_data_response.duration = duration
+result_data_response.timedelta = elapsed
+vertex.add_build_time(elapsed)
```

src/backend/base/langflow/components/data/api_request.py (1)
217-230: Early return from `_process_list_body` can drop subsequent items. Returning the first unwrapped dict short-circuits processing and ignores the rest of the list; it also skips JSON value parsing. Prefer merging dicts, or only short-circuit when the list has a single item.
Consider:
```diff
-if isinstance(unwrapped_data, dict) and not self._is_valid_key_value_item(unwrapped_data):
-    return unwrapped_data
-current_item = unwrapped_data
+if isinstance(unwrapped_data, dict) and not self._is_valid_key_value_item(unwrapped_data):
+    # If list holds exactly one dict, use it as-is; otherwise merge it
+    if len(body) == 1:
+        return self._process_dict_body(unwrapped_data)
+    processed_dict.update(self._process_dict_body(unwrapped_data))
+    continue
+current_item = unwrapped_data
```

src/backend/base/langflow/components/processing/structured_output.py (1)
200-206: DataFrame construction: LGTM; drop the unreachable fallback. The last return is unreachable after the preceding branches.
```diff
 if len(output) == 1:
     # For single dictionary, wrap in a list to create DataFrame with one row
     return DataFrame([output[0]])
 if len(output) > 1:
     # Multiple outputs - convert to DataFrame directly
     return DataFrame(output)
-return DataFrame()
```

src/backend/base/langflow/api/v1/auth_helpers.py (1)
56-61: Consider extracting a masked-value constant.

The hardcoded string `"*******"` for masked secrets appears multiple times in the codebase. Consider defining it as a constant for better maintainability.

Add a constant at the module level:
```diff
+MASKED_SECRET_VALUE = "*******"
+
 def handle_auth_settings_update(
```

Then use it:
```diff
-if field in auth_dict and auth_dict[field] == "*******" and field in decrypted_current:
+if field in auth_dict and auth_dict[field] == MASKED_SECRET_VALUE and field in decrypted_current:
```

src/backend/base/langflow/components/knowledge_bases/retrieval.py (1)
223-237: Optimize the embedding retrieval query.

When `include_embeddings` is false, the code still fetches embeddings from the database at line 232 but doesn't use them. Consider conditionally including embeddings in the query.

```diff
-embeddings_result = collection.get(where={"_id": {"$in": doc_ids}}, include=["metadatas", "embeddings"])
+include_fields = ["metadatas"]
+if self.include_embeddings:
+    include_fields.append("embeddings")
+embeddings_result = collection.get(where={"_id": {"$in": doc_ids}}, include=include_fields)
```

src/backend/base/langflow/components/knowledge_bases/ingestion.py (2)
539-540: Consider adding error handling for empty input. The code handles list input but doesn't validate whether the input is empty or None before conversion.
```diff
+if not self.input_df:
+    msg = "Input data is required for knowledge ingestion"
+    raise ValueError(msg)
 input_value = self.input_df[0] if isinstance(self.input_df, list) else self.input_df
 df_source: DataFrame = convert_to_dataframe(input_value)
```
594-596: Improve the error message with more context. The generic error message could be more helpful by including the knowledge base name.
```diff
-msg = f"Error during KB ingestion: {e}"
+msg = f"Error during ingestion for knowledge base '{self.knowledge_base}': {e}"
 raise RuntimeError(msg) from e
```

src/backend/base/langflow/components/knowledge_bases/__init__.py (1)
20-20: Docstring nit: clarify scope. Consider "Lazily import knowledge base components on attribute access." for precision.
```diff
-"""Lazily import input/output components on attribute access."""
+"""Lazily import knowledge base components on attribute access."""
```

src/backend/base/langflow/components/agents/agent.py (2)
258-273: Make the JSON extraction regex non-greedy to avoid over-capturing. The greedy r"\{.*\}" can swallow multiple JSON blocks or trailing braces.
```diff
-json_pattern = r"\{.*\}"
+json_pattern = r"\{.*?\}"
```
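The greedy/non-greedy difference is easy to demonstrate (the sample text below is hypothetical):

```python
import re

text = 'prefix {"a": 1} middle {"b": 2} suffix'

# Greedy: matches from the first '{' to the LAST '}' in the string.
greedy = re.search(r"\{.*\}", text).group(0)

# Non-greedy: stops at the first '}' after the opening brace.
lazy = re.search(r"\{.*?\}", text).group(0)

print(greedy)  # {"a": 1} middle {"b": 2}
print(lazy)    # {"a": 1}
```

With the greedy pattern, feeding the match to `json.loads` would fail whenever two JSON objects appear in one response; the non-greedy form extracts the first object cleanly.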
527-554: Custom “connect_other_models” branch: confirm input_types and placeholder.
- input_types=["LanguageModel"] on a DropdownInput is atypical; if the UI treats this as a handle, consider keeping input_types=[] to avoid confusion.
- Placeholder is helpful.
```diff
-input_types=["LanguageModel"],
+input_types=[],
```

src/backend/base/langflow/api/v1/projects.py (3)
80-90: Auto-enable API key when AUTO_LOGIN=false: good default.

A solid safety default. Minor: `new_project.id` is None before commit; logging that ID is harmless but can be confusing.

```diff
-await logger.adebug(
-    f"Auto-enabled API key authentication for project {new_project.name} "
-    f"({new_project.id}) due to AUTO_LOGIN=false"
-)
+await logger.adebug(
+    f"Auto-enabled API key authentication for project {new_project.name} due to AUTO_LOGIN=false"
+)
```
247-267: Guard composer actions by the global setting. Add a check to skip scheduling/stop when MCP Composer is disabled.
```diff
-if should_start_mcp_composer:
+if should_start_mcp_composer and get_settings_service().settings.mcp_composer_enabled:
     ...
-elif should_stop_mcp_composer:
+elif should_stop_mcp_composer and get_settings_service().settings.mcp_composer_enabled:
     ...
```
272-273: Avoid extra DB churn when moving flows/components.

Currently `excluded_flows` ignores components and moves them out, then moves them back. Compute the exclusion against both sets to reduce two-step writes.

```diff
-excluded_flows = list(set(flows_ids) - set(project.flows))
+excluded_flows = list(set(flows_ids) - set(project.flows) - set(project.components))
```

src/backend/base/langflow/api/v1/mcp_projects.py (5)
421-483: Composer orchestration response: OK; ensure the non-Composer fallback works.

The flow is solid. Recommend including `uses_composer` in all branches for consistency.

```diff
-response: dict[str, Any] = {
-    "message": f"Updated MCP settings for {len(updated_flows)} flows and project auth settings"
-}
+response: dict[str, Any] = {
+    "message": f"Updated MCP settings for {len(updated_flows)} flows and project auth settings",
+    "uses_composer": should_use_mcp_composer(project),
+}
```
795-803: Supported clients list: fine. Optionally derive it from a constant to avoid drift across endpoints.
852-852: Consistency note with remover.
`config_contains_sse_url` uses containment; align `remove_server_by_sse_url` accordingly (see above).
863-868: Direct SSE URL builder: consider basePath support.

Behind reverse proxies, installed configs may require a base path prefix. Consider adding an optional settings-based `base_path` and prepending it here (and document it).
1000-1023: Docstring mismatch: exception type.Docstring says HTTPException, code raises ValueError. Align docs or raise HTTPException.
- Raises: - HTTPException: If MCP Composer is not enabled or auth config is missing + Raises: + ValueError: If auth config is missing
```
# Default: false
LANGFLOW_MCP_COMPOSER_ENABLED=
```
💡 Verification agent
🧩 Analysis chain
ENV var rename: ensure code and docs are fully migrated; consider a compatibility alias
LANGFLOW_FEATURE_MCP_COMPOSER → LANGFLOW_MCP_COMPOSER_ENABLED looks good. Please verify no remaining references to the old variable exist and that loaders optionally accept the old name for one release cycle to avoid breaking deployments.
Also, since the PR goal is to fix MCP SSE behind reverse proxies, validate that the SSE transport path was switched to a relative base ("./"):
🏁 Script executed:
```shell
#!/bin/bash
# Find old/new env var references
rg -nP --glob '!**/node_modules/**' 'LANGFLOW_FEATURE_MCP_COMPOSER|LANGFLOW_MCP_COMPOSER_ENABLED'
```

Length of output: 148
🏁 Script executed:
```shell
#!/bin/bash
rg -nP --glob '!**/node_modules/**' 'SseServerTransport\s*\(' -C2
```

Length of output: 148
🏁 Script executed:
```shell
#!/bin/bash
# Search all files (including those normally ignored) for old and new env var references
rg -nP --no-ignore 'LANGFLOW_FEATURE_MCP_COMPOSER\b|LANGFLOW_MCP_COMPOSER_ENABLED\b'
# Verify SSE transport path usage
rg -nP --no-ignore 'SseServerTransport\s*\(' -C2
```

Length of output: 1042
Add backwards-compatibility alias and unify SSE transport path
- No occurrences of `LANGFLOW_FEATURE_MCP_COMPOSER` remain; modify the env loader to fall back to the old name for one release cycle (e.g. `env.LANGFLOW_MCP_COMPOSER_ENABLED ?? env.LANGFLOW_FEATURE_MCP_COMPOSER`).
- In `src/backend/base/langflow/api/v1/mcp.py` (line 66), change `SseServerTransport("/api/v1/mcp/")` to use a relative base `"./"` to ensure SSE works behind reverse proxies.
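A minimal sketch of such a fallback, assuming a boolean-style environment variable; the reader function is hypothetical and the real Langflow settings loader may differ:

```python
import os

# Sketch of a one-release compatibility fallback. The variable names come
# from the review; this reader function itself is hypothetical, not the
# actual Langflow settings loader.
def mcp_composer_enabled(env=None) -> bool:
    env = dict(os.environ) if env is None else env
    raw = env.get("LANGFLOW_MCP_COMPOSER_ENABLED")
    if raw is None:
        # Fall back to the pre-rename variable for one release cycle.
        raw = env.get("LANGFLOW_FEATURE_MCP_COMPOSER", "false")
    return raw.strip().lower() in {"1", "true", "yes", "on"}
```

Deployments that still export the old variable keep working, while the new name takes precedence whenever both are set.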
```diff
-*.mcp.json
+*.mcp.json
+member_servers.json
```
💡 Verification agent
🧩 Analysis chain
Ignore member_servers.json — good; consider dropping the no-op change
- Adding `member_servers.json` is appropriate for ignoring generated MCP member server state.
- The `*.mcp.json` line change is a no-op; consider reverting to avoid diff noise.
Separately, since the PR title mentions the MCP SSE path fix, please confirm the actual change from absolute to relative path exists:
🏁 Script executed:
```shell
#!/bin/bash
# Locate SseServerTransport invocations and ensure no absolute /api paths remain
rg -nP -C2 '\bSseServerTransport\s*\('
rg -nP '\bSseServerTransport\s*\(\s*[\'"]\/api\/' || echo "No absolute /api paths found in SseServerTransport args."
# Spot the project transport assignment (expected to be "./")
fd mcp_projects.py -a
rg -nP -C2 '\bproject_sse_transports\[[^\]]+\]\s*=\s*SseServerTransport\(' $(fd mcp_projects.py -a 2>/dev/null)
```

Length of output: 811
Revert no-op *.mcp.json, keep member_servers.json, and convert remaining absolute SSE path to relative
- .gitignore: addition of `member_servers.json` is correct; remove the unchanged `*.mcp.json` entry to avoid diff noise.
- In src/backend/base/langflow/api/v1/mcp.py (line 66), change `sse = SseServerTransport("/api/v1/mcp/")` to use a relative path (e.g. `"./"`), matching the assignment in mcp_projects.py.
🤖 Prompt for AI Agents
.gitignore lines 280-282 and src/backend/base/langflow/api/v1/mcp.py line 66:
remove the no-op glob entry "*.mcp.json" from .gitignore while keeping the new
member_servers.json entry to avoid noisy diffs, and in
src/backend/base/langflow/api/v1/mcp.py change the SseServerTransport path from
an absolute "/api/v1/mcp/" to a relative path (e.g. "./" or the same relative
value used in mcp_projects.py) so the SSE transport uses a relative endpoint
consistent with the project.
```diff
 ---
-title: Enterprise support for Langflow
+title: IBM Elite Support for Langflow
 slug: /luna-for-langflow
 ---

-With **Luna for Langflow** support, you can develop and deploy Langflow applications with confidence.
+IBM Elite Support offers enterprise support for Langflow to help you develop and deploy Langflow applications with confidence.

-Luna is a subscription to the Langflow expertise at DataStax. It's meant for Langflow users who want all the benefits of running their own open-source deployments, as well as the peace of mind that comes with having direct access to the team that has authored the majority of the Langflow code.
+IBM Elite Support for Langflow is a subscription to the Langflow expertise at IBM.
+It's meant for Langflow users who want all the benefits of running their own open-source deployments, as well as the peace of mind that comes with having direct access to the team that has authored the majority of the Langflow code.

-Luna subscribers can get help with general-purpose and technical questions for their open-source Langflow deployments.
-If an issue is encountered, DataStax is there to help.
+IBM Elite Support subscribers can get help with general-purpose and technical questions for their open-source Langflow deployments.
+If an issue is encountered, IBM is there to help.

 :::info
 As of May 2025, Luna for Langflow support covers Langflow versions 1.4.x.

 Subscribers must run a supported Python version to receive support.
 Supported versions are `>=3.10, <3.14`, which includes all versions from 3.10 through 3.13.x, but not 3.14.
 :::

-To subscribe or learn more, see [Luna for Langflow](https://www.datastax.com/products/luna-langflow).
+To subscribe or learn more, see [IBM Elite Support for Langflow](https://ibm.com/docs/esfl?topic=elite-support-langflow-specification).
```
💡 Verification agent
🧩 Analysis chain
Add required description to front matter; review slug branding
Docs require title and description in front matter. Also confirm keeping slug: /luna-for-langflow is intentional with the IBM branding.
```diff
 ---
 title: IBM Elite Support for Langflow
+description: Learn how to get enterprise-grade IBM Elite Support for your open-source Langflow deployments.
 slug: /luna-for-langflow
 ---
```

If you intend to change the slug (e.g., /ibm-elite-support), add redirects and update internal links.
🏁 Script executed:
```shell
#!/bin/bash
# Search for existing description fields in MDX front matter
rg -nP '^description:' -g 'docs/**/*.mdx'
# Identify MDX files missing a description field
printf "\nMDX files missing description front matter:\n"
rg -L '^description:' -g 'docs/**/*.mdx'
# List slug values in Support docs for consistency check
rg -nP '^slug:' -g 'docs/docs/Support/*.mdx'
```

Length of output: 454
Add missing description to front matter
Add a description field in docs/docs/Support/luna-for-langflow.mdx:
```diff
 ---
 title: IBM Elite Support for Langflow
+description: Learn how to get enterprise-grade IBM Elite Support for your open-source Langflow deployments.
 slug: /luna-for-langflow
 ---
```

The slug /luna-for-langflow is consistent with other Support pages and requires no change.
🤖 Prompt for AI Agents
In docs/docs/Support/luna-for-langflow.mdx around lines 1 to 14, the front
matter is missing a description field; add a description entry to the YAML front
matter (e.g., description: "Short one-line summary of IBM Elite Support for
Langflow") so the page metadata matches other Support pages and improves
SEO/preview text, keeping the slug unchanged.
```python
# Capture both inactivated and conditionally excluded vertices
inactivated_vertices = list(graph.inactivated_vertices.union(graph.conditionally_excluded_vertices))
graph.reset_inactivated_vertices()
graph.reset_activated_vertices()

# Note: Do not reset conditionally_excluded_vertices each iteration
# This is handled by the ConditionalRouter component
```
💡 Verification agent
🧩 Analysis chain
Do not reactivate inactivated vertices; fix reset semantics upstream
You now surface inactivated_vertices = inactivated ∪ conditionally_excluded, which is good. But graph.reset_inactivated_vertices() (see graph/base.py snippet) currently marks vertices ACTIVE, which can break ConditionalRouter/IF-ELSE branch control. The reset should only clear the tracking set.
Proposed fix in src/backend/base/langflow/graph/graph/base.py:
```diff
-    def reset_inactivated_vertices(self) -> None:
-        """Resets the inactivated vertices in the graph."""
-        for vertex_id in self.inactivated_vertices.copy():
-            self.mark_vertex(vertex_id, "ACTIVE")
-        self.inactivated_vertices = set()
-        self.inactivated_vertices = set()
+    def reset_inactivated_vertices(self) -> None:
+        """Clears tracking of inactivated vertices without changing their state."""
+        self.inactivated_vertices = set()
```

To confirm current behavior:
🏁 Script executed:
```shell
#!/bin/bash
rg -nP 'def\s+reset_inactivated_vertices\(' -C3 src/backend/base/langflow/graph/graph/base.py
```

Length of output: 374
🏁 Script executed:
```shell
#!/bin/bash
sed -n '893,910p' src/backend/base/langflow/graph/graph/base.py
```

Length of output: 767
Fix reset_inactivated_vertices to only clear tracking set, not reactivate vertices
In src/backend/base/langflow/graph/graph/base.py, reset_inactivated_vertices currently calls mark_vertex(..., "ACTIVE") for each ID, then clears the set—this reactivation breaks ConditionalRouter/IF-ELSE flow control. Update the method to only clear inactivated_vertices:
```diff
-    def reset_inactivated_vertices(self) -> None:
-        """Resets the inactivated vertices in the graph."""
-        for vertex_id in self.inactivated_vertices.copy():
-            self.mark_vertex(vertex_id, "ACTIVE")
-        self.inactivated_vertices = set()
-        self.inactivated_vertices = set()
+    def reset_inactivated_vertices(self) -> None:
+        """Clears tracking of inactivated vertices without changing their state."""
+        self.inactivated_vertices = set()
```
🤖 Prompt for AI Agents
In src/backend/base/langflow/graph/graph/base.py around lines 347 to 354,
reset_inactivated_vertices currently reactivates vertices by calling
mark_vertex(..., "ACTIVE") for each id before clearing the set; change it to
only clear the inactivated_vertices tracking set (remove the loop that calls
mark_vertex) so vertices are not reactivated; simply reset/clear the
inactivated_vertices container and return, leaving activation state management
to ConditionalRouter/IF-ELSE logic.
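The behavioral difference can be illustrated with a toy graph class; the method names mirror the review, but the implementation is illustrative only, not the real Langflow `Graph`:

```python
# Toy sketch contrasting the two reset semantics; names mirror the review,
# but this is not the actual Langflow Graph implementation.
class Graph:
    def __init__(self) -> None:
        self.states: dict[str, str] = {}
        self.inactivated_vertices: set[str] = set()

    def mark_vertex(self, vertex_id: str, state: str) -> None:
        self.states[vertex_id] = state

    def inactivate(self, vertex_id: str) -> None:
        # e.g. a ConditionalRouter excluding its false branch
        self.mark_vertex(vertex_id, "INACTIVE")
        self.inactivated_vertices.add(vertex_id)

    def reset_inactivated_vertices(self) -> None:
        # Fixed semantics: clear only the tracking set, leaving each vertex's
        # state untouched so branch exclusion survives the per-iteration reset.
        self.inactivated_vertices = set()

g = Graph()
g.inactivate("if_else_false_branch")
g.reset_inactivated_vertices()
print(g.states["if_else_false_branch"])  # stays INACTIVE
```

Under the old semantics the loop would have flipped the vertex back to ACTIVE here, reviving the branch the router had just excluded.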
```python
# For non-OAuth (API key or no auth), use mcp-proxy
sse_url = await get_project_sse_url(project_id)
command = "uvx"
args = ["mcp-proxy"]
# Check if we need to add Langflow API key headers
# Necessary only when Project API Key Authentication is enabled

# If we're in WSL and the host is localhost, we might need to adjust the URL
# so Windows applications can reach the WSL service
if host in {"localhost", "127.0.0.1"}:
    try:
        # Try to get the WSL IP address for host.docker.internal or similar access
        # Generate a Langflow API key for auto-install if needed
        # Only add API key headers for projects with "apikey" auth type (not "none" or OAuth)

        # This might vary depending on WSL version and configuration
        proc = await create_subprocess_exec(
            "/usr/bin/hostname",
            "-I",
            stdout=asyncio.subprocess.PIPE,
            stderr=asyncio.subprocess.PIPE,
        )
        stdout, _ = await proc.communicate()
        if should_generate_api_key:
            async with session_scope() as api_key_session:
                api_key_create = ApiKeyCreate(name=f"MCP Server {project.name}")
                api_key_response = await create_api_key(api_key_session, api_key_create, current_user.id)
                langflow_api_key = api_key_response.api_key
                args.extend(["--headers", "x-api-key", langflow_api_key])

        if proc.returncode == 0 and stdout.strip():
            wsl_ip = stdout.decode().strip().split()[0]  # Get first IP address
            await logger.adebug("Using WSL IP for external access: %s", wsl_ip)
            # Replace the localhost with the WSL IP in the URL
            sse_url = sse_url.replace(f"http://{host}:{port}", f"http://{wsl_ip}:{port}")
    except OSError as e:
        await logger.awarning("Failed to get WSL IP address: %s. Using default URL.", str(e))

# Base args
args = ["mcp-composer"] if FEATURE_FLAGS.mcp_composer else ["mcp-proxy"]

# Add authentication args based on MCP_COMPOSER feature flag and auth settings
if not FEATURE_FLAGS.mcp_composer:
    # When MCP_COMPOSER is disabled, only use headers format if API key was generated
    # (when autologin is disabled)
    if generated_api_key:
        args.extend(["--headers", "x-api-key", generated_api_key])
elif project.auth_settings:
    # Decrypt sensitive fields before using them
    decrypted_settings = decrypt_auth_settings(project.auth_settings)
    auth_settings = AuthSettings(**decrypted_settings) if decrypted_settings else AuthSettings()
    args.extend(["--auth_type", auth_settings.auth_type])

    # When MCP_COMPOSER is enabled, only add headers if auth_type is "apikey"
    auth_settings = AuthSettings(**project.auth_settings)
    if auth_settings.auth_type == "apikey" and generated_api_key:
        args.extend(["--headers", "x-api-key", generated_api_key])
# If no auth_settings or auth_type is "none", don't add any auth headers

# Add the SSE URL
if FEATURE_FLAGS.mcp_composer:
    args.extend(["--sse-url", sse_url])
else:
    # Add the SSE URL for mcp-proxy
    args.append(sse_url)
```
Generate API key: message construction bug and name mismatch.
generated_api_key is never set; final success message won’t include auth info. Also the displayed key name differs from the created name.
```diff
     if should_generate_api_key:
         async with session_scope() as api_key_session:
             api_key_create = ApiKeyCreate(name=f"MCP Server {project.name}")
             api_key_response = await create_api_key(api_key_session, api_key_create, current_user.id)
-            langflow_api_key = api_key_response.api_key
-            args.extend(["--headers", "x-api-key", langflow_api_key])
+            generated_api_key = api_key_response.api_key
+            args.extend(["--headers", "x-api-key", generated_api_key])
     ...
-    if generated_api_key:
-        auth_type = "API key" if get_settings_service().settings.mcp_composer_enabled else "legacy API key"
-        message += f" with {auth_type} authentication (key name: 'MCP Project {project.name} - {body.client}')"
+    if generated_api_key:
+        auth_type = "API key" if get_settings_service().settings.mcp_composer_enabled else "legacy API key"
+        message += f" with {auth_type} authentication (key name: 'MCP Server {project.name}')"
```

Alternatively, change ApiKeyCreate.name to match the message if you prefer that format.
🤖 Prompt for AI Agents
In src/backend/base/langflow/api/v1/mcp_projects.py around lines 621 to 639, the
code creates an API key but never assigns it to generated_api_key and the
ApiKeyCreate.name value doesn't match the later success message; set
generated_api_key = langflow_api_key after creating the key so the response
message includes the actual key, and either change ApiKeyCreate(name=...) to use
the same display name used in the success message or update the success message
to reference the actual name used when creating the key so names are consistent.
```diff
 class SubFlowComponent(Component):
-    display_name = "Sub Flow [Deprecated]"
+    display_name = "Sub Flow"
```
💡 Verification agent
🧩 Analysis chain
Undeprecation label change — confirm docs/UI alignment
Dropping “[Deprecated]” is fine; ensure docs and deprecation banners/tooltips are updated accordingly.
🏁 Script executed:
```shell
#!/bin/bash
rg -n "Sub Flow \\[Deprecated\\]|SubFlowComponent" -C2 docs/ src/
```

Length of output: 2993
🏁 Script executed:
```shell
#!/bin/bash
find docs -type f -maxdepth 2
rg -n "Sub Flow" -C3 docs
```

Length of output: 931
Remove deprecated label from Sub Flow docs and UI
In docs/docs/Components/components-logic.mdx at line 273, change <summary>Sub Flow (deprecated)</summary> to <summary>Sub Flow</summary> and remove any deprecation callouts or banners/tooltips for this component in the UI.
🤖 Prompt for AI Agents
In src/backend/base/langflow/components/logic/sub_flow.py around line 15 and
docs/docs/Components/components-logic.mdx at line 273, remove the "(deprecated)"
label: ensure the component display_name stays "Sub Flow" (no deprecated tag)
and update the MDX summary from "<summary>Sub Flow (deprecated)</summary>" to
"<summary>Sub Flow</summary>"; also search the UI codebase for any deprecation
banners, tooltips, callouts, or metadata flags related to Sub Flow and remove or
clear them so the component is no longer presented as deprecated in the UI.
```python
description = "Generates a Component from a Flow, with all of its inputs, and "
name = "SubFlow"
legacy: bool = True
replacement = ["logic.RunFlow"]
```
💡 Verification agent
🧩 Analysis chain
Replacement mapping — verify target component exists
Validate that logic.RunFlow is present and exported so replacement works at runtime.
🏁 Script executed:
```shell
#!/bin/bash
rg -nP 'class\s+RunFlow\b|["'\'']logic\.RunFlow["'\'']' src/backend
```

Length of output: 188
Define or export logic.RunFlow
The replacement mapping in sub_flow.py line 19 and flow_tool.py line 23 refers to logic.RunFlow, but no corresponding class or function is defined or exported in the logic package. Add or export the RunFlow component to prevent runtime errors.
🤖 Prompt for AI Agents
In src/backend/base/langflow/components/logic/sub_flow.py around line 19 (and
noting flow_tool.py line 23), the replacement mapping references logic.RunFlow
but no such class/function is defined or exported; to fix, add a RunFlow
component implementation inside the logic package (or ensure an existing
implementation is exported) and export it from logic/__init__.py (or the
appropriate module) so logic.RunFlow is importable; update any imports in
flow_tool.py/sub_flow.py to import logic.RunFlow from the module where you
implemented it and run tests to confirm no import errors remain.
```python
icon = "table"
name = "DataToDataFrame"
legacy = True
replacement = ["processing.DataOperations", "processing.TypeConverterComponent"]
```
💡 Verification agent
🧩 Analysis chain
New replacement mapping — verify targets exist
Mapping to ["processing.DataOperations", "processing.TypeConverterComponent"] reads fine. Please verify these targets are present in the repo/registry to avoid broken migration hints.
Run:
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Search for the replacement identifiers and candidate component names
rg -nP '"processing\.DataOperations"|"processing\.TypeConverterComponent"'
rg -nP 'name\s*=\s*"TypeConverterComponent"|name\s*=\s*"DataOperations"' --glob 'src/backend/**/components/**/*.py'
```

Length of output: 1726
Missing mapping target — TypeConverterComponent
In src/backend/base/langflow/components/processing/data_to_dataframe.py:17, processing.TypeConverterComponent isn’t defined in the repo; add this component or update the mapping to the correct existing name.
🤖 Prompt for AI Agents
In src/backend/base/langflow/components/processing/data_to_dataframe.py around
line 17, the mapping includes "processing.TypeConverterComponent" which does not
exist in the repository; either add a TypeConverterComponent implementation
under processing with the expected class/name, or update the mapping to the
actual existing component name (for example replace with the correct class path
like "processing.TypeConverter" or the real module/class used elsewhere). Ensure
the chosen name matches the exported class exactly and update any
import/registration points that reference this mapping so tests/imports resolve.
```python
beta = True
name = "FilterData"
legacy = True
replacement = ["processing.DataOperations"]
```
💡 Verification agent
🧩 Analysis chain
Replacement mapping added — verify target exists
replacement = ["processing.DataOperations"] looks fine. Please confirm the target component path is valid and recognized by the migration layer.
🏁 Script executed:
```shell
#!/bin/bash
rg -nP 'class\s+DataOperations\b|["'\'']processing\.DataOperations["'\'']' -C2
```

Length of output: 5092
Invalid replacement: processing.DataOperations not found
Multiple legacy components reference replacement = ["processing.DataOperations"], but there is no data_operations.py or class DataOperations under src/backend/base/langflow/components/processing. Add the DataOperations component or correct the replacement path.
🤖 Prompt for AI Agents
In src/backend/base/langflow/components/processing/filter_data.py around line
13, the replacement value points to "processing.DataOperations" which doesn't
exist; either create
src/backend/base/langflow/components/processing/data_operations.py exporting
class DataOperations that implements the expected interface (methods/attributes
used by legacy components) and ensure the module path/package init files allow
import as processing.DataOperations, or update the replacement string to the
correct existing module and class name used in this codebase and propagate that
corrected replacement to other legacy component references; also run/import
checks to confirm the new module path resolves without errors.
```python
replacement = ["processing.DataOperations"]
```
💡 Verification agent
❓ Verification inconclusive
Verify replacement target exists and is exported
Adding replacement = ["processing.DataOperations"] is fine; please confirm the target component exists and is discoverable by the loader.
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Does a class named DataOperations exist?
rg -nP '^\s*class\s+DataOperations\b' -C2
# Are there import/export paths or usages matching processing.DataOperations?
rg -nP 'processing\.DataOperations' -n -C2
```

Length of output: 0
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
# Search for DataOperations class definition
rg -nP '^\s*class\s+DataOperations\b' -C2 .
# Search for DataOperation class definition
rg -nP '^\s*class\s+DataOperation\b' -C2 .
# Search for any references to processing.DataOperations in code
rg -nP 'processing\.DataOperations' -n -C2 .
# List files in the processing components directory to locate possible modules
find src/backend/base/langflow/components/processing -maxdepth 1 -type f
```

Length of output: 0
DataOperations target missing
The replacement = ["processing.DataOperations"] entry references a component that isn’t defined or exported anywhere in the codebase. Ensure a DataOperations class or module exists under langflow/components/processing (and is imported/exported as processing.DataOperations), or update the replacement to the correct target.
🤖 Prompt for AI Agents
In src/backend/base/langflow/components/processing/update_data.py around lines
24-25, the replacement value references "processing.DataOperations" which
doesn't exist; either implement a DataOperations class/module under
src/backend/base/langflow/components/processing and export it so it is
importable as processing.DataOperations (add the class, proper methods, and
update the package __init__.py to expose it), or change the replacement string
to the correct existing target name; ensure the module path and export match
exactly the replacement value so imports resolve.
We're going to take care of merging release-1.6.0 into Main, and then you can cherry-pick your commits. Sorry for the mess, a lot of things changed on Main! |




This PR solves the issue by changing the hardcoded absolute path to a relative path ("./"). This lets the client build the URL for the SSE (Server-Sent Events) connection correctly, preserving any base path added by the reverse proxy. This guarantees that requests to the MCP SSE endpoints are routed properly and allows Langflow to function smoothly behind a reverse proxy.
The issue originated from:

```python
project_sse_transports[project_id_str] = SseServerTransport(f"/api/v1/mcp/project/{project_id_str}/")
```

Solution:

```python
project_sse_transports[project_id_str] = SseServerTransport("./")
```
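Why the relative path fixes the 404 can be shown with standard URL resolution: when the client resolves the transport's endpoint against the SSE URL it reached through the proxy, a relative base keeps the proxy's path prefix, while an absolute one discards it. The proxy host and `/langflow` prefix below are made-up examples:

```python
from urllib.parse import urljoin

# The SSE URL as seen by a client going through a reverse proxy that adds
# a "/langflow" base path (hypothetical host and prefix).
sse_url = "https://proxy.example.com/langflow/api/v1/mcp/project/123/sse"

# Absolute transport path: resolution drops the proxy prefix -> 404.
absolute = urljoin(sse_url, "/api/v1/mcp/project/123/")
# Relative transport path: resolution keeps the proxy prefix.
relative = urljoin(sse_url, "./")

print(absolute)  # https://proxy.example.com/api/v1/mcp/project/123/
print(relative)  # https://proxy.example.com/langflow/api/v1/mcp/project/123/
```

This is plain RFC 3986 relative-reference resolution, which is what SSE clients apply to the endpoint URL the server advertises.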
Summary by CodeRabbit