
Conversation

@ogabrielluiz (Contributor) commented Jul 8, 2025

Summary by CodeRabbit

  • New Features

    • Added platform-specific cURL code generation in the API modal, allowing users to select between Unix and PowerShell formats with appropriate syntax and environment variable checks.
    • Enhanced Language Model components to support both OpenAI chat and reasoning models, with dynamic UI updates for model selection and input visibility.
    • Improved export options for Docling documents, with real-time UI updates for format-specific configuration.
  • Bug Fixes

    • Refined error messages for missing document fields and improved error handling when required packages are missing.
    • Fixed issues with asynchronous helper execution in environments with active event loops.
  • Refactor

    • Updated internal imports and type annotations for better compatibility and error handling.
    • Switched to string-based tab selection and simplified UI logic in API modal code tabs.
  • Style

    • Adjusted CSS for API modal tabs to improve layout consistency.
  • Tests

    • Expanded and updated unit and integration tests for new model options, asynchronous helpers, and API code generation, including conditional test execution based on optional dependencies.
  • Chores

    • Updated version numbers for backend and frontend packages.
    • Upgraded and cleaned up dependencies in configuration files.

ogabrielluiz and others added 8 commits July 7, 2025 23:03
- Updated langflow version to 1.5.0 in pyproject.toml, package.json, and package-lock.json.
- Updated langflow-base dependency to version 0.5.0.
- Added platform markers for several dependencies in uv.lock to improve compatibility across different systems.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
…ailures (#8890)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
fix: fixes auth check for auto_login  (#8796)
* Add new openai reasoning models

* [autofix.ci] apply automated fixes

* Updates language model, but FE doesn't send a POST for updating template atm

* use chatopenai constants

* [autofix.ci] apply automated fixes

* Add reasoning to language model test

* Remove temp from all reasoning models

* [autofix.ci] apply automated fixes

* refactor: Update template notes (#8816)

* update templates

* small-changes

* template cleanup

---------

Co-authored-by: Mendon Kissling <[email protected]>

* ruff

* uv lock

* starter projects update

* [autofix.ci] apply automated fixes

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>
Co-authored-by: Mendon Kissling <[email protected]>
* chore: Bump version to 1.5.0 and update dependencies

- Updated langflow version to 1.5.0 in pyproject.toml, package.json, and package-lock.json.
- Updated langflow-base dependency to version 0.5.0.
- Added platform markers for several dependencies in uv.lock to improve compatibility across different systems.

* fix: fixes auth check for auto_login  (#8796)

* ref: improve docling template updates and error message (#8837)

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>

* Attempt to provide powershell curl command

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* Added OS selector to code tabs

* Added no select classes to API modal

* ✨ (code-tabs.tsx): add data-testid attribute to API tab elements for testing purposes
🔧 (tweaksTest.spec.ts, curlApiGeneration.spec.ts, pythonApiGeneration.spec.ts, generalBugs-shard-3.spec.ts): update test scripts to use data-testid attribute for API tab elements instead of role attribute

---------

Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Lucas Oliveira <[email protected]>
Co-authored-by: cristhianzl <[email protected]>
@dosubot dosubot bot added the "size:L" label (This PR changes 100-499 lines, ignoring generated files) Jul 8, 2025
coderabbitai bot (Contributor) commented Jul 8, 2025

Walkthrough

This update introduces platform-specific cURL code generation and UI in the API modal, expands OpenAI model support by distinguishing between chat and reasoning models across backend and starter project files, and refactors CrewAI component imports for safer optional dependency handling. Several UI and test adjustments accompany these changes, along with dependency and version updates in both backend and frontend.

Changes

| Files/Paths | Change Summary |
| --- | --- |
| .github/workflows/release.yml, pyproject.toml, src/backend/base/pyproject.toml, src/frontend/package.json | Updated workflow to inherit secrets; incremented project and dependency versions. |
| src/backend/base/langflow/base/agents/crewai/crew.py, src/backend/base/langflow/base/agents/crewai/tasks.py, src/backend/base/langflow/components/crewai/crewai.py, src/backend/base/langflow/components/crewai/hierarchical_crew.py, src/backend/base/langflow/components/crewai/hierarchical_task.py, src/backend/base/langflow/components/crewai/sequential_crew.py, src/backend/base/langflow/components/crewai/sequential_task.py, src/backend/base/langflow/components/crewai/sequential_task_agent.py | Refactored CrewAI-related components: moved imports inside functions with import guards (see the sketch after this table), removed explicit type annotations, added legacy = True attributes, and improved error messages for missing dependencies. |
| src/backend/base/langflow/base/data/docling_utils.py, src/backend/base/langflow/base/models/model.py | Improved error messages and simplified local variable usage. |
| src/backend/base/langflow/base/models/openai_constants.py, src/backend/base/langflow/utils/constants.py, src/backend/base/langflow/utils/util.py | Split OpenAI model constants into chat and reasoning categories; updated related model lists and mapping logic. |
| src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py, src/backend/base/langflow/components/models/language_model.py, src/backend/base/langflow/components/openai/openai_chat_model.py | Updated imports and logic to use new chat/reasoning model lists; added dynamic UI and parameter handling for reasoning models. |
| src/backend/base/langflow/components/data/url.py | Reformatted URL regex and changed logging from info to debug. |
| src/backend/base/langflow/components/docling/export_docling_document.py | Added update_build_config for dynamic UI based on export format. |
| src/backend/base/langflow/custom/custom_component/component.py | Made return type extraction robust to missing annotations. |
| src/backend/base/langflow/initial_setup/load.py | Removed unused CrewAI starter project graphs. |
| src/backend/base/langflow/initial_setup/starter_projects/*.json | Updated embedded LanguageModelComponent code: split model lists, added dynamic UI for system message, handled temperature for reasoning models, and adjusted related imports and options. |
| src/backend/base/langflow/services/auth/utils.py | Refined control flow in API key security logic. |
| src/backend/base/langflow/utils/async_helpers.py | Refactored run_until_complete to handle running event loops by using a thread pool executor (sketched after this table). |
| src/backend/tests/unit/api/v1/test_starter_projects.py | Improved test assertion error messages. |
| src/backend/tests/unit/components/agents/test_agent_component.py, src/backend/tests/unit/components/models/test_language_model_component.py | Updated tests to use new OpenAI model lists. |
| src/backend/tests/unit/custom/custom_component/test_component.py | Skipped CrewAI-dependent test if CrewAI is not installed. |
| src/backend/tests/unit/test_async_helpers.py | Added comprehensive tests for run_until_complete in various async/threading scenarios. |
| src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx, src/frontend/src/modals/apiModal/utils/get-curl-code.tsx | Refactored API modal tabs to use string keys; added platform-specific cURL code generation and UI; improved copy-to-clipboard logic. |
| src/frontend/src/modals/apiModal/index.tsx, src/frontend/src/style/applies.css | UI tweaks: made tweaks button unselectable and adjusted modal tab content spacing. |
| src/frontend/tests/core/features/tweaksTest.spec.ts, src/frontend/tests/extended/features/curlApiGeneration.spec.ts, src/frontend/tests/extended/features/pythonApiGeneration.spec.ts, src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts | Updated tests to use test ID selectors for API modal tabs. |

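The run_until_complete refactor follows a common pattern: detect whether an event loop is already running and, if so, execute the coroutine on a worker thread. A minimal sketch, assuming the helper takes a coroutine; the actual implementation in async_helpers.py may differ in detail:

import asyncio
from concurrent.futures import ThreadPoolExecutor

def run_until_complete(coro):
    """Run a coroutine to completion, even from inside a running event loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running in this thread: asyncio.run is safe.
        return asyncio.run(coro)
    # A loop is already running: run the coroutine in a fresh loop on a
    # worker thread and block until it finishes.
    with ThreadPoolExecutor(max_workers=1) as executor:
        return executor.submit(asyncio.run, coro).result()
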
Sequence Diagram(s)

sequenceDiagram
    participant User
    participant API_Modal_UI
    participant getNewCurlCode
    participant Clipboard

    User->>API_Modal_UI: Opens API modal
    API_Modal_UI->>API_Modal_UI: Detects OS (macOS/Linux or Windows)
    API_Modal_UI->>getNewCurlCode: Requests cURL code (with platform param)
    getNewCurlCode-->>API_Modal_UI: Returns platform-specific cURL command
    User->>API_Modal_UI: Selects/copies cURL code
    API_Modal_UI->>Clipboard: Copies selected code
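
The platform switch in the first diagram boils down to different line-continuation characters and environment-variable syntax. The real generator lives in get-curl-code.tsx; the Python sketch below only illustrates the two output shapes, and the endpoint URL, header name, and LANGFLOW_API_KEY variable are hypothetical:

import json

def build_curl(payload: dict, url: str, platform: str = "unix") -> str:
    """Illustrative only: Unix shells use backslash continuations and $VAR;
    PowerShell uses backtick continuations and $env:VAR."""
    if platform == "powershell":
        cont, api_key = "`", "$env:LANGFLOW_API_KEY"
    else:
        cont, api_key = "\\", "$LANGFLOW_API_KEY"
    body = json.dumps(payload)
    return (
        f"curl --request POST {cont}\n"
        f'  --url "{url}" {cont}\n'
        f'  --header "Content-Type: application/json" {cont}\n'
        f'  --header "x-api-key: {api_key}" {cont}\n'
        f"  --data '{body}'"
    )
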
sequenceDiagram
    participant User
    participant LanguageModelComponent
    participant UI

    User->>LanguageModelComponent: Selects OpenAI model
    LanguageModelComponent->>UI: Updates model dropdown (chat + reasoning models)
    LanguageModelComponent->>UI: Hides/shows system_message input (if model starts with "o1")
    LanguageModelComponent->>LanguageModelComponent: Sets temperature=None if reasoning model
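
Condensed, the second diagram describes behavior along these lines. Field names follow the snippets quoted in the review comments below; resolve_temperature is a hypothetical helper, and several of those comments question passing temperature=None at all:

OPENAI_REASONING_MODEL_NAMES = ["o1-preview", "o1-mini"]  # illustrative subset

def update_build_config(build_config: dict, field_value: str, field_name: str, provider: str) -> dict:
    if field_name == "model_name" and provider == "OpenAI":
        # o1-family reasoning models do not accept a system message, so hide the input.
        build_config["system_message"]["show"] = not field_value.startswith("o1")
    return build_config

def resolve_temperature(model_name: str, temperature: float) -> float | None:
    # The PR sets temperature to None for reasoning models; review comments
    # below suggest omitting the kwarg instead.
    return None if model_name in OPENAI_REASONING_MODEL_NAMES else temperature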

Possibly related PRs

  • langflow-ai/langflow#8923: Refactors CrewAI components with import guards and legacy flags, matching the same changes here.
  • langflow-ai/langflow#8786: Updates OpenAI model handling to split chat and reasoning models, similar to this PR's backend and UI changes.
  • langflow-ai/langflow#8889: Adds platform-specific cURL code/UI in the API modal, directly related to the cURL changes in this PR.

Suggested labels

enhancement, size:M, lgtm

Suggested reviewers

  • lucaseduoli
  • mfortman11


@coderabbitai coderabbitai bot changed the title from "@coderabbitai" to "feat: add platform-specific cURL generation and expand OpenAI model support" Jul 8, 2025
@github-actions github-actions bot added the "enhancement" label (New feature or request) Jul 8, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

🔭 Outside diff range comments (18)
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1)

1045-1068: Guard against None temperature being passed into ChatOpenAI.

ChatOpenAI expects float | None for temperature; however, some versions treat None as “parameter not supplied” and others raise a TypeError. Instead of always forwarding None, build the kwargs dynamically:

-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            kwargs = dict(
+                model_name=model_name,
+                streaming=stream,
+                openai_api_key=self.api_key,
+            )
+            if temperature is not None:  # chat-models only
+                kwargs["temperature"] = temperature
+            return ChatOpenAI(**kwargs)

This prevents unexpected runtime errors when the reasoning models are selected.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

2275-2366: update_build_config leaves system_message permanently hidden after switching away from o1 models

When the provider is changed (e.g., from OpenAI ➔ Anthropic) update_build_config updates the model_name dropdown but never re-enables the system_message field if it had previously been hidden by an o1 model selection.
Result: users of non-OpenAI providers can no longer set a system prompt.

Minimal fix inside the provider branch:

elif field_name == "provider":
     ...
+    # Always ensure system_message is visible when changing provider
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True

Add before the return at the end of that branch.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)

1375-1410: Avoid passing None for temperature; can break ChatOpenAI initialisation

ChatOpenAI (LangChain ≤ 0.2) expects a float; giving None will raise TypeError.
Instead of forcing temperature=None, omit the parameter when the model is in OPENAI_REASONING_MODEL_NAMES.

-            if model_name in OPENAI_REASONING_MODEL_NAMES:
-                # reasoning models do not support temperature (yet)
-                temperature = None
-
-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            openai_kwargs = {
+                "model_name": model_name,
+                "streaming": stream,
+                "openai_api_key": self.api_key,
+            }
+            # Reasoning models currently ignore temperature; only add when supported
+            if model_name not in OPENAI_REASONING_MODEL_NAMES:
+                openai_kwargs["temperature"] = temperature
+
+            return ChatOpenAI(**openai_kwargs)
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)

2563-2620: system_message can stay hidden when switching providers

update_build_config hides the system_message field when an OpenAI o1 model is selected, but never restores it when the user later switches to Anthropic/Google or a non-o1 OpenAI model via a provider change. The UI will therefore permanently lose that input until the page is refreshed.

@@
         if field_name == "provider":
+            # Ensure system_message is always visible when leaving OpenAI-o1 context
+            if field_value != "OpenAI" and "system_message" in build_config:
+                build_config["system_message"]["show"] = True
+
             if field_value == "OpenAI":
                 ...
-            elif field_value == "Anthropic":
+            elif field_value == "Anthropic":
                 ...
+                if "system_message" in build_config:
+                    build_config["system_message"]["show"] = True
             elif field_value == "Google":
                 ...
+                if "system_message" in build_config:
+                    build_config["system_message"]["show"] = True

This guarantees the field visibility is reset whenever the provider no longer imposes the o1 limitation.


2563-2590: Class body duplicated twice in the same flow

The identical LanguageModelComponent source string is embedded in two separate nodes. Keeping two diverging copies will inevitably drift. Extract a single component in src/backend/base/langflow/components/models/language_model.py and reference it from both nodes.

src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (2)

2626-2643: Potential temperature=None incompatibility with ChatOpenAI

ChatOpenAI (langchain_openai) expects temperature: float | None = 0.7 on its pydantic model.
Early builds of the provider accepted None, but recent releases validate the field as float ≥ 0. Passing None can therefore raise a ValidationError at runtime with the latest LangChain.

-            if model_name in OPENAI_REASONING_MODEL_NAMES:
-                # reasoning models do not support temperature (yet)
-                temperature = None
+            if model_name in OPENAI_REASONING_MODEL_NAMES:
+                # reasoning models ignore temperature – keep the slider visible
+                # but force-set a neutral value accepted by the SDK.
+                temperature = 0.0

A defensive 0.0 keeps API parity while effectively disabling randomness.
Consider also hiding / disabling the Temperature slider in the UI for reasoning models for clarity.


2660-2674: update_build_config misses “o1” visibility toggle after provider switch

When the user switches the provider back to OpenAI, the model_name field is repopulated but the system_message visibility is not re-evaluated. This leaves the field visible even though the default model may start with o1, contradicting the logic below.

Add a post-update sanity check:

             build_config["model_name"]["value"] = OPENAI_CHAT_MODEL_NAMES[0]
             build_config["api_key"]["display_name"] = "OpenAI API Key"
+            # Re-apply system_message visibility based on the new default
+            if OPENAI_CHAT_MODEL_NAMES[0].startswith("o1"):
+                build_config["system_message"]["show"] = False
+            else:
+                build_config["system_message"]["show"] = True

Applies symmetrically when provider is changed away from or back to OpenAI.

src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)

2068-2090: Starter-project template still ships only chat models – reasoning models won’t be selectable on first load

inputs[1] now correctly builds the runtime template with
options = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES, but the persisted template that ships with this starter-project (see "model_name".options a few lines below) is still the old hard-coded chat-only list.

When users open this project for the very first time (provider already = "OpenAI"), update_build_config will not be invoked, so the reasoning models are invisible until the user manually flips the provider away and back. That defeats the purpose of exposing those new models.

Either:

-          "options": ["gpt-4o-mini", "gpt-4o", ...]      # chat-only
+          "options": [],                                  # leave empty

and rely on update_build_config to populate the list on component initialisation, or regenerate the starter-project JSON after the code change so that both chat and reasoning models are already present.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

2060-2075: UI hiding logic can desynchronise when switching providers

update_build_config() hides system_message when model_name starts with "o1", but only if the current provider is OpenAI. If the user:

  1. Selects an "o1" model (field hidden), then
  2. Switches provider to Anthropic / Google,

system_message.show remains False, silently hiding a perfectly valid field.

Consider resetting the flag on every provider change:

if field_name == "provider":
     ...
+    # Always re-enable system_message when provider changes
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True
src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2)

1531-1587: update_build_config leaves system_message permanently hidden when the user changes provider

If the user selects an OpenAI o1-* model, system_message.show is set to False.
Later, if the user switches the provider (e.g. to Anthropic or Google), the flag is never reset because the provider branch does not touch system_message.

Result: for every non-OpenAI provider the System Message field may stay invisible, breaking the UI.

@@
-        if field_name == "provider":
+        if field_name == "provider":
             ...
+            # (Re)-enable system_message visibility when leaving OpenAI-o1 context
+            if "system_message" in build_config:
+                build_config["system_message"]["show"] = True

This one-liner guarantees consistent behaviour across provider switches.


1600-1642: Temperature slider still shown for reasoning models that ignore temperature

build_model overrides temperature = None for models in OPENAI_REASONING_MODEL_NAMES, but the Temperature slider remains visible and editable in the UI.

Consider hiding or disabling the slider when a reasoning model is selected to avoid confusion:

@@
 elif field_name == "model_name":
     ...
+    # Hide temperature control for reasoning models (no-op)
+    if field_value in OPENAI_REASONING_MODEL_NAMES:
+        build_config["temperature"]["show"] = False
+    else:
+        build_config["temperature"]["show"] = True
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (2)

1305-1340: Prefix check breaks system_message toggle for reasoning models

update_build_config hides the system_message input only when
field_value.startswith("o1"), but every entry in OPENAI_REASONING_MODEL_NAMES
is prefixed with gpt-4o (e.g. gpt-4o-mini).
Consequently the field is never hidden for reasoning models and UI/UX
divergence sneaks in.

-elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
+elif (
+    field_name == "model_name"
+    and field_value in OPENAI_REASONING_MODEL_NAMES
+    and self.provider == "OpenAI"
+):

Mirror the same condition in the “show again” branch.
Without this fix users can enter a system message that the backend model then
ignores or errors on.


1340-1360: system_message visibility not reset when provider changes

If a user selects a reasoning model (field hidden) and then switches the
provider to Anthropic/Google, system_message remains hidden forever because
the provider branch never re-enables it.

if field_name == "provider":
     ...
     build_config["api_key"]["display_name"] = "Anthropic API Key"
+    # restore fields that might have been hidden by a previous selection
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True

Do the same for the Google branch. This guarantees a predictable UI across
provider switches.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (2)

1528-1545: system_message visibility toggle lacks provider guard

When field_name == "model_name" and the model does not start with "o1", the code unconditionally sets system_message.show = True, even for Anthropic/Google providers where the field might already have a different visibility state.
Safer logic:

-        elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
+        elif (
+            field_name == "model_name"
+            and self.provider == "OpenAI"
+            and not field_value.startswith("o1")
+            and "system_message" in build_config
+        ):

Prevents accidental UI flicker for non-OpenAI providers.


1495-1511: Enforce float temperature for ChatOpenAI

ChatOpenAI requires a float between 0 and 2; passing None will raise a ValidationError. For reasoning models—where you want deterministic output—set temperature to 0 instead of None:

  • File: src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json
    Location: build_model, inside the if model_name in OPENAI_REASONING_MODEL_NAMES block
-            if model_name in OPENAI_REASONING_MODEL_NAMES:
-                # reasoning models do not support temperature (yet)
-                temperature = None
+            if model_name in OPENAI_REASONING_MODEL_NAMES:
+                # reasoning models do not support temperature; use deterministic output
+                temperature = 0

This change prevents a runtime ValidationError from ChatOpenAI.

src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (1)

1638-1679: Guard against temperature=None incompatibility in ChatOpenAI call

Some LangChain providers (incl. ChatOpenAI) still validate temperature as float | int and raise on None.
To stay API-safe for older library patch levels, pass the argument only when a numeric value is required:

-            if model_name in OPENAI_REASONING_MODEL_NAMES:
-                # reasoning models do not support temperature (yet)
-                temperature = None
-
-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            if model_name in OPENAI_REASONING_MODEL_NAMES:
+                # reasoning models do not support temperature (yet)
+                return ChatOpenAI(
+                    model_name=model_name,
+                    streaming=stream,
+                    openai_api_key=self.api_key,
+                )
+            return ChatOpenAI(
+                model_name=model_name,
+                temperature=temperature,
+                streaming=stream,
+                openai_api_key=self.api_key,
+            )

This avoids a potential TypeError on older LangChain versions and keeps the public signature intact.

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (2)

1171-1175: Static “langflow” User-Agent may reduce crawl success

The default header has been changed from the dynamic get_settings_service().settings.user_agent value to the hard-coded string "langflow".
Many sites block or throttle requests with non-standard or generic user-agents, so hard-coding this could noticeably lower the success rate of URLComponent.

-            value=[{"key": "User-Agent", "value": get_settings_service().settings.user_agent}],
+            value=[{"key": "User-Agent", "value": get_settings_service().settings.user_agent}],

If you really need a fixed UA, at least preserve the old behaviour as a fallback when the user leaves the field blank.


1409-1450: Passing temperature=None to ChatOpenAI is risky

ChatOpenAI expects temperature to be a float (0-2). Passing None relies on internal default handling that may change and could raise TypeError in future SDK versions.

-                # reasoning models do not support temperature (yet)
-                temperature = None
+                # Reasoning models currently ignore temperature – pick safe default
+                temperature = 0.0

Alternatively, omit the parameter entirely when it is None.

♻️ Duplicate comments (7)
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (1)

2856-2915: Same issues as above – see previous comments.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)

1819-1881: Same issue as above — duplicate copy of LanguageModelComponent

The second embedded LanguageModelComponent repeats the exact code and therefore inherits the UI-visibility bug for system_message and the temperature slider. Apply the same fixes to keep both nodes in sync.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (2)

1601-1630: Duplicate of the first issue for the second LanguageModelComponent block –
please apply the same prefix-check fix here.


1896-1925: Duplicate of the first two issues for the third LanguageModelComponent block –
ensure both fixes propagate.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (2)

1791-1807: Duplicate of the issues raised for lines 1495-1550 (temperature handling & provider guard).


2086-2102: Duplicate of the issues raised for lines 1495-1550 (temperature handling & provider guard).

src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (1)

1928-1969: Same temperature=None concern applies here – see previous comment.

🧹 Nitpick comments (15)
src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1)

92-95: Consider simplifying the JSON formatting for better consistency.

The current formatting mixes spaces and tabs which may render inconsistently across different terminals. Consider using only spaces or only tabs for indentation.

-    const unixFormattedPayload = JSON.stringify(processedPayload, null, 2)
-      .split("\n")
-      .map((line, index) => (index === 0 ? line : "         " + line))
-      .join("\n\t\t");
+    const unixFormattedPayload = JSON.stringify(processedPayload, null, 2)
+      .split("\n")
+      .map((line, index) => (index === 0 ? line : "          " + line))
+      .join("\n          ");
src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)

1082-1082: Deduplicate chat & reasoning model lists

OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES risks duplicate entries if a model appears in both constants.
A tiny one-liner avoids duplicates while preserving order:

- options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,
+ options=list(dict.fromkeys(OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES)),
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (2)

1095-1120: Hide the temperature slider when a reasoning model is chosen.

update_build_config correctly toggles the system_message field for o1* models, but the temperature input remains visible even though it will be ignored (and may mislead users). Extend the conditional block:

-        elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
+        elif field_name == "model_name" and field_value in OPENAI_REASONING_MODEL_NAMES:
             # Hide system_message for o1 models - currently unsupported
             if "system_message" in build_config:
                 build_config["system_message"]["show"] = False
+            if "temperature" in build_config:
+                build_config["temperature"]["show"] = False
-        elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
+        elif field_name == "model_name" and field_value not in OPENAI_REASONING_MODEL_NAMES:
             build_config["system_message"]["show"] = True
+            if "temperature" in build_config:
+                build_config["temperature"]["show"] = True

Keeping the UI in sync with backend capabilities avoids confusion.


1121-1135: Synchronise API-key placeholders / default values with the selected provider.

update_build_config alters display_name but leaves the default value set to "OPENAI_API_KEY". Users switching to Anthropic or Google will still see the OpenAI placeholder, which often causes 401 errors at runtime.

-                build_config["api_key"]["display_name"] = "Anthropic API Key"
+                build_config["api_key"]["display_name"] = "Anthropic API Key"
+                build_config["api_key"]["value"] = "ANTHROPIC_API_KEY"
...
-                build_config["api_key"]["display_name"] = "Google API Key"
+                build_config["api_key"]["display_name"] = "Google API Key"
+                build_config["api_key"]["value"] = "GOOGLE_API_KEY"

A matching placeholder greatly improves onboarding and reduces mis-configuration.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)

1325-1345: Minor: inputs default still shows “OpenAI API Key” for non-OpenAI providers

update_build_config updates the field’s display_name when the provider changes, but the initial static definition hard-codes “OpenAI API Key”. Consider moving the generic label (e.g. “Provider API Key”) to the class-level constant to avoid a brief mismatch in the UI before the first provider change event fires.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (1)

2591-2605: Minor: build-time option list creation looks expensive on every import

options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES is evaluated at import time for every node instantiation. Cache once:

-        DropdownInput(
-            name="model_name",
-            display_name="Model Name",
-            options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,
+        _OPENAI_MODEL_OPTIONS = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES
+        DropdownInput(
+            name="model_name",
+            display_name="Model Name",
+            options=_OPENAI_MODEL_OPTIONS,

A micro-optimisation but keeps the template string shorter.

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)

1835-1854: Guard against temperature=None being forwarded to the provider

ChatOpenAI accepts a float for temperature; some provider adapters treat None as an invalid value and will raise a validation error.
Rather than always passing the key, build the kwargs dynamically so that the field is omitted when you intentionally disable temperature for reasoning models.

-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            kwargs = {
+                "model_name": model_name,
+                "streaming": stream,
+                "openai_api_key": self.api_key,
+            }
+            if temperature is not None:
+                kwargs["temperature"] = temperature
+            return ChatOpenAI(**kwargs)

This keeps the constructor call future-proof and avoids surprising runtime errors if the upstream library tightens its validation.


1860-1879: Reduce duplication in update_build_config with a provider-map

The three if/elif blocks that mutate model_name.options/value and the api_key.display_name differ only in the constant lists and label. Consider collapsing them into a single lookup table to make future provider additions one-liner changes:

PROVIDER_CONFIG = {
    "OpenAI": (
        OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,
        OPENAI_CHAT_MODEL_NAMES[0],
        "OpenAI API Key",
    ),
    "Anthropic": (ANTHROPIC_MODELS, ANTHROPIC_MODELS[0], "Anthropic API Key"),
    "Google": (GOOGLE_GENERATIVE_AI_MODELS, GOOGLE_GENERATIVE_AI_MODELS[0], "Google API Key"),
}

if field_name == "provider" and field_value in PROVIDER_CONFIG:
    opts, default, key_label = PROVIDER_CONFIG[field_value]
    build_config["model_name"]["options"] = opts
    build_config["model_name"]["value"] = default
    build_config["api_key"]["display_name"] = key_label

This trims ~20 lines of repetitive code and makes the intent clearer.

src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2)

2091-2113: Avoid sending an explicit temperature=None to ChatOpenAI

langchain-openai currently types temperature: float | None, but in practice None is passed straight through to the OpenAI HTTP API, which rejects it for reasoning models (o1*).
Skip the kwarg when it is not applicable:

-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            temp_kw = {} if temperature is None else {"temperature": temperature}
+            return ChatOpenAI(
+                model_name=model_name,
+                streaming=stream,
+                openai_api_key=self.api_key,
+                **temp_kw,
+            )

This keeps the signature clean and prevents avoidable 400-errors from the OpenAI endpoint.


2361-2380: Duplicate component code block – consider DRYing via a shared module

The second LanguageModelComponent embeds an identical 140-line definition that already exists in the earlier node. Duplicating sizeable code strings inside starter-project JSONs increases bundle size and raises the maintenance burden (future fixes must be applied twice).

If both nodes need the same class unchanged, import it from a single custom component module and reference that instead of inlining two copies.

src/backend/base/langflow/components/models/language_model.py (1)

138-143: Consider making the reasoning model detection more robust.

The hardcoded "o1" prefix check works for current OpenAI reasoning models but may be brittle if OpenAI changes their naming convention. Consider using the OPENAI_REASONING_MODEL_NAMES list for more robust detection.

-        elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
+        elif field_name == "model_name" and field_value in OPENAI_REASONING_MODEL_NAMES and self.provider == "OpenAI":
-        elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
+        elif field_name == "model_name" and field_value not in OPENAI_REASONING_MODEL_NAMES and "system_message" in build_config:
src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

2323-2486: Exact duplicate of the component code – favour DRY

The second LanguageModelComponent block (lines 2323-2486) is an identical copy of the first one. Keeping two embedded copies inside the same starter-project JSON:

  • inflates bundle size,
  • complicates future fixes (must patch in two places),
  • risks the copies diverging.

Store the component once and reference it twice (e.g. by node id) or factor the code into an importable module and import it via code field.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)

995-995: Hide/disable temperature slider for reasoning models

o1/reasoning models silently override the UI-selected temperature with None.
Better UX: toggle the Temperature field’s show flag to False when a reasoning model is chosen (similar to how system_message is hidden) so users don’t think the knob still applies.

Implementation fits naturally inside the existing update_build_config branch that checks model_name.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)

1495-1550: Identical 300-line component code duplicated three times

The full LanguageModelComponent source is embedded verbatim in three separate nodes. That bloats the starter-project JSON by ~900 lines, complicates future maintenance, and risks silent drift between copies.

Consider:

  1. Storing the component once (e.g., langflow/custom_components/language_model_component.py).
  2. Referencing it in nodes via the module_name metadata instead of embedding raw code.

Keeps starter projects small and avoids triple edits next time you tweak the component.

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)

1519-1525: system_message flagged as advanced in template but not in code

The backend inputs definition sets advanced=False, yet the rendered template still shows "advanced": true.
This mismatch causes inconsistent UI behaviour across starter projects. Align the flags so both frontend and backend agree.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7aeb687 and 9ad92ea.

⛔ Files ignored due to path filters (2)
  • src/frontend/package-lock.json is excluded by !**/package-lock.json
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (59)
  • .github/workflows/release.yml (2 hunks)
  • pyproject.toml (3 hunks)
  • src/backend/base/langflow/base/agents/crewai/crew.py (6 hunks)
  • src/backend/base/langflow/base/agents/crewai/tasks.py (1 hunks)
  • src/backend/base/langflow/base/data/docling_utils.py (1 hunks)
  • src/backend/base/langflow/base/models/model.py (2 hunks)
  • src/backend/base/langflow/base/models/openai_constants.py (4 hunks)
  • src/backend/base/langflow/components/crewai/crewai.py (2 hunks)
  • src/backend/base/langflow/components/crewai/hierarchical_crew.py (2 hunks)
  • src/backend/base/langflow/components/crewai/hierarchical_task.py (1 hunks)
  • src/backend/base/langflow/components/crewai/sequential_crew.py (2 hunks)
  • src/backend/base/langflow/components/crewai/sequential_task.py (2 hunks)
  • src/backend/base/langflow/components/crewai/sequential_task_agent.py (2 hunks)
  • src/backend/base/langflow/components/data/url.py (3 hunks)
  • src/backend/base/langflow/components/docling/export_docling_document.py (3 hunks)
  • src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py (2 hunks)
  • src/backend/base/langflow/components/models/language_model.py (6 hunks)
  • src/backend/base/langflow/components/openai/openai_chat_model.py (5 hunks)
  • src/backend/base/langflow/custom/custom_component/component.py (1 hunks)
  • src/backend/base/langflow/initial_setup/load.py (0 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Simple Agent.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
  • src/backend/base/langflow/services/auth/utils.py (2 hunks)
  • src/backend/base/langflow/utils/async_helpers.py (1 hunks)
  • src/backend/base/langflow/utils/constants.py (2 hunks)
  • src/backend/base/langflow/utils/util.py (1 hunks)
  • src/backend/base/pyproject.toml (1 hunks)
  • src/backend/tests/unit/api/v1/test_starter_projects.py (1 hunks)
  • src/backend/tests/unit/components/agents/test_agent_component.py (2 hunks)
  • src/backend/tests/unit/components/models/test_language_model_component.py (2 hunks)
  • src/backend/tests/unit/custom/custom_component/test_component.py (2 hunks)
  • src/backend/tests/unit/test_async_helpers.py (1 hunks)
  • src/frontend/package.json (1 hunks)
  • src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (3 hunks)
  • src/frontend/src/modals/apiModal/index.tsx (1 hunks)
  • src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1 hunks)
  • src/frontend/src/style/applies.css (1 hunks)
  • src/frontend/tests/core/features/tweaksTest.spec.ts (2 hunks)
  • src/frontend/tests/extended/features/curlApiGeneration.spec.ts (1 hunks)
  • src/frontend/tests/extended/features/pythonApiGeneration.spec.ts (1 hunks)
  • src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • src/backend/base/langflow/initial_setup/load.py
🧰 Additional context used
📓 Path-based instructions (14)
`src/frontend/{package*.json,tsconfig.json,tailwind.config.*,vite.config.*}`: Fr...

src/frontend/{package*.json,tsconfig.json,tailwind.config.*,vite.config.*}: Frontend configuration files such as 'package.json', 'tsconfig.json', 'tailwind.config.*', and 'vite.config.*' must be present and properly maintained in 'src/frontend/'.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/frontend_development.mdc)

List of files the instruction was applied to:

  • src/frontend/package.json
`src/frontend/**/*.{ts,tsx,js,jsx,css,scss}`: Use Tailwind CSS for styling all frontend components.

src/frontend/**/*.{ts,tsx,js,jsx,css,scss}: Use Tailwind CSS for styling all frontend components.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/frontend_development.mdc)

List of files the instruction was applied to:

  • src/frontend/src/style/applies.css
  • src/frontend/src/modals/apiModal/index.tsx
  • src/frontend/tests/core/features/tweaksTest.spec.ts
  • src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
  • src/frontend/tests/extended/features/curlApiGeneration.spec.ts
  • src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
  • src/frontend/src/modals/apiModal/utils/get-curl-code.tsx
  • src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx
`src/backend/base/langflow/components/**/*.py`: Add new backend components to th...

src/backend/base/langflow/components/**/*.py: Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Implement async component methods using async def and await for asynchronous operations
Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately

📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

List of files the instruction was applied to:

  • src/backend/base/langflow/components/crewai/hierarchical_task.py
  • src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py
  • src/backend/base/langflow/components/data/url.py
  • src/backend/base/langflow/components/crewai/crewai.py
  • src/backend/base/langflow/components/crewai/sequential_task.py
  • src/backend/base/langflow/components/crewai/sequential_crew.py
  • src/backend/base/langflow/components/crewai/sequential_task_agent.py
  • src/backend/base/langflow/components/crewai/hierarchical_crew.py
  • src/backend/base/langflow/components/docling/export_docling_document.py
  • src/backend/base/langflow/components/openai/openai_chat_model.py
  • src/backend/base/langflow/components/models/language_model.py
`src/backend/**/*.py`: Run make format_backend to format Python code early and often Run make lint to check for linting issues in backend Python code

src/backend/**/*.py: Run make format_backend to format Python code early and often
Run make lint to check for linting issues in backend Python code

📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

List of files the instruction was applied to:

  • src/backend/base/langflow/components/crewai/hierarchical_task.py
  • src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py
  • src/backend/tests/unit/api/v1/test_starter_projects.py
  • src/backend/base/langflow/base/data/docling_utils.py
  • src/backend/base/langflow/components/data/url.py
  • src/backend/base/langflow/base/models/model.py
  • src/backend/base/langflow/base/agents/crewai/tasks.py
  • src/backend/base/langflow/utils/util.py
  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/base/langflow/components/crewai/crewai.py
  • src/backend/base/langflow/utils/async_helpers.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/base/langflow/base/models/openai_constants.py
  • src/backend/base/langflow/components/crewai/sequential_task.py
  • src/backend/base/langflow/components/crewai/sequential_crew.py
  • src/backend/base/langflow/components/crewai/sequential_task_agent.py
  • src/backend/base/langflow/utils/constants.py
  • src/backend/base/langflow/components/crewai/hierarchical_crew.py
  • src/backend/tests/unit/test_async_helpers.py
  • src/backend/base/langflow/components/docling/export_docling_document.py
  • src/backend/base/langflow/services/auth/utils.py
  • src/backend/base/langflow/custom/custom_component/component.py
  • src/backend/tests/unit/custom/custom_component/test_component.py
  • src/backend/base/langflow/components/openai/openai_chat_model.py
  • src/backend/base/langflow/components/models/language_model.py
  • src/backend/base/langflow/base/agents/crewai/crew.py
`src/backend/**/components/**/*.py`: In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).

src/backend/**/components/**/*.py: In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

📄 Source: CodeRabbit Inference Engine (.cursor/rules/icons.mdc)

List of files the instruction was applied to:

  • src/backend/base/langflow/components/crewai/hierarchical_task.py
  • src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py
  • src/backend/base/langflow/components/data/url.py
  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/base/langflow/components/crewai/crewai.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/base/langflow/components/crewai/sequential_task.py
  • src/backend/base/langflow/components/crewai/sequential_crew.py
  • src/backend/base/langflow/components/crewai/sequential_task_agent.py
  • src/backend/base/langflow/components/crewai/hierarchical_crew.py
  • src/backend/base/langflow/components/docling/export_docling_document.py
  • src/backend/base/langflow/components/openai/openai_chat_model.py
  • src/backend/base/langflow/components/models/language_model.py
`src/frontend/**/*.{ts,tsx}`: Use React 18 with TypeScript for all UI components and frontend logic.

src/frontend/**/*.{ts,tsx}: Use React 18 with TypeScript for all UI components and frontend logic.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/frontend_development.mdc)

List of files the instruction was applied to:

  • src/frontend/src/modals/apiModal/index.tsx
  • src/frontend/tests/core/features/tweaksTest.spec.ts
  • src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
  • src/frontend/tests/extended/features/curlApiGeneration.spec.ts
  • src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
  • src/frontend/src/modals/apiModal/utils/get-curl-code.tsx
  • src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx
`src/backend/tests/unit/**/*.py`: Use in-memory SQLite for database tests Test c...

src/backend/tests/unit/**/*.py: Use in-memory SQLite for database tests
Test component integration within flows using create_flow, build_flow, and get_build_events utilities
Use pytest.mark.api_key_required and pytest.mark.no_blockbuster for tests involving external APIs

📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

List of files the instruction was applied to:

  • src/backend/tests/unit/api/v1/test_starter_projects.py
  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/tests/unit/test_async_helpers.py
  • src/backend/tests/unit/custom/custom_component/test_component.py
`src/backend/tests/**/*.py`: Unit tests for backend code should be located in 's...

src/backend/tests/**/*.py: Unit tests for backend code should be located in 'src/backend/tests/' and organized by component subdirectory for component tests.
Test files should use the same filename as the component with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests, as defined in 'src/backend/tests/conftest.py'.
Skip client creation in tests by marking them with '@pytest.mark.noclient' when the 'client' fixture is not needed.
Inherit from the appropriate ComponentTestBase class ('ComponentTestBase', 'ComponentTestBaseWithClient', or 'ComponentTestBaseWithoutClient') and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping' when adding a new component test.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)

List of files the instruction was applied to:

  • src/backend/tests/unit/api/v1/test_starter_projects.py
  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/tests/unit/test_async_helpers.py
  • src/backend/tests/unit/custom/custom_component/test_component.py
`{src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/...

{src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py}: Each test should have a clear docstring explaining its purpose.
Complex test setups should be commented, and mock usage should be documented within the test code.
Expected behaviors should be explicitly stated in test docstrings or comments.
Create comprehensive unit tests for all new components.
Test both sync and async code paths in components.
Mock external dependencies appropriately in tests.
Test error handling and edge cases in components.
Validate input/output behavior in tests.
Test component initialization and configuration.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)

List of files the instruction was applied to:

  • src/backend/tests/unit/api/v1/test_starter_projects.py
  • src/frontend/tests/core/features/tweaksTest.spec.ts
  • src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
  • src/frontend/tests/extended/features/curlApiGeneration.spec.ts
  • src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/tests/unit/test_async_helpers.py
  • src/backend/tests/unit/custom/custom_component/test_component.py
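
A small illustration of the documentation rules above, assuming a hypothetical optional-dependency check: the docstring states the expected behavior and the mock is commented inline.

```python
from unittest.mock import patch

import pytest


def test_docling_import_fails_cleanly():
    """Importing docling should raise ImportError when the package is absent.

    Expected behavior: callers get a plain ImportError at import time rather
    than a crash deeper in the stack.
    """
    # Mock: registering None in sys.modules makes `import docling` raise
    # ImportError, simulating a missing optional dependency.
    with patch.dict("sys.modules", {"docling": None}):
        with pytest.raises(ImportError):
            import docling  # noqa: F401
```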

{src/backend/tests/**/*.py,tests/**/*.py}: Use '@pytest.mark.asyncio' for async test functions.
Test queue operations in async tests using 'asyncio.Queue' and non-blocking put/get methods.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests.
Be aware of ContextVar propagation in async tests and test both direct event loop execution and 'asyncio.to_thread' scenarios.
Each test should ensure proper resource cleanup, especially in async fixtures using 'try/finally' and cleanup methods.
Test that operations respect timeout constraints and assert elapsed time is within tolerance.
Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Use predefined JSON flows and utility functions for flow testing (e.g., 'create_flow', 'build_flow', 'get_build_events', 'consume_and_assert_stream').
Test components that need external APIs with proper pytest markers such as '@pytest.mark.api_key_required' and '@pytest.mark.no_blockbuster'.
Use 'MockLanguageModel' for testing language model components without external API calls.
Use 'anyio' and 'aiofiles' for async file operations in tests.
Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Test component configuration updates by asserting changes in build config dictionaries.
Test real-time event streaming endpoints by consuming NDJSON event streams and validating event structure.
Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Test webhook endpoints by posting payloads and asserting correct processing and status codes.
Test error handling by monkeypatching internal functions to raise exceptions and asserting correct error responses.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)

List of files the instruction was applied to:

  • src/backend/tests/unit/api/v1/test_starter_projects.py
  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/tests/unit/test_async_helpers.py
  • src/backend/tests/unit/custom/custom_component/test_component.py
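
A self-contained sketch combining three of the async rules above: the @pytest.mark.asyncio marker, non-blocking asyncio.Queue operations, and a timeout assertion with a loose tolerance (requires pytest-asyncio).

```python
import asyncio
import time

import pytest


@pytest.mark.asyncio
async def test_queue_get_respects_timeout():
    """A get() on an empty queue should time out in roughly the requested time."""
    queue: asyncio.Queue[int] = asyncio.Queue()
    start = time.monotonic()
    with pytest.raises(asyncio.TimeoutError):
        await asyncio.wait_for(queue.get(), timeout=0.1)
    elapsed = time.monotonic() - start
    # Assert the elapsed time is within a loose tolerance of the timeout.
    assert 0.09 <= elapsed < 0.5
```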

src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx}: Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
Frontend tests should cover both sync and async code paths, including error handling and edge cases.
Frontend tests should mock external dependencies and APIs appropriately.
Frontend tests should validate input/output behavior and component state changes.
Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.

📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)

List of files the instruction was applied to:

  • src/frontend/tests/core/features/tweaksTest.spec.ts
  • src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
  • src/frontend/tests/extended/features/curlApiGeneration.spec.ts
  • src/frontend/tests/extended/features/pythonApiGeneration.spec.ts

{uv.lock,pyproject.toml}: Use uv (>=0.4) as the Python package manager for dependency management

📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

List of files the instruction was applied to:

  • pyproject.toml

src/backend/tests/unit/components/**/*.py: Mirror the component directory structure in unit tests under src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping in tests for backward compatibility version testing
Create comprehensive unit tests for all new components
Use the client fixture from conftest.py for FastAPI API endpoint tests
Test authenticated FastAPI endpoints using logged_in_headers in tests

📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)

List of files the instruction was applied to:

  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
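
A brief sketch of an authenticated endpoint test using the client and logged_in_headers fixtures named above; the route is illustrative, not a documented contract.

```python
import pytest


@pytest.mark.asyncio
async def test_list_flows_requires_auth(client, logged_in_headers):
    """Listing flows should succeed for an authenticated user."""
    response = await client.get("/api/v1/flows/", headers=logged_in_headers)
    assert response.status_code == 200
```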

src/backend/**/*component*.py: In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).

📄 Source: CodeRabbit Inference Engine (.cursor/rules/icons.mdc)

List of files the instruction was applied to:

  • src/backend/tests/unit/components/models/test_language_model_component.py
  • src/backend/tests/unit/components/agents/test_agent_component.py
  • src/backend/base/langflow/custom/custom_component/component.py
  • src/backend/tests/unit/custom/custom_component/test_component.py
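
A minimal sketch of the icon rule, assuming the usual langflow.custom.Component base class; the component itself is illustrative.

```python
from langflow.custom import Component


class OpenAIEchoComponent(Component):
    display_name = "OpenAI Echo"
    description = "Illustrative component demonstrating the icon attribute."
    icon = "OpenAI"  # must match the frontend icon mapping exactly (case-sensitive)
```
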
🧠 Learnings (50)
src/frontend/package.json (5)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/{package*.json,tsconfig.json,tailwind.config.*,vite.config.*} : Frontend configuration files such as 'package.json', 'tsconfig.json', 'tailwind.config.*', and 'vite.config.*' must be present and properly maintained in 'src/frontend/'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*FlowGraph.tsx : Use React Flow for flow graph visualization components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/frontend/src/style/applies.css (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/**/*.{ts,tsx,js,jsx,css,scss} : Use Tailwind CSS for styling all frontend components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*.{ts,tsx} : All components must be styled using Tailwind CSS utility classes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: All UI components must be styled using Tailwind CSS utility classes, with support for different variants and sizes implemented via conditional className logic.
src/backend/base/langflow/components/crewai/hierarchical_task.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/frontend/src/modals/apiModal/index.tsx (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/{components,hooks}/**/*.{ts,tsx} : Implement dark mode support in components and hooks where needed.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*.{ts,tsx} : All components must be styled using Tailwind CSS utility classes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/**/*.{ts,tsx,js,jsx,css,scss} : Use Tailwind CSS for styling all frontend components.
src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/tests/unit/api/v1/test_starter_projects.py (11)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test webhook endpoints by posting payloads and asserting correct processing and status codes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Use the 'client' fixture (an async httpx.AsyncClient) for API tests, as defined in 'src/backend/tests/conftest.py'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test error handling by monkeypatching internal functions to raise exceptions and asserting correct error responses.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Expected behaviors should be explicitly stated in test docstrings or comments.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use the client fixture from conftest.py for FastAPI API endpoint tests
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Use pytest.mark.api_key_required and pytest.mark.no_blockbuster for tests involving external APIs
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Test authenticated FastAPI endpoints using logged_in_headers in tests
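
A hedged sketch of the error-handling learning above: monkeypatch an internal function to raise and assert the endpoint maps it to an error response. The patch target and route are assumptions, not the repo's actual symbols.

```python
import pytest


@pytest.mark.asyncio
async def test_internal_error_returns_500(client, logged_in_headers, monkeypatch):
    """If an internal helper raises, the endpoint should surface a 500."""
    def boom(*args, **kwargs):
        raise RuntimeError("simulated failure")

    # Hypothetical target: patch whatever internal function the route calls.
    monkeypatch.setattr("langflow.api.v1.flows.read_flows", boom, raising=False)

    response = await client.get("/api/v1/flows/", headers=logged_in_headers)
    assert response.status_code == 500
```
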
src/frontend/tests/core/features/tweaksTest.spec.ts (10)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/**/__tests__/**/*.{test,spec}.{ts,tsx} : Integration tests must be written for page-level components and flows.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should cover both sync and async code paths, including error handling and edge cases.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Validate input/output behavior in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Mock external dependencies appropriately in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/**/__tests__/**/*.test.{ts,tsx} : All frontend components must have associated tests using React Testing Library.
src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (10)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should cover both sync and async code paths, including error handling and edge cases.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Validate input/output behavior in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/**/__tests__/**/*.{test,spec}.{ts,tsx} : Integration tests must be written for page-level components and flows.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test both sync and async code paths in components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Expected behaviors should be explicitly stated in test docstrings or comments.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
src/backend/base/langflow/base/models/model.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
pyproject.toml (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
src/frontend/tests/extended/features/curlApiGeneration.spec.ts (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
src/backend/base/langflow/base/agents/crewai/tasks.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/base/pyproject.toml (4)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
src/frontend/tests/extended/features/pythonApiGeneration.spec.ts (10)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Validate input/output behavior in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Expected behaviors should be explicitly stated in test docstrings or comments.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Mock external dependencies appropriately in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test both sync and async code paths in components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Create comprehensive unit tests for all new components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
src/backend/tests/unit/components/models/test_language_model_component.py (8)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Provide file_names_mapping in tests for backward compatibility version testing
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
src/backend/base/langflow/components/crewai/crewai.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/backend/base/langflow/utils/async_helpers.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Be aware of ContextVar propagation in async tests and test both direct event loop execution and 'asyncio.to_thread' scenarios.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'anyio' and 'aiofiles' for async file operations in tests.
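
A self-contained sketch of the background-task learnings above: asyncio.create_task for the worker, asyncio.Queue for handoff, and guaranteed cleanup on cancellation in a finally block.

```python
import asyncio
import contextlib


async def run_with_background_worker() -> None:
    queue: asyncio.Queue[str] = asyncio.Queue()

    async def worker() -> None:
        # Drain the queue forever; cancellation is the expected exit path.
        while True:
            await queue.get()
            queue.task_done()

    task = asyncio.create_task(worker())
    try:
        await queue.put("job")
        await queue.join()  # wait until the worker has processed the item
    finally:
        # Guaranteed cleanup on any exit, including cancellation of this coroutine.
        task.cancel()
        with contextlib.suppress(asyncio.CancelledError):
            await task


if __name__ == "__main__":
    asyncio.run(run_with_background_worker())
```
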
src/backend/tests/unit/components/agents/test_agent_component.py (9)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Create comprehensive unit tests for all new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Provide file_names_mapping in tests for backward compatibility version testing
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Inherit from the appropriate ComponentTestBase class ('ComponentTestBase', 'ComponentTestBaseWithClient', or 'ComponentTestBaseWithoutClient') and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping' when adding a new component test.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Mirror the component directory structure in unit tests under src/backend/tests/unit/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Test files should use the same filename as the component with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
src/backend/base/langflow/base/models/openai_constants.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Use clear, recognizable, and consistent icon names for both backend and frontend (e.g., 'AstraDB', 'Postgres', 'OpenAI').
src/backend/base/langflow/components/crewai/sequential_task.py (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
src/backend/base/langflow/components/crewai/sequential_crew.py (6)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
src/backend/base/langflow/components/crewai/sequential_task_agent.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
src/backend/base/langflow/components/crewai/hierarchical_crew.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/backend/**/components/**/*.py : In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/backend/**/*component*.py : In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
src/backend/tests/unit/test_async_helpers.py (14)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Be aware of ContextVar propagation in async tests and test both direct event loop execution and 'asyncio.to_thread' scenarios.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Create comprehensive unit tests for all new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use '@pytest.mark.asyncio' for async test functions.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test both sync and async code paths in components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'anyio' and 'aiofiles' for async file operations in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test queue operations in async tests using 'asyncio.Queue' and non-blocking put/get methods.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Test component integration within flows using create_flow, build_flow, and get_build_events utilities
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Create comprehensive unit tests for all new components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test error handling by monkeypatching internal functions to raise exceptions and asserting correct error responses.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Context variables may not propagate correctly in asyncio.to_thread; test both patterns
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test that operations respect timeout constraints and assert elapsed time is within tolerance.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Each test should ensure proper resource cleanup, especially in async fixtures using 'try/finally' and cleanup methods.
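
A sketch of the ContextVar learning above: asyncio.to_thread runs the callable in a copy of the caller's context, so reads inside the thread see the caller's value while writes do not propagate back (requires pytest-asyncio).

```python
import asyncio
import contextvars

import pytest

request_id: contextvars.ContextVar[str] = contextvars.ContextVar("request_id", default="unset")


@pytest.mark.asyncio
async def test_contextvar_across_to_thread():
    """Reads inside to_thread see the caller's value; writes do not leak out."""
    request_id.set("abc-123")

    def read_and_overwrite() -> str:
        seen = request_id.get()
        request_id.set("thread-only")  # set in the copied context only
        return seen

    assert await asyncio.to_thread(read_and_overwrite) == "abc-123"
    # The write happened in a copy of the context, so the caller is unchanged.
    assert request_id.get() == "abc-123"
```
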
src/backend/base/langflow/components/docling/export_docling_document.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/base/langflow/services/auth/utils.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Use pytest.mark.api_key_required and pytest.mark.no_blockbuster for tests involving external APIs
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Test authenticated FastAPI endpoints using logged_in_headers in tests
src/backend/base/langflow/custom/custom_component/component.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/tests/unit/custom/custom_component/test_component.py (11)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Create comprehensive unit tests for all new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Inherit from the appropriate ComponentTestBase class ('ComponentTestBase', 'ComponentTestBaseWithClient', or 'ComponentTestBaseWithoutClient') and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping' when adding a new component test.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Mirror the component directory structure in unit tests under src/backend/tests/unit/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test components that need external APIs with proper pytest markers such as '@pytest.mark.api_key_required' and '@pytest.mark.no_blockbuster'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test component initialization and configuration.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Skip client creation in tests by marking them with '@pytest.mark.noclient' when the 'client' fixture is not needed.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Mock external dependencies appropriately in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Test component integration within flows using create_flow, build_flow, and get_build_events utilities
src/backend/base/langflow/components/openai/openai_chat_model.py (4)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
src/backend/base/langflow/components/models/language_model.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (6)

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (7)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (5)

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (9)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2)

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the module_name parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (7)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (4)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (13)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/**/*.{ts,tsx} : Use React 18 with TypeScript for all UI components and frontend logic.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/{components,hooks}/**/*.{ts,tsx} : Implement dark mode support in components and hooks where needed.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: Export custom icon components in React using React.forwardRef to ensure proper ref forwarding and compatibility with parent components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: Custom React Flow node types should be implemented as memoized components, using Handle components for connection points and supporting optional icons and labels.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*FlowGraph.tsx : Use React Flow for flow graph visualization components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/icons/**/*.{ts,tsx,js,jsx} : Use Lucide React for icons in frontend components.
Learnt from: dolfim-ibm
PR: langflow-ai/langflow#8394
File: src/frontend/src/icons/Docling/index.tsx:4-6
Timestamp: 2025-06-16T11:14:04.200Z
Learning: The Langflow codebase consistently uses `React.PropsWithChildren<{}>` as the prop type for all icon components using forwardRef, rather than `React.SVGProps<SVGSVGElement>`. This is an established pattern across hundreds of icon files in src/frontend/src/icons/.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: Custom SVG icon components in React should always support both light and dark mode by accepting an 'isdark' prop and adjusting colors accordingly.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/frontend/src/icons/*/index.tsx : Create an `index.tsx` in your icon directory that exports your icon using `forwardRef` and passes the `isdark` prop.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/frontend/src/icons/*/*.jsx : Always support both light and dark mode for custom icons by using the `isdark` prop in your SVG component.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/frontend/src/icons/*/* : Create a new directory for your icon in `src/frontend/src/icons/YourIconName/` and add your SVG as a React component (e.g., `YourIconName.jsx`) that uses the `isdark` prop to support both light and dark mode.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: Error handling for API calls in React should be abstracted into custom hooks (e.g., useApi), which manage loading and error state and expose an execute function for invoking the API.
src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (5)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (5)

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the module_name parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (7)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (5)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (7)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (4)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (6)

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the module_name parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (7)

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the module_name parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (4)

<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the module_name parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>

<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (8)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
src/backend/base/langflow/base/agents/crewai/crew.py (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (3)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
🧬 Code Graph Analysis (7)
src/backend/base/langflow/base/models/model.py (1)
src/backend/base/langflow/graph/vertex/vertex_types.py (1)
  • stream (372-454)
src/backend/base/langflow/components/crewai/crewai.py (1)
src/backend/base/langflow/base/agents/crewai/crew.py (1)
  • build_output (221-231)
src/backend/base/langflow/components/crewai/sequential_task.py (1)
src/backend/base/langflow/base/agents/crewai/tasks.py (1)
  • SequentialTask (7-8)
src/backend/base/langflow/services/auth/utils.py (2)
src/backend/base/langflow/services/database/models/user/model.py (1)
  • UserRead (62-72)
src/backend/base/langflow/services/database/models/api_key/crud.py (1)
  • check_key (52-61)
src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1)
src/frontend/src/customization/utils/custom-get-host-protocol.ts (1)
  • customGetHostProtocol (1-6)
src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (6)
src/frontend/src/utils/utils.ts (1)
  • getOS (888-904)
src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx (1)
  • getNewPythonApiCode (3-57)
src/frontend/src/modals/apiModal/utils/get-js-api-code.tsx (1)
  • getNewJsApiCode (13-58)
src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1)
  • getNewCurlCode (41-117)
src/frontend/src/components/ui/button.tsx (1)
  • Button (133-133)
src/frontend/src/components/ui/tabs-button.tsx (3)
  • Tabs (54-54)
  • TabsList (54-54)
  • TabsTrigger (54-54)
src/backend/base/langflow/base/agents/crewai/crew.py (3)
src/backend/base/langflow/schema/data.py (1)
  • Data (23-277)
src/backend/base/langflow/components/crewai/sequential_crew.py (3)
  • get_tasks_and_agents (23-29)
  • build_crew (31-52)
  • agents (19-21)
src/backend/base/langflow/components/crewai/hierarchical_crew.py (1)
  • build_crew (22-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (60: six unique checks, each reported ten times)
  • GitHub Check: Optimize new Python code in this PR
  • GitHub Check: autofix
  • GitHub Check: Ruff Style Check (3.13)
  • GitHub Check: Update Starter Projects
  • GitHub Check: Run Ruff Check and Format
  • GitHub Check: Call Docker Build Workflow for Langflow Base / build
🔇 Additional comments (82)
src/frontend/src/modals/apiModal/index.tsx (1)

138-138: LGTM! Good UX improvement.

The addition of select-none prevents unwanted text selection when users interact with the tweaks button, improving the overall user experience.

src/frontend/src/style/applies.css (1)

908-908: LGTM! CSS simplification aligns with tab UI updates.

Removing the negative top margin and bottom padding simplifies the styling while maintaining the essential layout properties. This change appears coordinated with the broader API modal tab refactoring.

src/frontend/tests/extended/features/pythonApiGeneration.spec.ts (1)

14-14: LGTM! Improved test selector reliability.

Switching from role-based to test ID selector makes the test more stable and consistent with the updated UI. Test IDs are less fragile than role-based selectors and improve test maintainability.

src/frontend/tests/extended/features/curlApiGeneration.spec.ts (1)

14-14: LGTM! Consistent test selector improvement.

The change to use test ID selector aligns with the standardization effort across test files and improves test reliability. Good consistency with the Python tab test changes.

src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (1)

102-102: LGTM! Completes consistent test selector standardization.

This change completes the coordinated effort to standardize tab selection across all test files using test IDs instead of role-based selectors, improving overall test suite reliability.

src/frontend/tests/core/features/tweaksTest.spec.ts (1)

15-15: LGTM! Improved test stability with test ID selectors.

The change from role-based selectors to test ID selectors is a good practice that makes tests more resilient to UI changes.

Also applies to: 38-38

src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (1)

1-222: Well-implemented platform-specific cURL generation with improved UI!

The refactoring successfully introduces platform-specific cURL command generation with an intuitive UI. The string-based tab selection and test ID attributes improve maintainability and test stability.

src/backend/base/pyproject.toml (1)

3-3: Version bump looks good.

The version update from 0.4.3 to 0.5.0 follows semantic versioning and aligns with the coordinated release across the project components.

src/frontend/package.json (1)

3-3: Version synchronization looks good.

The version update from 1.4.3 to 1.5.0 properly synchronizes with the backend version updates and follows semantic versioning conventions.

src/backend/tests/unit/api/v1/test_starter_projects.py (1)

9-9: Enhanced test diagnostics - good improvement.

Adding response.text as a failure message provides valuable debugging information when the assertion fails, making it easier to understand what went wrong with the API call.
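
For reference, a minimal sketch of the pattern; the endpoint path and the client fixture are assumptions, not the actual test:

def test_starter_projects_returns_ok(client):  # `client` is an assumed HTTP test fixture
    response = client.get("/api/v1/starter-projects/")
    # Passing response.text as the assertion message surfaces the error body
    # on failure, rather than a bare AssertionError with no context.
    assert response.status_code == 200, response.text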

.github/workflows/release.yml (1)

54-54: Proper secrets inheritance configuration.

Adding secrets: inherit enables the CI job to access repository secrets, which is essential for secure testing and deployment operations. This follows GitHub Actions best practices.

src/backend/base/langflow/base/data/docling_utils.py (1)

29-33: Excellent error message enhancement.

The improved error message provides clear, actionable guidance to users by explaining the likely cause (input not being a DoclingDocument) and suggesting a solution (using the Docling component). This significantly improves the debugging experience.

src/backend/base/langflow/utils/constants.py (2)

12-12: LGTM - Adding gpt-4o-mini to chat models.

The addition of "gpt-4o-mini" to the chat models list is correct and follows the established pattern.


21-30: REASONING_OPENAI_MODELS list validated

All four models—o3, o3-pro, o4-mini, and o4-mini-high—are official OpenAI reasoning models released in 2025 and available via the Chat Completions and Responses APIs. No changes to the constant are required.

src/backend/base/langflow/utils/async_helpers.py (1)

22-42: Well-designed solution for event loop conflicts.

The updated implementation properly handles the case where an event loop is already running by creating a new thread with its own event loop. This prevents the RuntimeError that would occur when calling run_until_complete on an active loop.

The approach is sound, as the sketch after this list illustrates:

  • Proper exception handling for detecting running loops
  • Clean event loop lifecycle management with try/finally
  • Appropriate use of ThreadPoolExecutor for thread management
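
A condensed sketch of this approach, with asyncio.run standing in for the explicit loop creation and try/finally teardown described above:

import asyncio
from concurrent.futures import ThreadPoolExecutor
from typing import Any, Coroutine

def run_until_complete(coro: Coroutine[Any, Any, Any]) -> Any:
    """Run `coro` to completion whether or not a loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running in this thread: asyncio.run manages the lifecycle.
        return asyncio.run(coro)
    # A loop is already running here: give the coroutine a fresh event loop
    # in a worker thread so the active loop is never re-entered.
    with ThreadPoolExecutor(max_workers=1) as executor:
        return executor.submit(asyncio.run, coro).result()
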
src/backend/base/langflow/components/docling/export_docling_document.py (3)

1-1: LGTM - Adding required import for type hints.

The Any import is needed for the type annotation in the update_build_config method.


32-32: Good UI improvement with real-time refresh.

Adding real_time_refresh=True enables immediate UI updates when the export format changes, improving user experience.


72-86: Well-implemented dynamic UI configuration.

The update_build_config method properly implements dynamic UI behavior:

  • Markdown format shows relevant markdown-specific fields
  • HTML format shows only image_mode (no markdown placeholders)
  • Plaintext/DocTags hide all format-specific options

This follows good UI principles by showing only relevant fields based on context; a sketch of the pattern follows.
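
A hypothetical sketch of that behavior; the field names and the standalone signature are inferred from the review, not copied from the component:

from typing import Any

def update_build_config(build_config: dict[str, dict[str, Any]], field_value: Any, field_name: str | None = None) -> dict:
    if field_name == "export_format":
        markdown_fields = ["md_image_placeholder", "md_page_break_placeholder"]
        if field_value == "Markdown":
            visible = [*markdown_fields, "image_mode"]
        elif field_value == "HTML":
            visible = ["image_mode"]  # no Markdown placeholders
        else:  # Plaintext / DocTags expose no format-specific options
            visible = []
        for name in [*markdown_fields, "image_mode"]:
            build_config[name]["show"] = name in visible
    return build_config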

src/backend/tests/unit/test_async_helpers.py (1)

1-196: Excellent comprehensive test suite for async helpers.

This test suite thoroughly validates the updated run_until_complete function with comprehensive coverage:

Strong points:

  • Tests both sync and async execution paths as required by coding guidelines
  • Covers edge cases like thread isolation, concurrent execution, and timeout handling
  • Proper exception propagation testing
  • Performance impact validation
  • Well-structured with clear docstrings explaining test purposes

Test coverage includes:

  • Basic functionality with/without event loops
  • Exception handling across thread boundaries
  • Thread-local data isolation
  • Concurrent execution scenarios
  • Nested async operations
  • Timeout behavior
  • Performance constraints

The tests align perfectly with the coding guidelines for comprehensive component testing and proper async test patterns.

src/backend/base/langflow/components/crewai/hierarchical_task.py (1)

10-10: LGTM - Adding legacy status marker.

The addition of legacy = True appropriately marks this component as legacy, which aligns with the broader CrewAI component refactoring mentioned in the AI summary for safer optional dependency handling.

src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py (1)

4-4: LGTM: Proper model categorization alignment.

The import and usage of OPENAI_CHAT_MODEL_NAMES instead of OPENAI_MODEL_NAMES correctly aligns with the broader OpenAI model categorization changes, ensuring this component specifically uses chat models as intended.

Also applies to: 46-47

src/backend/base/langflow/components/data/url.py (3)

21-23: LGTM: Regex consolidation maintains functionality.

The URL regex pattern consolidation to a single raw string improves readability without altering the validation logic.


245-245: Reduced logging verbosity for URL processing.

Changed from logger.info to logger.debug for URL listing and document count messages, reducing default log verbosity. This is appropriate for operational information that's useful for debugging but not essential for normal operation.

Also applies to: 252-252, 262-262


129-129: No hardcoded User-Agent detected; dynamic header remains

All usages of

get_settings_service().settings.user_agent

are still present in:

  • src/backend/base/langflow/components/data/api_request.py
  • src/backend/base/langflow/components/data/url.py
  • src/backend/base/langflow/components/data/web_search.py

No occurrences of a fixed "langflow" User-Agent were introduced. The original concern can be disregarded.

Likely an incorrect or invalid review comment.

src/backend/base/langflow/utils/util.py (1)

385-385: LGTM: Proper integration of reasoning models.

Adding "ReasoningOpenAI": constants.REASONING_OPENAI_MODELS to the options_map enables UI components to properly populate model options for reasoning models, maintaining consistency with the existing pattern for other OpenAI model categories.

src/backend/base/langflow/base/agents/crewai/tasks.py (1)

1-4: LGTM: Proper optional dependency handling.

The try-except import pattern with fallback to Task = object ensures the module loads successfully even when crewai is not installed, allowing graceful degradation. This aligns with the broader changes to make crewai an optional dependency.
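
A minimal sketch of the pattern: fall back to a harmless base class when the optional dependency is absent, so the module still imports cleanly:

try:
    from crewai import Task
except ImportError:
    Task = object  # subclass definitions still work; real use fails later


class SequentialTask(Task):  # imports whether or not crewai is installed
    ...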

pyproject.toml (2)

3-3: LGTM: Coordinated version bump.

The version updates from 1.4.3 to 1.5.0 for the main package and 0.4.3 to 0.5.0 for langflow-base indicate a coordinated minor release with aligned versioning.

Also applies to: 20-20


107-107: LGTM: Consistent optional dependency approach.

Commenting out the crewai dependency while updating the version requirement to >=0.126.0 aligns with the codebase changes that make crewai an optional dependency through graceful import fallbacks.

src/backend/tests/unit/components/agents/test_agent_component.py (1)

11-11: LGTM: Model constants updated correctly.

The import and test usage have been properly updated to use the new OPENAI_CHAT_MODEL_NAMES constant, maintaining the same test coverage while aligning with the broader refactoring that distinguishes between OpenAI chat and reasoning models.

Also applies to: 145-145

src/backend/base/langflow/components/crewai/sequential_task.py (2)

10-10: LGTM: Legacy marking added consistently.

The legacy = True attribute aligns with the broader pattern of marking CrewAI components as legacy, as described in the AI summary.


69-69: LGTM: Variable shadowing resolved.

Changing the variable name from task to task_item in the list comprehension is a good improvement that avoids shadowing the outer task variable (the SequentialTask instance being created on line 59).

src/backend/base/langflow/base/models/model.py (2)

89-89: LGTM: Code simplification improvement.

Directly passing instance attributes to get_chat_result removes unnecessary intermediate variables and makes the code more concise while maintaining readability.


176-178: LGTM: Valuable documentation added.

The comment about NVIDIA reasoning models using detailed thinking provides important context for understanding the conditional logic that prepends the DETAILED_THINKING_PREFIX when the detailed_thinking attribute is set.

src/backend/tests/unit/components/models/test_language_model_component.py (1)

9-9: LGTM: Test updated for model constant refactoring.

The import and test assertions have been properly updated to use the new separate OPENAI_CHAT_MODEL_NAMES and OPENAI_REASONING_MODEL_NAMES constants. The test now correctly validates that the component handles both chat and reasoning models, and the default value appropriately uses the first chat model.

Also applies to: 69-70

src/backend/base/langflow/custom/custom_component/component.py (1)

903-905: LGTM! Excellent defensive programming improvement.

The change from direct dictionary access to .get("return") prevents KeyError exceptions when methods lack return type annotations. The explicit None check with an empty list fallback is appropriate and aligns with the broader error handling improvements in this PR.
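
A small reconstruction of the idea (the surrounding component logic differs):

def get_return_types(method) -> list:
    annotations = getattr(method, "__annotations__", {})
    return_type = annotations.get("return")  # no KeyError when unannotated
    return [] if return_type is None else [return_type]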

src/backend/tests/unit/custom/custom_component/test_component.py (2)

12-18: Good implementation of optional dependency handling.

The conditional import pattern with a boolean flag is clean and follows best practices for handling optional dependencies in tests.


28-28: Proper test skipping for missing dependencies.

Using pytest.mark.skipif with a clear reason message is the standard approach for handling optional dependencies in tests, following the coding guidelines for external API tests.
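
Together, the two patterns from this file might look like the following sketch (names are illustrative):

import pytest

try:
    import crewai  # noqa: F401
    CREWAI_AVAILABLE = True
except ImportError:
    CREWAI_AVAILABLE = False


@pytest.mark.skipif(not CREWAI_AVAILABLE, reason="crewai is not installed")
def test_crew_component():
    ...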

src/backend/base/langflow/components/crewai/sequential_task_agent.py (2)

11-11: Consistent legacy marking across CrewAI components.

The legacy = True attribute is appropriately added, maintaining consistency with other CrewAI components in this refactoring.


107-111: Excellent deferred import pattern with clear error messaging.

Moving the CrewAI imports inside the method with try-except handling is the right approach for optional dependencies. The error message provides clear installation instructions using the project's preferred package manager.
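
A hedged sketch of the deferred-import pattern; the method body and message wording are approximations of the component code:

def build_agent(self):
    try:
        from crewai import Agent
    except ImportError as e:
        msg = "CrewAI is not installed. Please install it with `uv pip install crewai`."
        raise ImportError(msg) from e
    return Agent(**self.agent_kwargs)  # assumed attribute; construction details omitted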

src/backend/base/langflow/components/crewai/crewai.py (2)

23-23: Consistent legacy marking maintained across CrewAI components.


82-87: Proper implementation of deferred imports for optional dependencies.

The try-except block with clear installation instructions follows the established pattern. Removing the return type annotation is correct since Agent is no longer available at module level.

src/backend/base/langflow/components/crewai/hierarchical_crew.py (2)

12-12: Consistent legacy marking applied across CrewAI components.


22-27: Well-implemented deferred import pattern for multiple CrewAI classes.

The try-except block properly handles the import of both Crew and Process classes, with a clear error message guiding users to install the required dependency. Removing the return type annotation is the correct approach when the type is no longer available at module level.

src/backend/base/langflow/base/models/openai_constants.py (4)

20-26: LGTM: New reasoning models added correctly.

The addition of the new reasoning models (o1-mini, o1-pro, o3-mini, o3, o3-pro, o4-mini, o4-mini-high) follows the established pattern and correctly uses the reasoning=True flag to categorize them appropriately.


37-43: LGTM: Search model addition is well-formatted.

The new gpt-4o-search-preview model entry is correctly configured with the appropriate flags (search=True, preview=True, tool_calling=True).


61-67: Excellent refactoring of filtering logic.

The improved filtering logic correctly excludes not_supported models before filtering out reasoning and search models. This ensures that unsupported models are properly excluded from the chat model list, which is a more robust approach than the previous implementation.
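
An illustrative reconstruction of the ordering; the model entries and flag keys are assumptions, and the real definitions live in openai_constants.py:

OPENAI_MODELS = [
    {"name": "gpt-4o"},
    {"name": "o3", "reasoning": True},
    {"name": "gpt-4o-search-preview", "search": True, "preview": True},
    {"name": "gpt-3.5-turbo-instruct", "not_supported": True},
]

# Exclude unsupported models first, then strip reasoning and search models,
# leaving only plain chat models.
OPENAI_CHAT_MODEL_NAMES = [
    m["name"]
    for m in OPENAI_MODELS
    if not m.get("not_supported") and not (m.get("reasoning") or m.get("search"))
]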


90-90: LGTM: Backward compatibility alias updated correctly.

The MODEL_NAMES alias is correctly updated to point to the new OPENAI_CHAT_MODEL_NAMES constant, maintaining backward compatibility while aligning with the refactoring.

src/backend/base/langflow/components/crewai/sequential_crew.py (3)

11-11: LGTM: Legacy flag addition is appropriate.

The legacy = True attribute is correctly added, indicating this component follows the legacy pattern and aligns with the broader refactoring across CrewAI components.


19-19: LGTM: Return type generalization is necessary.

The return type annotations are appropriately generalized from specific Agent and Task types to generic list and tuple[list, list] types. This change is necessary since the specific CrewAI types are no longer imported at module level.

Also applies to: 23-23


32-36: Excellent improvement in optional dependency handling.

The move of crewai imports inside the method with proper try-except error handling is a great improvement. The error message is clear and actionable, providing the exact command (uv pip install crewai) users need to run to resolve the dependency issue.

src/backend/base/langflow/services/auth/utils.py (4)

62-66: LGTM: Improved control flow for AUTO_LOGIN handling.

The refactored control flow correctly handles the case when AUTO_LOGIN is enabled and no API key is provided. The immediate return for skip_auth_auto_login and the clear error handling for the alternative case improve code readability and maintainability.


67-67: LGTM: Simplified API key validation logic.

The consolidation of API key checking into a single call using query_param or header_param is cleaner and more maintainable than the previous implementation.


76-76: LGTM: Consistent API key checking logic.

The unified approach to checking API keys using the same query_param or header_param pattern as the AUTO_LOGIN flow above creates consistency and reduces code duplication.


83-86: LGTM: Improved flow control and validation.

The positioning of the validation logic and the clean return statement improve the overall flow of the function. The isinstance(result, User) check ensures type safety before conversion to UserRead.
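
Putting the pieces above together, a loose sketch of the flow; the helper names, flags, and signatures follow the review's wording, not the actual module:

from fastapi import HTTPException

async def api_key_security(query_param: str | None, header_param: str | None, settings, session):
    api_key = query_param or header_param  # unified lookup for both sources
    if settings.AUTO_LOGIN and not api_key:
        if settings.skip_auth_auto_login:
            return None  # authentication intentionally skipped
        raise HTTPException(status_code=403, detail="An API key must be provided")
    result = await check_key(session, api_key)  # assumed helper signature
    if isinstance(result, User):
        return UserRead.model_validate(result, from_attributes=True)
    raise HTTPException(status_code=403, detail="Invalid API key")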

src/backend/base/langflow/initial_setup/starter_projects/Simple Agent.json (1)

1596-1596: URLComponent review – logging OK, regex present, User-Agent header still dynamic

  • URL_REGEX is defined at line 20 and used in validate_url (line 182); consolidation appears applied.
  • Logging calls use logger.debug (lines 245, 252, 262) with no remaining info level logs.
  • The User-Agent header remains dynamic via get_settings_service().settings.user_agent (line 129), not fixed to "langflow" as mentioned in the AI summary.

Please either:

  • Update the code to set the User-Agent header to the fixed value "langflow"
    OR
  • Correct the AI summary/review comment to reflect that the header is still dynamic.
⛔ Skipped due to learnings
Learnt from: edwinjosechittilappilly
PR: langflow-ai/langflow#8504
File: src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json:391-393
Timestamp: 2025-06-12T15:25:01.072Z
Learning: The repository owner prefers CodeRabbit not to review or comment on JSON files because they are autogenerated.
src/backend/base/langflow/components/openai/openai_chat_model.py (7)

8-8: LGTM - Import statement updated for model categorization.

The import change from OPENAI_MODEL_NAMES to OPENAI_CHAT_MODEL_NAMES aligns with the broader refactoring to distinguish between chat and reasoning models.


48-49: LGTM - Model options expanded with appropriate default.

The combination of OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES provides comprehensive model selection, and defaulting to OPENAI_CHAT_MODEL_NAMES[0] is sensible as chat models are more commonly used.


100-100: LGTM - Debug logging enhances traceability.

The debug logging provides useful insight into which model is being executed, which aids in troubleshooting and monitoring.


111-112: LGTM - Clear documentation of reasoning model limitations.

The TODO comment provides valuable context for future development, and the explicit list of unsupported parameters for reasoning models is clear and accurate.


117-119: LGTM - Proper handling of reasoning model constraints.

The logic correctly excludes temperature and seed parameters for reasoning models with informative debug logging explaining why these parameters are ignored.
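
A sketch of that constraint handling; the constants and constructor wiring are illustrative, not the component's exact code:

from langchain_openai import ChatOpenAI

OPENAI_REASONING_MODEL_NAMES = ["o3", "o3-pro", "o4-mini"]  # illustrative subset

def build_model(model_name: str, temperature: float, seed: int, api_key: str, stream: bool):
    kwargs = {"model": model_name, "streaming": stream, "api_key": api_key}
    # Reasoning models reject temperature and seed, so only pass them
    # for regular chat models.
    if model_name not in OPENAI_REASONING_MODEL_NAMES:
        kwargs["temperature"] = temperature
        kwargs["seed"] = seed
    return ChatOpenAI(**kwargs)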


150-152: LGTM - Appropriate UI handling for o1 model constraints.

The logic correctly hides the system_message input for o1 models, which is appropriate since these reasoning models don't support system messages. The prefix check ensures all o1 variants are handled correctly.


153-157: LGTM - Correct UI field visibility for chat models.

The logic properly shows all parameter inputs (temperature, seed, system_message) for chat models, as these models support all these configuration options.

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)

1082-1082: Confirm temperature=None support in ChatOpenAI

We weren’t able to locate the ChatOpenAI.__init__ signature in the local codebase to verify that temperature=None is accepted. Please:

  • Check your lockfile (e.g., poetry.lock or requirements.txt) for the pinned langchain-openai version.
  • Confirm that its ChatOpenAI constructor allows temperature=None.
  • If it doesn’t, update the reasoning-model branch in build_model (around Financial Report Parser.json:1082) to use a numeric default (e.g., 0) instead of None.
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1)

1030-1038: Verify that ChatGoogleGenerativeAI supports the streaming parameter.

Some released versions of langchain_google_genai do not yet implement streaming for Gemini; passing streaming= may raise TypeError: __init__() got an unexpected keyword argument.
Please confirm the package version in poetry.lock / requirements.txt or gate the argument behind a feature check.
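
One possible gate, sketched under the assumption stated above that unsupported versions reject the kwarg with a TypeError at construction; model_name, stream, and api_key are illustrative local names:

from langchain_google_genai import ChatGoogleGenerativeAI

try:
    llm = ChatGoogleGenerativeAI(model=model_name, streaming=stream, google_api_key=api_key)
except TypeError:
    # older releases without streaming support: fall back to the bare constructor
    llm = ChatGoogleGenerativeAI(model=model_name, google_api_key=api_key)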

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

2275-2366: Passing temperature=None may break ChatOpenAI

ChatOpenAI expects temperature to be a float (0-2).
Setting it to None for reasoning models can raise a type error in the LangChain wrapper. Prefer omitting the kwarg altogether:

if model_name in OPENAI_REASONING_MODEL_NAMES:
-    temperature = None
+    temperature = 0  # or drop the argument entirely via **kwargs filtering

Or build kwargs dynamically:

params = dict(model_name=model_name, streaming=stream, openai_api_key=self.api_key)
if temperature is not None:
    params["temperature"] = temperature
return ChatOpenAI(**params)
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)

4586-4586: Verify streaming parameter support in ChatGoogleGenerativeAI

We attempted to inspect the constructor in this sandbox, but langchain_google_genai isn’t installed here. Please confirm in your local environment that ChatGoogleGenerativeAI accepts the streaming kwarg. If it doesn’t, update the Google branch in build_model to only pass streaming when supported—for example:

from inspect import signature

# inside build_model(), when provider == "Google":
params = {"model": model_name, "temperature": temperature}
if "streaming" in signature(ChatGoogleGenerativeAI).parameters:
    params["streaming"] = stream
return ChatGoogleGenerativeAI(**params, google_api_key=self.api_key)
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)

1410-1440: o1-prefix check seems stale / never triggered

update_build_config hides system_message when model_name.startswith("o1"), yet the canonical lists now ship gpt-4o*, gpt-4.*, etc. Unless there are still internal “o1” models in use, this branch will never execute – the UI control will stay visible and may confuse users about unsupported behaviour.

Verify the intended prefix or replace with a safer capability flag (e.g. a dedicated SUPPORTS_SYSTEM_MESSAGE set).
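
One way such a set could look; the constant and its membership are assumptions, not shipped code:

from langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES

# hypothetical capability set: chat models accept system messages, reasoning models may not
SUPPORTS_SYSTEM_MESSAGE = set(OPENAI_CHAT_MODEL_NAMES)

def supports_system_message(model_name: str) -> bool:
    # exact membership instead of a brittle model_name.startswith("o1") check
    return model_name in SUPPORTS_SYSTEM_MESSAGE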

src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)

2644-2660: Verify streaming parameter for ChatGoogleGenerativeAI

ChatGoogleGenerativeAI (langchain-google-genai) historically uses stream rather than streaming. Passing an unknown kwarg will raise TypeError.

Please double-check the current signature and, if necessary, adjust:

-            return ChatGoogleGenerativeAI(
-                model=model_name,
-                temperature=temperature,
-                streaming=stream,
-                google_api_key=self.api_key,
-            )
+            return ChatGoogleGenerativeAI(
+                model=model_name,
+                temperature=temperature,
+                stream=stream,
+                google_api_key=self.api_key,
+            )

Would you run a quick grep or unit test against the current dependency version to confirm which keyword is accepted?
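
A throwaway pytest along those lines (model name and key are placeholders; constructing the client should not make a network call):

from langchain_google_genai import ChatGoogleGenerativeAI

def test_streaming_kwarg_is_accepted():
    # raises TypeError on dependency versions that reject the kwarg
    ChatGoogleGenerativeAI(model="gemini-pro", streaming=True, google_api_key="test-key")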

src/backend/base/langflow/components/models/language_model.py (5)

10-10: LGTM! Import updated to distinguish between chat and reasoning models.

The import change from OPENAI_MODEL_NAMES to separate OPENAI_CHAT_MODEL_NAMES and OPENAI_REASONING_MODEL_NAMES provides better model categorization and aligns with the broader OpenAI model support updates.


39-42: LGTM! Model dropdown properly combines both model types.

The model dropdown now correctly combines both chat and reasoning models, defaults to the first chat model, and includes real-time refresh for dynamic UI updates.


61-61: LGTM! System message input made more accessible.

Changing advanced=False makes the system message input visible by default, improving user experience for this commonly used parameter.


127-128: LGTM! Build config update consistent with model separation.

The update to use both model lists in the build configuration is consistent with the earlier changes and maintains the default selection of the first chat model.


91-93: Please verify ChatOpenAI handles None temperature without errors

I wasn’t able to locate the ChatOpenAI constructor in the repo—ensure that passing temperature=None into langchain.chat_models.ChatOpenAI (or your local wrapper) won’t trigger a type/validation error. You may need to:

  • Inspect the __init__ signature of ChatOpenAI in your LangChain version
  • Or add a quick unit test instantiating it with temperature=None (sketched below)

This affects:

  • src/backend/base/langflow/components/models/language_model.py (around lines 91–93)

If None isn’t accepted, consider filtering it out or supplying a fallback value.
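
A minimal version of that unit test (the model name is a placeholder; constructing the client sends no request):

from langchain_openai import ChatOpenAI

def test_chatopenai_accepts_none_temperature():
    # fails with a pydantic ValidationError/TypeError if None is rejected,
    # in which case the component should omit the kwarg instead
    ChatOpenAI(model_name="o1-mini", temperature=None, openai_api_key="test-key")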

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

2030-2080: temperature=None might be invalid for ChatOpenAI

build_model() forces temperature = None for reasoning models and then passes it straight to ChatOpenAI.
Up-to-date langchain-openai versions still type-hint temperature as float, and runtime validation rejects None. This will raise a ValidationError (pydantic) or TypeError, breaking flow execution as soon as a reasoning model is selected.

-            if model_name in OPENAI_REASONING_MODEL_NAMES:
-                # reasoning models do not support temperature (yet)
-                temperature = None
+            if model_name in OPENAI_REASONING_MODEL_NAMES:
+                # Reasoning models ignore temperature; keep default instead of None
+                temperature = 0.0

Please confirm against the exact langchain-openai version shipped in requirements.txt.
If that version already accepts None, ignore; otherwise adjust as above or omit the arg entirely when None.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)

995-995: Guard against stale model selections when provider changes

update_build_config resets model_name.value to the first option of the new provider, but any downstream nodes or cached configs may still hold the old (now invalid) value.
Consider returning a companion list of invalidated fields or emitting a validation warning so the frontend knows to re-sync dependent values.

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)

1931-1931: LGTM! Proper handling of OpenAI reasoning models implemented.

The embedded LanguageModelComponent code correctly implements the distinction between OpenAI chat and reasoning models. Key improvements:

  1. Correct model imports: Uses OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES for comprehensive model options
  2. Temperature handling: Properly sets temperature = None for reasoning models since they don't support temperature yet
  3. UI adaptations: Dynamically hides system_message for o1 models which don't support system messages
  4. Consistent patterns: Aligns with the broader refactoring described in the AI summary

The logic correctly identifies reasoning models using model_name in OPENAI_REASONING_MODEL_NAMES and the UI updates appropriately respond to model selection changes.

src/backend/base/langflow/base/agents/crewai/crew.py (5)

44-44: LGTM! Proper deferred import pattern implemented.

The changes to convert_llm function correctly implement the deferred import pattern:

  1. Parameter generalization: Changed from specific type to Any to avoid import dependencies
  2. Runtime import: Moved from crewai import LLM inside the function with proper error handling
  3. Clear error message: Provides actionable guidance when CrewAI is not installed

This aligns with the same pattern used in other CrewAI components shown in the relevant code snippets.

Also applies to: 54-58
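
For reference, the pattern reduces to something like this sketch (the constructor arguments are illustrative and depend on the CrewAI version):

from typing import Any

def convert_llm(llm: Any):
    try:
        from crewai import LLM  # resolved at call time, so importing this module never needs CrewAI
    except ImportError as e:
        msg = "CrewAI is not installed. Please install it to use this component."
        raise ImportError(msg) from e
    # argument mapping is an assumption for illustration only
    return LLM(model=getattr(llm, "model_name", str(llm)))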


114-118: LGTM! Consistent deferred import for tools conversion.

The convert_tools function properly implements the same deferred import pattern:

  1. Runtime import: Moves from crewai.tools.base_tool import Tool inside the function
  2. Error handling: Consistent error message format with installation instructions
  3. Graceful degradation: Function can be defined without requiring CrewAI at module load time

153-153: LGTM! Type annotation removal improves dependency management.

The removal of explicit type annotations on class methods is appropriate for this refactoring:

  1. Reduced dependencies: Eliminates need to import CrewAI types at module level
  2. Maintained functionality: Methods still work correctly with duck typing
  3. Consistent pattern: Aligns with similar changes across other CrewAI components

This change supports the goal of making CrewAI an optional dependency while maintaining backward compatibility.

Also applies to: 155-155, 158-158, 171-171, 179-179


186-190: LGTM! Proper error handling for task callback.

The get_task_callback method correctly implements deferred import:

  1. Runtime import: Moves from crewai.task import TaskOutput inside the method
  2. Clear error message: Provides installation instructions when CrewAI is missing
  3. Functional preservation: Callback functionality remains intact when dependencies are available

201-206: LGTM! Comprehensive deferred import for step callback.

The get_step_callback method properly implements the deferred import pattern:

  1. Runtime import: Moves from langchain_core.agents import AgentFinish inside the method
  2. Dual dependency handling: Addresses both CrewAI and langchain_core dependencies
  3. Parameter generalization: Removes specific type annotation for agent_output parameter
  4. Consistent error messaging: Provides clear installation instructions

This completes the pattern of making all CrewAI-related functionality work with optional dependencies.

Also applies to: 207-207

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)

1190-1195: Depth slider now allows up to 10 but keeps step_type="float"

step_type is "float" while step is 1. Using a float step for an integer-only domain is misleading and may break validation in some front-end widgets.

-  "step": 1,
-  "step_type": "float"
+  "step": 1,
+  "step_type": "int"

Also consider adding a short warning in the info field about the exponential crawl cost beyond depth 5.

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, 
field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"

🛠️ Refactor suggestion

system_message may remain hidden after provider switches away from OpenAI

update_build_config hides system_message when an OpenAI model starting with o1 is chosen, but the provider branch doesn’t explicitly reset the flag when the user later changes provider (e.g., to Anthropic).
Because UI updates fire per-field, the previous hidden state can persist, making the field permanently invisible.

Add a reset in the "provider" section:

elif field_name == "provider":
     ...
+    # always ensure system_message is visible when leaving OpenAI/o1 context
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"
def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:
    if field_name == "provider":
        if field_value == "OpenAI":
            build_config["model_name"]["options"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES
            build_config["model_name"]["value"] = OPENAI_CHAT_MODEL_NAMES[0]
            build_config["api_key"]["display_name"] = "OpenAI API Key"
        elif field_value == "Anthropic":
            build_config["model_name"]["options"] = ANTHROPIC_MODELS
            build_config["model_name"]["value"] = ANTHROPIC_MODELS[0]
            build_config["api_key"]["display_name"] = "Anthropic API Key"
        elif field_value == "Google":
            build_config["model_name"]["options"] = GOOGLE_GENERATIVE_AI_MODELS
            build_config["model_name"]["value"] = GOOGLE_GENERATIVE_AI_MODELS[0]
            build_config["api_key"]["display_name"] = "Google API Key"
        # always ensure system_message is visible when leaving OpenAI/o1 context
        if "system_message" in build_config:
            build_config["system_message"]["show"] = True
    elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
        # Hide system_message for o1 models - currently unsupported
        if "system_message" in build_config:
            build_config["system_message"]["show"] = False
    elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
        build_config["system_message"]["show"] = True
    return build_config
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Financial Report
Parser.json at line 1082, the update_build_config method hides the
system_message field when an OpenAI model starting with "o1" is selected but
does not reset this visibility when the provider changes to Anthropic or Google.
To fix this, add logic in the "provider" field_name branch to explicitly set
build_config["system_message"]["show"] to True when the provider switches away
from OpenAI, ensuring the system_message field becomes visible again.

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, 
field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"

🛠️ Refactor suggestion

⚠️ Potential issue

Passing temperature=None can break ChatOpenAI initialisation

build_model() sets temperature = None for reasoning models and still passes it:

return ChatOpenAI(
    model_name=model_name,
    temperature=temperature,   # may be None
    streaming=stream,
    openai_api_key=self.api_key,
)

langchain_openai.ChatOpenAI expects temperature: float (no Optional[float] in the current stub/impl).
If the runtime implementation rejects None, the component will raise at execution time.

-            if model_name in OPENAI_REASONING_MODEL_NAMES:
-                # reasoning models do not support temperature (yet)
-                temperature = None
-
-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            kwargs: dict[str, Any] = {
+                "model_name": model_name,
+                "streaming": stream,
+                "openai_api_key": self.api_key,
+            }
+            # Reasoning models ignore temperature – omit the field entirely
+            if model_name not in OPENAI_REASONING_MODEL_NAMES:
+                kwargs["temperature"] = temperature
+
+            return ChatOpenAI(**kwargs)

This avoids a hard failure while still honouring temperature for chat-capable models.

🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Research Translation
Loop.json at line 1405, the build_model method passes temperature=None to
ChatOpenAI for reasoning models, but ChatOpenAI expects a float and may fail if
given None. To fix this, modify the code to only include the temperature
argument when it is not None, avoiding passing temperature=None to ChatOpenAI
while preserving temperature usage for chat-capable models.

🛠️ Refactor suggestion

update_build_config can leave system_message permanently hidden when switching providers

Hiding/showing the system_message input is only triggered on model_name changes:

elif field_name == "model_name" and field_value.startswith("o1") ...

If a user selects an o1 model (hiding the field) and then switches the provider to Anthropic/Google, the flag is never reset, so system_message stays invisible even though those providers support it.

Quick fix:

if field_name == "provider":
    ...
+    # Always re-enable system_message when leaving OpenAI or changing provider
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True

(You may also want to hide the temperature slider for reasoning models in the same callback to keep the UI consistent.)
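
The temperature toggle could mirror the system_message handling, e.g. (sketch only, reusing the existing build_config conventions):

if field_name == "model_name" and "temperature" in build_config:
    # reasoning models ignore temperature, so hide the slider for them
    build_config["temperature"]["show"] = field_value not in OPENAI_REASONING_MODEL_NAMES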

🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Research Translation
Loop.json at line 1405, the update_build_config method only toggles the
visibility of the system_message input when the model_name changes, causing it
to remain hidden if the provider changes after selecting an o1 model. To fix
this, add logic in the update_build_config method to reset the system_message
visibility appropriately when the provider changes, ensuring it is shown for
Anthropic and Google providers. Additionally, consider adding similar logic to
hide or show the temperature slider for reasoning models to maintain UI
consistency.

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, 
field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"

🛠️ Refactor suggestion

o1 prefix heuristic is brittle

Hiding system_message only when model_name.startswith("o1") risks missing other reasoning models (e.g., future o3.* or o4.* releases) that also lack system-message support, and names like gpt-4o show how fragile prefix matching on model names is. Maintain an explicit deny-list or capability map in openai_constants instead of relying on a hard-coded prefix.
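
Sketch of the deny-list approach; the constant name and entries are illustrative, not existing code:

# in openai_constants.py
MODELS_WITHOUT_SYSTEM_MESSAGE = {"o1", "o1-mini", "o1-preview"}

# at the call site, replacing the startswith("o1") heuristic
if field_value in MODELS_WITHOUT_SYSTEM_MESSAGE and "system_message" in build_config:
    build_config["system_message"]["show"] = False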

🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Vector Store
RAG.json at line 4586, the code hides the system_message input based on the
model_name starting with "o1", which is a brittle heuristic. To fix this,
replace the prefix check with a lookup against an explicit deny-list or
capability map defined in openai_constants that lists models without
system_message support. Update the condition to check if the model_name is in
that deny-list instead of using startswith("o1").

⚠️ Potential issue

Avoid passing temperature=None to ChatOpenAI

langchain_openai.ChatOpenAI expects temperature to be a float in [0,2]. Passing None can raise a pydantic ValidationError at runtime. Skip the parameter when it’s not applicable instead of passing None.

-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            openai_kwargs = {
+                "model_name": model_name,
+                "streaming": stream,
+                "openai_api_key": self.api_key,
+            }
+            if temperature is not None:
+                openai_kwargs["temperature"] = temperature
+            return ChatOpenAI(**openai_kwargs)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"
if provider == "OpenAI":
if not self.api_key:
msg = "OpenAI API key is required when using OpenAI provider"
raise ValueError(msg)
if model_name in OPENAI_REASONING_MODEL_NAMES:
# reasoning models do not support temperature (yet)
temperature = None
- return ChatOpenAI(
- model_name=model_name,
- temperature=temperature,
- streaming=stream,
- openai_api_key=self.api_key,
- )
+ openai_kwargs = {
+ "model_name": model_name,
+ "streaming": stream,
+ "openai_api_key": self.api_key,
+ }
+ if temperature is not None:
+ openai_kwargs["temperature"] = temperature
+ return ChatOpenAI(**openai_kwargs)
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Vector Store
RAG.json at line 4586, the build_model method passes temperature=None to
ChatOpenAI when using reasoning models, which causes a validation error. To fix
this, modify the code to omit the temperature parameter entirely from the
ChatOpenAI constructor when temperature is None, instead of passing
temperature=None.
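
For anyone applying the same fix by hand elsewhere, here is a minimal sketch of the omit-when-None pattern; make_chat_model and the REASONING_MODELS set are illustrative placeholders, not Langflow or langchain-openai APIs.

from langchain_openai import ChatOpenAI

REASONING_MODELS = {"o1", "o1-mini", "o3-mini"}  # assumed example set, not the real constant

def make_chat_model(model_name: str, api_key: str, temperature: float | None = 0.1) -> ChatOpenAI:
    if model_name in REASONING_MODELS:
        temperature = None  # these models reject the parameter
    kwargs = {"model_name": model_name, "openai_api_key": api_key}
    if temperature is not None:
        kwargs["temperature"] = temperature  # only sent when it is a real number
    return ChatOpenAI(**kwargs)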

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, 
field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"

⚠️ Potential issue

Wrong keyword for ChatAnthropic instantiation

ChatAnthropic expects the argument model_name, not model. Using the wrong keyword will raise a TypeError at runtime when users switch the provider to Anthropic.

-            return ChatAnthropic(
-                model=model_name,
+            return ChatAnthropic(
+                model_name=model_name,
                 temperature=temperature,
                 streaming=stream,
                 anthropic_api_key=self.api_key,
             )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"
if provider == "Anthropic":
if not self.api_key:
msg = "Anthropic API key is required when using Anthropic provider"
raise ValueError(msg)
return ChatAnthropic(
model_name=model_name,
temperature=temperature,
streaming=stream,
anthropic_api_key=self.api_key,
)
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword
Generator.json at line 968, the instantiation of ChatAnthropic incorrectly uses
the keyword argument 'model' instead of 'model_name'. To fix this, replace the
'model' keyword with 'model_name' in the ChatAnthropic constructor call to match
its expected parameters and avoid runtime TypeErrors.
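
When in doubt about which keyword a given release accepts, the Pydantic field metadata settles it. A quick check, assuming a langchain-anthropic version built on Pydantic v2 (older releases expose __fields__ instead of model_fields):

from langchain_anthropic import ChatAnthropic

# List the field name and alias ChatAnthropic accepts for the model id
# (assumes Pydantic v2; on Pydantic v1 iterate ChatAnthropic.__fields__).
for name, field in ChatAnthropic.model_fields.items():
    if "model" in name or (field.alias and "model" in field.alias):
        print(f"field={name!r} alias={field.alias!r}")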

💡 Verification agent

🧩 Analysis chain

Double-check that temperature=None is accepted by ChatOpenAI

When a reasoning model is selected you coerce temperature to None, yet you still pass it through to ChatOpenAI. Older versions of langchain_openai expect a real float and will raise:

TypeError: float() argument must be a string or a real number, not 'NoneType'

Please verify the current version’s signature or guard the parameter, e.g.:

kwargs = dict(model_name=model_name, streaming=stream, openai_api_key=self.api_key)
if temperature is not None:
    kwargs["temperature"] = temperature
return ChatOpenAI(**kwargs)

🌐 Web query:

Does `langchain_openai.ChatOpenAI` accept `temperature=None` without error?

💡 Result:

The behavior of passing temperature=None to langchain_openai.ChatOpenAI depends on the version of LangChain and, crucially, on the specific OpenAI model you are using.

1. Supported Values for temperature Parameter

  • Most OpenAI chat models (such as gpt-3.5-turbo and gpt-4) expect temperature to be a float between 0 and 2. Passing None typically results in no temperature parameter being sent to the OpenAI API, and the API will use its default value (usually 1 or 0.7, depending on the model)[3][10].
  • In recent versions (langchain-openai >= 0.3), the default for temperature was updated to None to avoid compatibility issues with some of the newer "o" series models, which may not accept a temperature parameter at all[2].

2. Model-specific Limitations

  • Some newer OpenAI models, such as o3-mini, explicitly do not support the temperature parameter at all. Passing it (even as None, if it results in the parameter being sent as "temperature": null) will result in an error from the API:
    BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", ... }}
    
    [2][4]
  • If langchain_openai.ChatOpenAI is implemented such that temperature=None is omitted from the payload, you will not get an error. If the parameter is included as null or None, you will get an error for models that strictly disallow it.

3. LangChain's Current Behavior (as of 2024/2025)

  • Documentation and community reports indicate that with langchain-openai ~0.3 and up, setting temperature=None is usually safe and will not raise an error for the models that require this behavior, as the parameter will simply be omitted from the API call payload[2][6].
  • However, for maximum safety and compatibility, especially with models that do not support the parameter at all, do not set the temperature parameter at all—let it default, or set it via model_kwargs only if you know the model accepts it[2][6][10].

If you are using a model like gpt-4o or gpt-3.5-turbo, passing temperature=None is very unlikely to cause an error.
If you are using a model like o3-mini or another "o" series model, passing temperature=None is acceptable with recent LangChain versions, provided the library omits the parameter in the payload[2][4].
If you pass temperature=0 or any float, but the model does not support it, an error will occur[2][4].

Summary Table:

Model          | temperature=None allowed? | temperature float allowed? | Error if param sent?
gpt-3.5-turbo  | Yes                       | Yes                        | No
gpt-4, gpt-4o  | Yes                       | Yes                        | No
o3-mini, o1    | Yes (if omitted)          | No                         | Yes, if parameter is sent

Best Practice:
Use temperature=None only with recent langchain-openai versions and models that require it; otherwise, omit the parameter entirely[2][6][10]. For most robust code, detect model capabilities and set parameters appropriately.
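
As a concrete sketch of that best practice, one can gate the parameter on the model name; the prefix tuple below is an assumption for illustration, not an official capability list.

def sampling_kwargs(model_name: str, temperature: float) -> dict:
    # Assumed prefixes for models that reject temperature; check provider docs.
    no_temperature_prefixes = ("o1", "o3")
    if model_name.startswith(no_temperature_prefixes):
        return {}  # omit the parameter entirely for o-series models
    return {"temperature": temperature}

# usage sketch: ChatOpenAI(model_name=name, openai_api_key=key, **sampling_kwargs(name, 0.2))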

Ensure ChatOpenAI is not called with temperature=None

Passing temperature=None can lead to malformed payloads or API errors for certain OpenAI models (especially “o-series”). Guard against this by only including the parameter when it’s a real number.

Locations to update:

  • File: src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • Method: LanguageModelComponent.build_model() under the provider == "OpenAI" branch

Suggested change:

-            return ChatOpenAI(
-                model_name=model_name,
-                temperature=temperature,
-                streaming=stream,
-                openai_api_key=self.api_key,
-            )
+            kwargs = {
+                "model_name": model_name,
+                "streaming": stream,
+                "openai_api_key": self.api_key,
+            }
+            if temperature is not None:
+                kwargs["temperature"] = temperature
+            return ChatOpenAI(**kwargs)

This ensures temperature is only sent when supported, avoiding null parameters in the API payload.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"
if provider == "OpenAI":
if not self.api_key:
msg = "OpenAI API key is required when using OpenAI provider"
raise ValueError(msg)
if model_name in OPENAI_REASONING_MODEL_NAMES:
# reasoning models do not support temperature (yet)
temperature = None
kwargs = {
"model_name": model_name,
"streaming": stream,
"openai_api_key": self.api_key,
}
if temperature is not None:
kwargs["temperature"] = temperature
return ChatOpenAI(**kwargs)
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword
Generator.json at line 968 inside the LanguageModelComponent.build_model()
method under the provider == "OpenAI" branch, the temperature parameter is
passed directly even when it is None, which can cause API errors. Modify the
code to include the temperature parameter only if it is not None by
conditionally adding it to the ChatOpenAI constructor arguments, ensuring no
temperature=None is sent in the API call.

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, 
field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n 
model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n"

⚠️ Potential issue

Fix wrong keyword for streaming in Anthropic & Google model builders

ChatAnthropic and ChatGoogleGenerativeAI expect the keyword argument stream, not streaming.
Passing the wrong kwarg will raise TypeError: __init__() got an unexpected keyword argument 'streaming' at runtime.

-            return ChatAnthropic(
-                model=model_name,
-                temperature=temperature,
-                streaming=stream,
-                anthropic_api_key=self.api_key,
-            )
+            return ChatAnthropic(
+                model=model_name,
+                temperature=temperature,
+                stream=stream,
+                anthropic_api_key=self.api_key,
+            )

 [...]

-            return ChatGoogleGenerativeAI(
-                model=model_name,
-                temperature=temperature,
-                streaming=stream,
-                google_api_key=self.api_key,
-            )
+            return ChatGoogleGenerativeAI(
+                model=model_name,
+                temperature=temperature,
+                stream=stream,
+                google_api_key=self.api_key,
+            )
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json
at line 995, the ChatAnthropic and ChatGoogleGenerativeAI constructors
incorrectly use the keyword argument 'streaming' instead of 'stream'. To fix
this, replace the 'streaming' keyword argument with 'stream' in both
ChatAnthropic and ChatGoogleGenerativeAI model builder calls to avoid the
TypeError at runtime.
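
A provider-agnostic alternative, whichever constructor flag a given class accepts, is to stream at call time through the Runnable interface that every LangChain chat model implements; the model id and key below are placeholders.

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-latest", anthropic_api_key="sk-...")  # placeholder values
for chunk in llm.stream("Say hello"):
    print(chunk.content, end="", flush=True)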

@ogabrielluiz ogabrielluiz changed the title feat: add platform-specific cURL generation and expand OpenAI model support chore: release 1.5.0 Jul 8, 2025
@ogabrielluiz ogabrielluiz added the fast-track Skip tests and sends PR into the merge queue label Jul 8, 2025
@github-actions github-actions bot added ignore-for-release and removed enhancement New feature or request labels Jul 8, 2025
@ogabrielluiz ogabrielluiz enabled auto-merge July 8, 2025 12:51
@ogabrielluiz ogabrielluiz disabled auto-merge July 8, 2025 12:52
codeflash-ai bot added a commit that referenced this pull request Jul 8, 2025
Here is a rewritten, **optimized** version of your program, focusing on the significant hot spots (from the line profiler) and leveraging locality, reduced allocations, and fast Python idioms. The main bottlenecks are **`_find_api_key`** (particularly attribute access and lower-casing/search) and the **dict filtering** at the bottom of `convert_llm`.

Key changes for speed:
- **`_find_api_key`**:
  - Use a **cached set of lowered patterns** for fast `in` checks.
  - Scan attributes and return the **first** string/SecretStr-valued match, stopping early.
  - Prefer `vars(model)` (essentially `model.__dict__`), which is faster than `dir()`, falling back to `dir()` only when needed.
  - Minimize repeated operations inside loops.
- **`convert_llm`**:
  - **Precompute** the dict filter set and filter in a single **dict comprehension** (Py3.7+ dicts preserve order and are fast).
  - Hoist one-time lookups out of repeated control flow.
  - Fetch the dict only once.

**Summary of speed improvements:**
- Use `vars(model)`/`.__dict__` directly when available, which is faster than `dir()` and does less work.
- Inline the pattern check for each attribute name, avoiding an unnecessary generator.
- Attribute access and string lower-casing occur only once per attribute.
- `convert_llm` dict filtering is now single-pass.
- **Overall effect**: dramatically reduces function call count, attribute access, and per-item Python overhead in both hot spots.

If you want even further micro-optimization for `_find_api_key`, you could also break on the first attribute whose value is not `None` rather than searching all attributes, but normally there is just one such key, so this won't matter for correctness or speed.
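
To make the described changes concrete, here is a hedged reconstruction of both patterns; the function bodies are illustrative, not the actual crew.py code.

from functools import lru_cache
from typing import Any

_KEY_PATTERNS = ("api_key", "apikey", "token", "secret")  # assumed pattern set

@lru_cache(maxsize=1)
def _lower_patterns() -> tuple[str, ...]:
    # cache the lowered patterns so they are computed once, not per call
    return tuple(p.lower() for p in _KEY_PATTERNS)

def find_api_key(model: Any) -> str | None:
    patterns = _lower_patterns()
    # vars() reads __dict__ directly; dir() is the slower fallback
    attrs = vars(model) if hasattr(model, "__dict__") else {
        a: getattr(model, a, None) for a in dir(model)
    }
    for name, value in attrs.items():
        lowered = name.lower()  # lower-case once per attribute
        if isinstance(value, str) and any(p in lowered for p in patterns):
            return value  # stop at the first string-valued match (SecretStr handling omitted)
    return None

def filter_llm_dict(llm_dict: dict, excluded: frozenset = frozenset({"callbacks", "cache"})) -> dict:
    # single-pass comprehension replaces repeated per-key deletions
    return {k: v for k, v in llm_dict.items() if k not in excluded}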

codeflash-ai bot commented Jul 8, 2025

⚡️ Codeflash found optimizations for this PR

📄 163% (1.63x) speedup for convert_llm in src/backend/base/langflow/base/agents/crewai/crew.py

⏱️ Runtime: 4.74 milliseconds → 1.80 milliseconds (best of 92 runs)

I created a new dependent PR with the suggested changes. Please review:

If you approve, it will be merged into this PR (branch release-1.5.0).

@ogabrielluiz ogabrielluiz merged commit 24db0cd into main Jul 8, 2025
212 of 220 checks passed
@ogabrielluiz ogabrielluiz deleted the release-1.5.0 branch July 8, 2025 12:59

Labels

fast-track (Skip tests and sends PR into the merge queue), ignore-for-release, size:L (This PR changes 100-499 lines, ignoring generated files)
