chore: release 1.5.0 #8930
Conversation
- Updated langflow version to 1.5.0 in pyproject.toml, package.json, and package-lock.json.
- Updated langflow-base dependency to version 0.5.0.
- Added platform markers for several dependencies in uv.lock to improve compatibility across different systems.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
…ailures (#8890)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>

fix: fixes auth check for auto_login (#8796)
* Add new openai reasoning models
* [autofix.ci] apply automated fixes
* Updates language model, but FE doesn't send a POST for updating template atm
* use chatopenai constants
* [autofix.ci] apply automated fixes
* Add reasoning to language model test
* Remove temp from all reasoning models
* [autofix.ci] apply automated fixes
* refactor: Update template notes (#8816)
* update templates
* small-changes
* template cleanup
---------
Co-authored-by: Mendon Kissling <[email protected]>
* ruff
* uv lock
* starter projects update
* [autofix.ci] apply automated fixes
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>
Co-authored-by: Mendon Kissling <[email protected]>
* chore: Bump version to 1.5.0 and update dependencies
  - Updated langflow version to 1.5.0 in pyproject.toml, package.json, and package-lock.json.
  - Updated langflow-base dependency to version 0.5.0.
  - Added platform markers for several dependencies in uv.lock to improve compatibility across different systems.
* fix: fixes auth check for auto_login (#8796)
* ref: improve docling template updates and error message (#8837)
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
* Attempt to provide powershell curl command
* [autofix.ci] apply automated fixes
* [autofix.ci] apply automated fixes (attempt 2/3)
* Added OS selector to code tabs
* Added no select classes to API modal
* ✨ (code-tabs.tsx): add data-testid attribute to API tab elements for testing purposes
  🔧 (tweaksTest.spec.ts, curlApiGeneration.spec.ts, pythonApiGeneration.spec.ts, generalBugs-shard-3.spec.ts): update test scripts to use data-testid attribute for API tab elements instead of role attribute
---------
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Lucas Oliveira <[email protected]>
Co-authored-by: cristhianzl <[email protected]>
Walkthrough

This update introduces platform-specific cURL code generation and UI in the API modal, expands OpenAI model support by distinguishing between chat and reasoning models across backend and starter project files, and refactors CrewAI component imports for safer optional dependency handling. Several UI and test adjustments accompany these changes, along with dependency and version updates in both backend and frontend.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API_Modal_UI
    participant getNewCurlCode
    participant Clipboard
    User->>API_Modal_UI: Opens API modal
    API_Modal_UI->>API_Modal_UI: Detects OS (macOS/Linux or Windows)
    API_Modal_UI->>getNewCurlCode: Requests cURL code (with platform param)
    getNewCurlCode-->>API_Modal_UI: Returns platform-specific cURL command
    User->>API_Modal_UI: Selects/copies cURL code
    API_Modal_UI->>Clipboard: Copies selected code
```

```mermaid
sequenceDiagram
    participant User
    participant LanguageModelComponent
    participant UI
    User->>LanguageModelComponent: Selects OpenAI model
    LanguageModelComponent->>UI: Updates model dropdown (chat + reasoning models)
    LanguageModelComponent->>UI: Hides/shows system_message input (if model starts with "o1")
    LanguageModelComponent->>LanguageModelComponent: Sets temperature=None if reasoning model
```
Actionable comments posted: 8
🔭 Outside diff range comments (18)
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1)
1045-1068: Guard against `None` temperature being passed into `ChatOpenAI`.

`ChatOpenAI` expects `float | None` for `temperature`; however, some versions treat `None` as "parameter not supplied" and others raise a `TypeError`. Instead of always forwarding `None`, build the kwargs dynamically:

```diff
-        return ChatOpenAI(
-            model_name=model_name,
-            temperature=temperature,
-            streaming=stream,
-            openai_api_key=self.api_key,
-        )
+        kwargs = dict(
+            model_name=model_name,
+            streaming=stream,
+            openai_api_key=self.api_key,
+        )
+        if temperature is not None:  # chat-models only
+            kwargs["temperature"] = temperature
+        return ChatOpenAI(**kwargs)
```

This prevents unexpected runtime errors when the reasoning models are selected.
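The kwargs-building pattern suggested above can be exercised independently of LangChain; this is a minimal sketch (the helper name and signature are illustrative, not part of the langflow codebase):

```python
def build_model_kwargs(model_name, temperature, stream, api_key):
    """Collect constructor kwargs, omitting temperature when it is None.

    Forwarding temperature=None can TypeError on some client versions,
    so the key is simply left out instead.
    """
    kwargs = {
        "model_name": model_name,
        "streaming": stream,
        "openai_api_key": api_key,
    }
    if temperature is not None:
        kwargs["temperature"] = temperature
    return kwargs


# Reasoning models: temperature stays unset rather than being sent as None.
reasoning = build_model_kwargs("o1-mini", None, False, "sk-...")
chat = build_model_kwargs("gpt-4o", 0.2, True, "sk-...")
print("temperature" in reasoning)  # False
print(chat["temperature"])         # 0.2
```

The resulting dict can then be splatted into any constructor via `ChatOpenAI(**kwargs)`, keeping the call site unchanged.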
src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)
2275-2366: `update_build_config` leaves `system_message` permanently hidden after switching away from o1 models

When the provider is changed (e.g., from OpenAI ➔ Anthropic), `update_build_config` updates the `model_name` dropdown but never re-enables the `system_message` field if it had previously been hidden by an `o1` model selection. Result: users of non-OpenAI providers can no longer set a system prompt.

Minimal fix inside the `provider` branch:

```diff
 elif field_name == "provider":
     ...
+    # Always ensure system_message is visible when changing provider
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True
```

Add before the `return` at the end of that branch.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)
1375-1410: Avoid passing `None` for `temperature`; can break `ChatOpenAI` initialisation

`ChatOpenAI` (LangChain ≤ 0.2) expects a `float`; giving `None` will raise `TypeError`. Instead of forcing `temperature=None`, omit the parameter when the model is in `OPENAI_REASONING_MODEL_NAMES`.

```diff
-        if model_name in OPENAI_REASONING_MODEL_NAMES:
-            # reasoning models do not support temperature (yet)
-            temperature = None
-
-        return ChatOpenAI(
-            model_name=model_name,
-            temperature=temperature,
-            streaming=stream,
-            openai_api_key=self.api_key,
-        )
+        openai_kwargs = {
+            "model_name": model_name,
+            "streaming": stream,
+            "openai_api_key": self.api_key,
+        }
+        # Reasoning models currently ignore temperature; only add when supported
+        if model_name not in OPENAI_REASONING_MODEL_NAMES:
+            openai_kwargs["temperature"] = temperature
+
+        return ChatOpenAI(**openai_kwargs)
```

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)
2563-2620: `system_message` can stay hidden when switching providers

`update_build_config` hides the system_message field when an OpenAI o1 model is selected, but never restores it when the user later switches to Anthropic/Google or a non-o1 OpenAI model via a provider change. The UI will therefore permanently lose that input until the page is refreshed.

```diff
@@ if field_name == "provider":
+    # Ensure system_message is always visible when leaving OpenAI-o1 context
+    if field_value != "OpenAI" and "system_message" in build_config:
+        build_config["system_message"]["show"] = True
+
     if field_value == "OpenAI":
         ...
-    elif field_value == "Anthropic":
+    elif field_value == "Anthropic":
         ...
+        if "system_message" in build_config:
+            build_config["system_message"]["show"] = True
     elif field_value == "Google":
         ...
+        if "system_message" in build_config:
+            build_config["system_message"]["show"] = True
```

This guarantees the field visibility is reset whenever the provider no longer imposes the o1 limitation.
2563-2590: Class body duplicated twice in the same flow

The identical `LanguageModelComponent` source string is embedded in two separate nodes. Keeping two diverging copies will inevitably drift. Extract a single component in `src/backend/base/langflow/components/models/language_model.py` and reference it from both nodes.

src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (2)
2626-2643: Potential `temperature=None` incompatibility with `ChatOpenAI`

`ChatOpenAI` (langchain_openai) expects `temperature: float | None = 0.7` on its pydantic model. Early builds of the provider accepted `None`, but recent releases validate the field as `float ≥ 0`. Passing `None` can therefore raise a `ValidationError` at runtime with the latest LangChain.

```diff
-        if model_name in OPENAI_REASONING_MODEL_NAMES:
-            # reasoning models do not support temperature (yet)
-            temperature = None
+        if model_name in OPENAI_REASONING_MODEL_NAMES:
+            # reasoning models ignore temperature – keep the slider visible
+            # but force-set a neutral value accepted by the SDK.
+            temperature = 0.0
```

A defensive `0.0` keeps API parity while effectively disabling randomness. Consider also hiding / disabling the Temperature slider in the UI for reasoning models for clarity.
2660-2674: `update_build_config` misses "o1" visibility toggle after provider switch

When the user switches the provider back to OpenAI, the `model_name` field is repopulated but the `system_message` visibility is not re-evaluated. This leaves the field visible even though the default model may start with `o1`, contradicting the logic below.

Add a post-update sanity check:

```diff
         build_config["model_name"]["value"] = OPENAI_CHAT_MODEL_NAMES[0]
         build_config["api_key"]["display_name"] = "OpenAI API Key"
+        # Re-apply system_message visibility based on the new default
+        if OPENAI_CHAT_MODEL_NAMES[0].startswith("o1"):
+            build_config["system_message"]["show"] = False
+        else:
+            build_config["system_message"]["show"] = True
```

Applies symmetrically when provider is changed away from or back to OpenAI.
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)
2068-2090: Starter-project template still ships only chat models – reasoning models won't be selectable on first load

`inputs[1]` now correctly builds the runtime template with `options = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES`, but the persisted template that ships with this starter-project (see `"model_name".options` a few lines below) is still the old hard-coded chat-only list.

When users open this project for the very first time (provider already = `"OpenAI"`), `update_build_config` will not be invoked, so the reasoning models are invisible until the user manually flips the provider away and back. That defeats the purpose of exposing those new models.

Either:

```diff
-  "options": ["gpt-4o-mini", "gpt-4o", ...]  # chat-only
+  "options": [],  # leave empty
```

and rely on `update_build_config` to populate the list on component initialisation, or regenerate the starter-project JSON after the code change so that both chat and reasoning models are already present.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)
2060-2075: UI hiding logic can desynchronise when switching providers

`update_build_config()` hides `system_message` when `model_name` starts with `"o1"`, but only if the current provider is OpenAI. If the user:

- Selects an `"o1"` model (field hidden), then
- Switches provider to Anthropic / Google,

`system_message.show` remains `False`, silently hiding a perfectly valid field.

Consider resetting the flag on every provider change:

```diff
 if field_name == "provider":
     ...
+    # Always re-enable system_message when provider changes
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True
```

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2)
1531-1587: `update_build_config` leaves system_message permanently hidden when the user changes provider

If the user selects an OpenAI o1-* model, `system_message.show` is set to `False`. Later, if the user switches the provider (e.g. to Anthropic or Google), the flag is never reset because the `provider` branch does not touch `system_message`. Result: for every non-OpenAI provider the System Message field may stay invisible, breaking the UI.

```diff
-    if field_name == "provider":
+    if field_name == "provider":
         ...
+        # (Re)-enable system_message visibility when leaving OpenAI-o1 context
+        if "system_message" in build_config:
+            build_config["system_message"]["show"] = True
```

This one-liner guarantees consistent behaviour across provider switches.
1600-1642: Temperature slider still shown for reasoning models that ignore temperature

`build_model` overrides `temperature = None` for models in `OPENAI_REASONING_MODEL_NAMES`, but the Temperature slider remains visible and editable in the UI.

Consider hiding or disabling the slider when a reasoning model is selected to avoid confusion:

```diff
 elif field_name == "model_name":
     ...
+    # Hide temperature control for reasoning models (no-op)
+    if field_value in OPENAI_REASONING_MODEL_NAMES:
+        build_config["temperature"]["show"] = False
+    else:
+        build_config["temperature"]["show"] = True
```

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (2)
1305-1340: Prefix check breaks `system_message` toggle for reasoning models

`update_build_config` hides the `system_message` input only when `field_value.startswith("o1")`, but every entry in `OPENAI_REASONING_MODEL_NAMES` is prefixed with `gpt-4o` (e.g. gpt-4o-mini). Consequently the field is never hidden for reasoning models and UI/UX divergence sneaks in.

```diff
-elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
+elif (
+    field_name == "model_name"
+    and field_value in OPENAI_REASONING_MODEL_NAMES
+    and self.provider == "OpenAI"
+):
```

Mirror the same condition in the "show again" branch. Without this fix users can enter a system message that the backend model then ignores or errors on.
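The difference between a prefix check and a membership check can be shown with plain strings; the model names below are illustrative stand-ins, not the actual langflow constants:

```python
# Illustrative reasoning-model list (NOT the real OPENAI_REASONING_MODEL_NAMES).
REASONING_MODELS = ["gpt-4o-mini-reasoning", "o1-preview", "o1-mini"]


def hidden_by_prefix(model: str) -> bool:
    # Old logic: only matches names that literally start with "o1".
    return model.startswith("o1")


def hidden_by_membership(model: str) -> bool:
    # Proposed logic: matches exactly the entries of the reasoning list.
    return model in REASONING_MODELS


# A reasoning model whose name does not start with "o1" slips through
# the prefix check but is caught by the membership check.
print(hidden_by_prefix("gpt-4o-mini-reasoning"))      # False
print(hidden_by_membership("gpt-4o-mini-reasoning"))  # True
```

Membership against the canonical list also keeps the UI logic correct automatically when new model names are appended to the constant.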
1340-1360: `system_message` visibility not reset when provider changes

If a user selects a reasoning model (field hidden) and then switches the provider to Anthropic/Google, `system_message` remains hidden forever because the provider branch never re-enables it.

```diff
 if field_name == "provider":
     ...
     build_config["api_key"]["display_name"] = "Anthropic API Key"
+    # restore fields that might have been hidden by a previous selection
+    if "system_message" in build_config:
+        build_config["system_message"]["show"] = True
```

Do the same for the Google branch. This guarantees a predictable UI across provider switches.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (2)
1528-1545: `system_message` visibility toggle lacks provider guard

When `field_name == "model_name"` and the model does not start with `"o1"`, the code unconditionally sets `system_message.show = True`, even for Anthropic/Google providers where the field might already have a different visibility state.

Safer logic:

```diff
-    elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
+    elif (
+        field_name == "model_name"
+        and self.provider == "OpenAI"
+        and not field_value.startswith("o1")
+        and "system_message" in build_config
+    ):
```

Prevents accidental UI flicker for non-OpenAI providers.
1495-1511: Enforce float temperature for ChatOpenAI

`ChatOpenAI` requires a `float` between 0 and 2; passing `None` will raise a `ValidationError`. For reasoning models, where you want deterministic output, set `temperature` to 0 instead of `None`:

- File: src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json
- Location: `build_model`, inside the `if model_name in OPENAI_REASONING_MODEL_NAMES` block

```diff
-        if model_name in OPENAI_REASONING_MODEL_NAMES:
-            # reasoning models do not support temperature (yet)
-            temperature = None
+        if model_name in OPENAI_REASONING_MODEL_NAMES:
+            # reasoning models do not support temperature; use deterministic output
+            temperature = 0
```

This change prevents a runtime `ValidationError` from `ChatOpenAI`.

src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (1)
1638-1679: Guard against `temperature=None` incompatibility in `ChatOpenAI` call

Some LangChain providers (incl. `ChatOpenAI`) still validate `temperature` as `float | int` and raise on `None`. To stay API-safe for older library patch levels, pass the argument only when a numeric value is required:

```diff
-        if model_name in OPENAI_REASONING_MODEL_NAMES:
-            # reasoning models do not support temperature (yet)
-            temperature = None
-
-        return ChatOpenAI(
-            model_name=model_name,
-            temperature=temperature,
-            streaming=stream,
-            openai_api_key=self.api_key,
-        )
+        if model_name in OPENAI_REASONING_MODEL_NAMES:
+            # reasoning models do not support temperature (yet)
+            return ChatOpenAI(
+                model_name=model_name,
+                streaming=stream,
+                openai_api_key=self.api_key,
+            )
+        return ChatOpenAI(
+            model_name=model_name,
+            temperature=temperature,
+            streaming=stream,
+            openai_api_key=self.api_key,
+        )
```

This avoids a potential `TypeError` on older LangChain versions and keeps the public signature intact.

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (2)
1171-1175: Static "langflow" User-Agent may reduce crawl success

The default header has been changed from the dynamic `get_settings_service().settings.user_agent` value to the hard-coded string `"langflow"`. Many sites block or throttle requests with non-standard or generic user-agents, so hard-coding this could noticeably lower the success rate of `URLComponent`.

```diff
-    value=[{"key": "User-Agent", "value": get_settings_service().settings.user_agent}],
+    value=[{"key": "User-Agent", "value": get_settings_service().settings.user_agent}],
```

If you really need a fixed UA, at least preserve the old behaviour as a fallback when the user leaves the field blank.
1409-1450: Passing `temperature=None` to `ChatOpenAI` is risky

`ChatOpenAI` expects `temperature` to be a float (0-2). Passing `None` relies on internal default handling that may change and could raise `TypeError` in future SDK versions.

```diff
-        # reasoning models do not support temperature (yet)
-        temperature = None
+        # Reasoning models currently ignore temperature – pick safe default
+        temperature = 0.0
```

Alternatively, omit the parameter entirely when it is `None`.
♻️ Duplicate comments (7)
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (1)
2856-2915: Same issues as above – see previous comments.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)
1819-1881: Same issue as above — duplicate copy of `LanguageModelComponent`

The second embedded `LanguageModelComponent` repeats the exact code and therefore inherits the UI-visibility bug for system_message and the temperature slider. Apply the same fixes to keep both nodes in sync.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (2)
1601-1630: Duplicate of the first issue for the second `LanguageModelComponent` block – please apply the same prefix-check fix here.

1896-1925: Duplicate of the first two issues for the third `LanguageModelComponent` block – ensure both fixes propagate.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (2)
1791-1807: Duplicate of the issues raised for lines 1495-1550 (temperature handling & provider guard).
2086-2102: Duplicate of the issues raised for lines 1495-1550 (temperature handling & provider guard).

src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (1)
1928-1969: Same `temperature=None` concern applies here – see previous comment.
🧹 Nitpick comments (15)
src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1)
92-95: Consider simplifying the JSON formatting for better consistency.

The current formatting mixes spaces and tabs which may render inconsistently across different terminals. Consider using only spaces or only tabs for indentation.

```diff
-  const unixFormattedPayload = JSON.stringify(processedPayload, null, 2)
-    .split("\n")
-    .map((line, index) => (index === 0 ? line : " " + line))
-    .join("\n\t\t");
+  const unixFormattedPayload = JSON.stringify(processedPayload, null, 2)
+    .split("\n")
+    .map((line, index) => (index === 0 ? line : " " + line))
+    .join("\n    ");
```

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)
1082-1082: Deduplicate chat & reasoning model lists

`OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES` risks duplicate entries if a model appears in both constants. A tiny one-liner avoids duplicates while preserving order:

```diff
-    options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,
+    options=list(dict.fromkeys(OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES)),
```

src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (2)
1095-1120: Hide the temperature slider when a reasoning model is chosen.

`update_build_config` correctly toggles the `system_message` field for `o1*` models, but the `temperature` input remains visible even though it will be ignored (and may mislead users). Extend the conditional block:

```diff
-    elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
+    elif field_name == "model_name" and field_value in OPENAI_REASONING_MODEL_NAMES:
         # Hide system_message for o1 models - currently unsupported
         if "system_message" in build_config:
             build_config["system_message"]["show"] = False
+        if "temperature" in build_config:
+            build_config["temperature"]["show"] = False
-    elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
+    elif field_name == "model_name" and field_value not in OPENAI_REASONING_MODEL_NAMES:
         build_config["system_message"]["show"] = True
+        if "temperature" in build_config:
+            build_config["temperature"]["show"] = True
```

Keeping the UI in sync with backend capabilities avoids confusion.
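The show/hide bookkeeping described here can be exercised with plain dicts; the `build_config` shape and the model list below are simplified stand-ins for langflow's real template structure:

```python
# Illustrative reasoning-model list; the real constant lives elsewhere in langflow.
OPENAI_REASONING_MODEL_NAMES = ["o1-preview", "o1-mini"]


def toggle_fields(build_config: dict, model_name: str) -> dict:
    """Hide system_message and temperature for reasoning models, show otherwise."""
    visible = model_name not in OPENAI_REASONING_MODEL_NAMES
    for field in ("system_message", "temperature"):
        if field in build_config:
            build_config[field]["show"] = visible
    return build_config


config = {"system_message": {"show": True}, "temperature": {"show": True}}
toggle_fields(config, "o1-mini")
print(config["temperature"]["show"])     # False
toggle_fields(config, "gpt-4o")
print(config["system_message"]["show"])  # True
```

Driving both flags from one membership test keeps the two fields from ever disagreeing about which model class is selected.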
1121-1135: Synchronise API-key placeholders / default values with the selected provider.

`update_build_config` alters `display_name` but leaves the default `value` set to `"OPENAI_API_KEY"`. Users switching to Anthropic or Google will still see the OpenAI placeholder, which often causes 401 errors at runtime.

```diff
-    build_config["api_key"]["display_name"] = "Anthropic API Key"
+    build_config["api_key"]["display_name"] = "Anthropic API Key"
+    build_config["api_key"]["value"] = "ANTHROPIC_API_KEY"
     ...
-    build_config["api_key"]["display_name"] = "Google API Key"
+    build_config["api_key"]["display_name"] = "Google API Key"
+    build_config["api_key"]["value"] = "GOOGLE_API_KEY"
```

A matching placeholder greatly improves onboarding and reduces mis-configuration.
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)
1325-1345: Minor: `inputs` default still shows "OpenAI API Key" for non-OpenAI providers

`update_build_config` updates the field's display_name when the provider changes, but the initial static definition hard-codes "OpenAI API Key". Consider moving the generic label (e.g. "Provider API Key") to the class-level constant to avoid a brief mismatch in the UI before the first `provider` change event fires.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (1)
2591-2605: Minor: build-time option list creation looks expensive on every import

`options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES` is evaluated at import time for every node instantiation. Cache once:

```diff
-    DropdownInput(
-        name="model_name",
-        display_name="Model Name",
-        options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,
+    _OPENAI_MODEL_OPTIONS = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES
+    DropdownInput(
+        name="model_name",
+        display_name="Model Name",
+        options=_OPENAI_MODEL_OPTIONS,
```

A micro-optimisation but keeps the template string shorter.
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)
1835-1854: Guard against `temperature=None` being forwarded to the provider

`ChatOpenAI` accepts a float for `temperature`; some provider adapters treat `None` as an invalid value and will raise a validation error. Rather than always passing the key, build the kwargs dynamically so that the field is omitted when you intentionally disable temperature for reasoning models.

```diff
-        return ChatOpenAI(
-            model_name=model_name,
-            temperature=temperature,
-            streaming=stream,
-            openai_api_key=self.api_key,
-        )
+        kwargs = {
+            "model_name": model_name,
+            "streaming": stream,
+            "openai_api_key": self.api_key,
+        }
+        if temperature is not None:
+            kwargs["temperature"] = temperature
+        return ChatOpenAI(**kwargs)
```

This keeps the constructor call future-proof and avoids surprising runtime errors if the upstream library tightens its validation.
1860-1879: Reduce duplication in `update_build_config` with a provider-map

The three `if/elif` blocks that mutate `model_name.options` / `value` and the `api_key.display_name` differ only in the constant lists and label. Consider collapsing them into a single lookup table to make future provider additions one-liner changes:

```python
PROVIDER_CONFIG = {
    "OpenAI": (
        OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,
        OPENAI_CHAT_MODEL_NAMES[0],
        "OpenAI API Key",
    ),
    "Anthropic": (ANTHROPIC_MODELS, ANTHROPIC_MODELS[0], "Anthropic API Key"),
    "Google": (GOOGLE_GENERATIVE_AI_MODELS, GOOGLE_GENERATIVE_AI_MODELS[0], "Google API Key"),
}

if field_name == "provider" and field_value in PROVIDER_CONFIG:
    opts, default, key_label = PROVIDER_CONFIG[field_value]
    build_config["model_name"]["options"] = opts
    build_config["model_name"]["value"] = default
    build_config["api_key"]["display_name"] = key_label
```

This trims ~20 lines of repetitive code and makes the intent clearer.
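The lookup-table refactor can be run stand-alone with placeholder model lists; everything below (constant values, helper name) is illustrative rather than langflow's actual code:

```python
# Illustrative model lists; the real constants live in langflow's model modules.
OPENAI_MODELS = ["gpt-4o-mini", "gpt-4o"]
ANTHROPIC_MODELS = ["claude-3-5-sonnet"]
GOOGLE_MODELS = ["gemini-1.5-pro"]

PROVIDER_CONFIG = {
    "OpenAI": (OPENAI_MODELS, OPENAI_MODELS[0], "OpenAI API Key"),
    "Anthropic": (ANTHROPIC_MODELS, ANTHROPIC_MODELS[0], "Anthropic API Key"),
    "Google": (GOOGLE_MODELS, GOOGLE_MODELS[0], "Google API Key"),
}


def apply_provider(build_config: dict, provider: str) -> dict:
    """A single table lookup replaces three near-identical if/elif branches."""
    opts, default, key_label = PROVIDER_CONFIG[provider]
    build_config["model_name"] = {"options": opts, "value": default}
    build_config["api_key"] = {"display_name": key_label}
    return build_config


cfg = apply_provider({}, "Anthropic")
print(cfg["model_name"]["value"])      # claude-3-5-sonnet
print(cfg["api_key"]["display_name"])  # Anthropic API Key
```

Adding a fourth provider then means adding one entry to `PROVIDER_CONFIG` instead of a new branch.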
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2)
2091-2113: Avoid sending an explicit `temperature=None` to `ChatOpenAI`

`langchain-openai` currently types `temperature: float | None`, but in practice `None` is passed straight through to the OpenAI HTTP API, which rejects it for reasoning models (o1*). Skip the kwarg when it is not applicable:

```diff
-        return ChatOpenAI(
-            model_name=model_name,
-            temperature=temperature,
-            streaming=stream,
-            openai_api_key=self.api_key,
-        )
+        temp_kw = {} if temperature is None else {"temperature": temperature}
+        return ChatOpenAI(
+            model_name=model_name,
+            streaming=stream,
+            openai_api_key=self.api_key,
+            **temp_kw,
+        )
```

This keeps the signature clean and prevents avoidable 400-errors from the OpenAI endpoint.
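The conditional-unpacking idiom in this suggestion works with any callable; a minimal stand-alone check (the `record_call` stub is a stand-in for a constructor, not part of LangChain):

```python
def record_call(**kwargs):
    """Stand-in for a constructor; just reports which kwargs it received."""
    return kwargs


def make(temperature):
    # Unpack an empty dict when temperature is None, so the key is never sent.
    temp_kw = {} if temperature is None else {"temperature": temperature}
    return record_call(model_name="o1-mini", streaming=False, **temp_kw)


print("temperature" in make(None))  # False
print(make(0.7)["temperature"])     # 0.7
```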
2361-2380: Duplicate component code block – consider DRYing via a shared module

The second `LanguageModelComponent` embeds an identical 140-line definition that already exists in the earlier node. Duplicating sizeable code strings inside starter-project JSONs increases bundle size and raises the maintenance burden (future fixes must be applied twice).

If both nodes need the same class unchanged, import it from a single custom component module and reference that instead of inlining two copies.
src/backend/base/langflow/components/models/language_model.py (1)
138-143: Consider making the reasoning model detection more robust.

The hardcoded `"o1"` prefix check works for current OpenAI reasoning models but may be brittle if OpenAI changes their naming convention. Consider using the `OPENAI_REASONING_MODEL_NAMES` list for more robust detection.

```diff
-        elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
+        elif field_name == "model_name" and field_value in OPENAI_REASONING_MODEL_NAMES and self.provider == "OpenAI":
```

```diff
-        elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
+        elif field_name == "model_name" and field_value not in OPENAI_REASONING_MODEL_NAMES and "system_message" in build_config:
```
2323-2486: Exact duplicate of the component code – favour DRY

The second `LanguageModelComponent` block (lines 2323-2486) is an identical copy of the first one. Keeping two embedded copies inside the same starter-project JSON:
- inflates bundle size,
- complicates future fixes (must patch in two places),
- risks the copies diverging.
Store the component once and reference it twice (e.g. by node id) or factor the code into an importable module and import it via the `code` field.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)
995-995: Hide/disable temperature slider for reasoning models
`o1`/reasoning models silently override the UI-selected temperature with `None`. Better UX: toggle the Temperature field's `show` flag to `False` when a reasoning model is chosen (similar to how `system_message` is hidden) so users don't think the knob still applies.

Implementation fits naturally inside the existing `update_build_config` branch that checks `model_name`.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)
1495-1550: Identical 300-line component code duplicated three times

The full `LanguageModelComponent` source is embedded verbatim in three separate nodes. That bloats the starter-project JSON by ~900 lines, complicates future maintenance, and risks silent drift between copies.

Consider:

- Storing the component once (e.g., `langflow/custom_components/language_model_component.py`).
- Referencing it in nodes via the `module_name` metadata instead of embedding raw code.

Keeps starter projects small and avoids triple edits next time you tweak the component.
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)
1519-1525: `system_message` flagged as advanced in template but not in code

The backend `inputs` definition sets `advanced=False`, yet the rendered template still shows `"advanced": true`.
This mismatch causes inconsistent UI behaviour across starter projects. Align the flags so both frontend and backend agree.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `src/frontend/package-lock.json` is excluded by `!**/package-lock.json`
- `uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (59)
- .github/workflows/release.yml (2 hunks)
- pyproject.toml (3 hunks)
- src/backend/base/langflow/base/agents/crewai/crew.py (6 hunks)
- src/backend/base/langflow/base/agents/crewai/tasks.py (1 hunks)
- src/backend/base/langflow/base/data/docling_utils.py (1 hunks)
- src/backend/base/langflow/base/models/model.py (2 hunks)
- src/backend/base/langflow/base/models/openai_constants.py (4 hunks)
- src/backend/base/langflow/components/crewai/crewai.py (2 hunks)
- src/backend/base/langflow/components/crewai/hierarchical_crew.py (2 hunks)
- src/backend/base/langflow/components/crewai/hierarchical_task.py (1 hunks)
- src/backend/base/langflow/components/crewai/sequential_crew.py (2 hunks)
- src/backend/base/langflow/components/crewai/sequential_task.py (2 hunks)
- src/backend/base/langflow/components/crewai/sequential_task_agent.py (2 hunks)
- src/backend/base/langflow/components/data/url.py (3 hunks)
- src/backend/base/langflow/components/docling/export_docling_document.py (3 hunks)
- src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py (2 hunks)
- src/backend/base/langflow/components/models/language_model.py (6 hunks)
- src/backend/base/langflow/components/openai/openai_chat_model.py (5 hunks)
- src/backend/base/langflow/custom/custom_component/component.py (1 hunks)
- src/backend/base/langflow/initial_setup/load.py (0 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Simple Agent.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
- src/backend/base/langflow/services/auth/utils.py (2 hunks)
- src/backend/base/langflow/utils/async_helpers.py (1 hunks)
- src/backend/base/langflow/utils/constants.py (2 hunks)
- src/backend/base/langflow/utils/util.py (1 hunks)
- src/backend/base/pyproject.toml (1 hunks)
- src/backend/tests/unit/api/v1/test_starter_projects.py (1 hunks)
- src/backend/tests/unit/components/agents/test_agent_component.py (2 hunks)
- src/backend/tests/unit/components/models/test_language_model_component.py (2 hunks)
- src/backend/tests/unit/custom/custom_component/test_component.py (2 hunks)
- src/backend/tests/unit/test_async_helpers.py (1 hunks)
- src/frontend/package.json (1 hunks)
- src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (3 hunks)
- src/frontend/src/modals/apiModal/index.tsx (1 hunks)
- src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1 hunks)
- src/frontend/src/style/applies.css (1 hunks)
- src/frontend/tests/core/features/tweaksTest.spec.ts (2 hunks)
- src/frontend/tests/extended/features/curlApiGeneration.spec.ts (1 hunks)
- src/frontend/tests/extended/features/pythonApiGeneration.spec.ts (1 hunks)
- src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (1 hunks)
💤 Files with no reviewable changes (1)
- src/backend/base/langflow/initial_setup/load.py
🧰 Additional context used
📓 Path-based instructions (14)
`src/frontend/{package*.json,tsconfig.json,tailwind.config.*,vite.config.*}`: Fr...
src/frontend/{package*.json,tsconfig.json,tailwind.config.*,vite.config.*}: Frontend configuration files such as 'package.json', 'tsconfig.json', 'tailwind.config.*', and 'vite.config.*' must be present and properly maintained in 'src/frontend/'.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/frontend_development.mdc)
List of files the instruction was applied to:
src/frontend/package.json
`src/frontend/**/*.{ts,tsx,js,jsx,css,scss}`: Use Tailwind CSS for styling all frontend components.
src/frontend/**/*.{ts,tsx,js,jsx,css,scss}: Use Tailwind CSS for styling all frontend components.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/frontend_development.mdc)
List of files the instruction was applied to:
- src/frontend/src/style/applies.css
- src/frontend/src/modals/apiModal/index.tsx
- src/frontend/tests/core/features/tweaksTest.spec.ts
- src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
- src/frontend/tests/extended/features/curlApiGeneration.spec.ts
- src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
- src/frontend/src/modals/apiModal/utils/get-curl-code.tsx
- src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx
`src/backend/base/langflow/components/**/*.py`: Add new backend components to th...
src/backend/base/langflow/components/**/*.py: Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Implement async component methods using async def and await for asynchronous operations
Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
List of files the instruction was applied to:
- src/backend/base/langflow/components/crewai/hierarchical_task.py
- src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py
- src/backend/base/langflow/components/data/url.py
- src/backend/base/langflow/components/crewai/crewai.py
- src/backend/base/langflow/components/crewai/sequential_task.py
- src/backend/base/langflow/components/crewai/sequential_crew.py
- src/backend/base/langflow/components/crewai/sequential_task_agent.py
- src/backend/base/langflow/components/crewai/hierarchical_crew.py
- src/backend/base/langflow/components/docling/export_docling_document.py
- src/backend/base/langflow/components/openai/openai_chat_model.py
- src/backend/base/langflow/components/models/language_model.py
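The async component guidelines above (async def methods, asyncio.create_task for background work, asyncio.Queue with timeouts, cleanup on cancellation) can be sketched roughly as follows. The class and method names are illustrative stand-ins, not Langflow's actual component API:

```python
import asyncio


class QueueBackedComponent:
    """Illustrative component following the async guidelines above.

    Background work runs in an asyncio.Task, results flow through an
    asyncio.Queue, and consumption is bounded by an explicit timeout.
    """

    def __init__(self) -> None:
        self._queue: asyncio.Queue[str] = asyncio.Queue()
        self._task: asyncio.Task | None = None

    async def start(self) -> None:
        # Use asyncio.create_task for background work.
        self._task = asyncio.create_task(self._produce())

    async def _produce(self) -> None:
        for item in ("first", "second"):
            await self._queue.put(item)

    async def next_result(self, timeout: float = 1.0) -> str:
        # Queue gets are wrapped in wait_for so a stalled producer
        # cannot block the component forever.
        return await asyncio.wait_for(self._queue.get(), timeout=timeout)

    async def stop(self) -> None:
        # Ensure proper cleanup on cancellation.
        if self._task is not None and not self._task.done():
            self._task.cancel()
            try:
                await self._task
            except asyncio.CancelledError:
                pass


async def demo() -> list[str]:
    component = QueueBackedComponent()
    await component.start()
    results = [await component.next_result(), await component.next_result()]
    await component.stop()
    return results


print(asyncio.run(demo()))  # → ['first', 'second']
```

The cancel-then-await pattern in `stop()` is what "proper cleanup on cancellation" means in practice: the task is given a chance to unwind before the component is discarded.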
`src/backend/**/*.py`: Run make format_backend to format Python code early and often; run make lint to check for linting issues in backend Python code
src/backend/**/*.py: Run make format_backend to format Python code early and often
Run make lint to check for linting issues in backend Python code
📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
List of files the instruction was applied to:
- src/backend/base/langflow/components/crewai/hierarchical_task.py
- src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py
- src/backend/tests/unit/api/v1/test_starter_projects.py
- src/backend/base/langflow/base/data/docling_utils.py
- src/backend/base/langflow/components/data/url.py
- src/backend/base/langflow/base/models/model.py
- src/backend/base/langflow/base/agents/crewai/tasks.py
- src/backend/base/langflow/utils/util.py
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/base/langflow/components/crewai/crewai.py
- src/backend/base/langflow/utils/async_helpers.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/base/langflow/base/models/openai_constants.py
- src/backend/base/langflow/components/crewai/sequential_task.py
- src/backend/base/langflow/components/crewai/sequential_crew.py
- src/backend/base/langflow/components/crewai/sequential_task_agent.py
- src/backend/base/langflow/utils/constants.py
- src/backend/base/langflow/components/crewai/hierarchical_crew.py
- src/backend/tests/unit/test_async_helpers.py
- src/backend/base/langflow/components/docling/export_docling_document.py
- src/backend/base/langflow/services/auth/utils.py
- src/backend/base/langflow/custom/custom_component/component.py
- src/backend/tests/unit/custom/custom_component/test_component.py
- src/backend/base/langflow/components/openai/openai_chat_model.py
- src/backend/base/langflow/components/models/language_model.py
- src/backend/base/langflow/base/agents/crewai/crew.py
`src/backend/**/components/**/*.py`: In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
src/backend/**/components/**/*.py: In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
📄 Source: CodeRabbit Inference Engine (.cursor/rules/icons.mdc)
List of files the instruction was applied to:
- src/backend/base/langflow/components/crewai/hierarchical_task.py
- src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py
- src/backend/base/langflow/components/data/url.py
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/base/langflow/components/crewai/crewai.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/base/langflow/components/crewai/sequential_task.py
- src/backend/base/langflow/components/crewai/sequential_crew.py
- src/backend/base/langflow/components/crewai/sequential_task_agent.py
- src/backend/base/langflow/components/crewai/hierarchical_crew.py
- src/backend/base/langflow/components/docling/export_docling_document.py
- src/backend/base/langflow/components/openai/openai_chat_model.py
- src/backend/base/langflow/components/models/language_model.py
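As a minimal illustration of the icon rule, assuming "OpenAI" is a name present in the frontend icon mapping (the component class itself is hypothetical):

```python
class MyOpenAIComponent:
    """Hypothetical component illustrating the case-sensitive icon rule."""

    display_name = "My OpenAI Component"
    # Must match the frontend icon mapping exactly, case-sensitive:
    # "OpenAI" would resolve, "openai" would not.
    icon = "OpenAI"


print(MyOpenAIComponent.icon)  # → OpenAI
```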
`src/frontend/**/*.{ts,tsx}`: Use React 18 with TypeScript for all UI components and frontend logic.
src/frontend/**/*.{ts,tsx}: Use React 18 with TypeScript for all UI components and frontend logic.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/frontend_development.mdc)
List of files the instruction was applied to:
- src/frontend/src/modals/apiModal/index.tsx
- src/frontend/tests/core/features/tweaksTest.spec.ts
- src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
- src/frontend/tests/extended/features/curlApiGeneration.spec.ts
- src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
- src/frontend/src/modals/apiModal/utils/get-curl-code.tsx
- src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx
`src/backend/tests/unit/**/*.py`: Use in-memory SQLite for database tests Test c...
src/backend/tests/unit/**/*.py: Use in-memory SQLite for database tests
Test component integration within flows using create_flow, build_flow, and get_build_events utilities
Use pytest.mark.api_key_required and pytest.mark.no_blockbuster for tests involving external APIs
📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
List of files the instruction was applied to:
- src/backend/tests/unit/api/v1/test_starter_projects.py
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/tests/unit/test_async_helpers.py
- src/backend/tests/unit/custom/custom_component/test_component.py
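The in-memory SQLite guideline can be sketched with the stdlib sqlite3 driver. Langflow's real tests go through its async engine and the create_flow/build_flow fixtures, so the table and helper names here are illustrative only:

```python
import sqlite3


def fetch_flow_names(connection: sqlite3.Connection) -> list[str]:
    # Hypothetical query helper under test.
    return [row[0] for row in connection.execute("SELECT name FROM flow ORDER BY name")]


def test_fetch_flow_names() -> None:
    """fetch_flow_names returns flow names alphabetically from an in-memory DB."""
    # ":memory:" gives each test a fresh, isolated database with no file on disk,
    # which is what makes in-memory SQLite attractive for unit tests.
    connection = sqlite3.connect(":memory:")
    try:
        connection.execute("CREATE TABLE flow (name TEXT)")
        connection.executemany("INSERT INTO flow VALUES (?)", [("Zeta",), ("Alpha",)])
        assert fetch_flow_names(connection) == ["Alpha", "Zeta"]
    finally:
        # Explicit cleanup mirrors the try/finally guidance for fixtures.
        connection.close()


test_fetch_flow_names()
print("ok")  # → ok
```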
`src/backend/tests/**/*.py`: Unit tests for backend code should be located in 's...
src/backend/tests/**/*.py: Unit tests for backend code should be located in 'src/backend/tests/' and organized by component subdirectory for component tests.
Test files should use the same filename as the component with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests, as defined in 'src/backend/tests/conftest.py'.
Skip client creation in tests by marking them with '@pytest.mark.noclient' when the 'client' fixture is not needed.
Inherit from the appropriate ComponentTestBase class ('ComponentTestBase', 'ComponentTestBaseWithClient', or 'ComponentTestBaseWithoutClient') and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping' when adding a new component test.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)
List of files the instruction was applied to:
- src/backend/tests/unit/api/v1/test_starter_projects.py
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/tests/unit/test_async_helpers.py
- src/backend/tests/unit/custom/custom_component/test_component.py
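The ComponentTestBase pattern above has roughly this shape. Langflow's real base classes live in the test suite and use pytest fixtures; this stand-in is written without pytest to stay self-contained, and the Greeter component is hypothetical:

```python
class ComponentTestBaseWithoutClient:
    """Minimal stand-in: subclasses supply component_class and default_kwargs."""

    def component_class(self):
        raise NotImplementedError

    def default_kwargs(self) -> dict:
        return {}

    def file_names_mapping(self) -> list:
        # Version-compatibility mapping; empty for hypothetical components.
        return []

    def run_instantiation_test(self):
        # The shared base exercises construction for every subclass.
        component = self.component_class()(**self.default_kwargs())
        assert component is not None
        return component


class Greeter:
    """Hypothetical component used only to demonstrate the test shape."""

    def __init__(self, greeting: str = "hello") -> None:
        self.greeting = greeting


class TestGreeter(ComponentTestBaseWithoutClient):
    def component_class(self):
        return Greeter

    def default_kwargs(self) -> dict:
        return {"greeting": "hi"}


print(TestGreeter().run_instantiation_test().greeting)  # → hi
```

In the real suite, component_class, default_kwargs, and file_names_mapping are pytest fixtures rather than methods, and the base class chooses whether an API client is available.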
`{src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/...
{src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py}: Each test should have a clear docstring explaining its purpose.
Complex test setups should be commented, and mock usage should be documented within the test code.
Expected behaviors should be explicitly stated in test docstrings or comments.
Create comprehensive unit tests for all new components.
Test both sync and async code paths in components.
Mock external dependencies appropriately in tests.
Test error handling and edge cases in components.
Validate input/output behavior in tests.
Test component initialization and configuration.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)
List of files the instruction was applied to:
- src/backend/tests/unit/api/v1/test_starter_projects.py
- src/frontend/tests/core/features/tweaksTest.spec.ts
- src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
- src/frontend/tests/extended/features/curlApiGeneration.spec.ts
- src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/tests/unit/test_async_helpers.py
- src/backend/tests/unit/custom/custom_component/test_component.py
`{src/backend/tests/**/*.py,tests/**/*.py}`: Use '@pytest.mark.asyncio' for asyn...
{src/backend/tests/**/*.py,tests/**/*.py}: Use '@pytest.mark.asyncio' for async test functions.
Test queue operations in async tests using 'asyncio.Queue' and non-blocking put/get methods.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests.
Be aware of ContextVar propagation in async tests and test both direct event loop execution and 'asyncio.to_thread' scenarios.
Each test should ensure proper resource cleanup, especially in async fixtures using 'try/finally' and cleanup methods.
Test that operations respect timeout constraints and assert elapsed time is within tolerance.
Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Use predefined JSON flows and utility functions for flow testing (e.g., 'create_flow', 'build_flow', 'get_build_events', 'consume_and_assert_stream').
Test components that need external APIs with proper pytest markers such as '@pytest.mark.api_key_required' and '@pytest.mark.no_blockbuster'.
Use 'MockLanguageModel' for testing language model components without external API calls.
Use 'anyio' and 'aiofiles' for async file operations in tests.
Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Test component configuration updates by asserting changes in build config dictionaries.
Test real-time event streaming endpoints by consuming NDJSON event streams and validating event structure.
Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Test webhook endpoints by posting payloads and asserting correct processing and status codes.
Test error handling by monkeypatching internal functions to raise exceptions and asserting correct error responses.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)
List of files the instruction was applied to:
- src/backend/tests/unit/api/v1/test_starter_projects.py
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/tests/unit/test_async_helpers.py
- src/backend/tests/unit/custom/custom_component/test_component.py
`src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx}`: Frontend test files should be ...
src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx}: Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
Frontend tests should cover both sync and async code paths, including error handling and edge cases.
Frontend tests should mock external dependencies and APIs appropriately.
Frontend tests should validate input/output behavior and component state changes.
Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
📄 Source: CodeRabbit Inference Engine (.cursor/rules/testing.mdc)
List of files the instruction was applied to:
- src/frontend/tests/core/features/tweaksTest.spec.ts
- src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts
- src/frontend/tests/extended/features/curlApiGeneration.spec.ts
- src/frontend/tests/extended/features/pythonApiGeneration.spec.ts
`{uv.lock,pyproject.toml}`: Use uv (>=0.4) as the Python package manager for dependency management
{uv.lock,pyproject.toml}: Use uv (>=0.4) as the Python package manager for dependency management
📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
List of files the instruction was applied to:
pyproject.toml
`src/backend/tests/unit/components/**/*.py`: Mirror the component directory stru...
src/backend/tests/unit/components/**/*.py: Mirror the component directory structure in unit tests under src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping in tests for backward compatibility version testing
Create comprehensive unit tests for all new components
Use the client fixture from conftest.py for FastAPI API endpoint tests
Test authenticated FastAPI endpoints using logged_in_headers in tests
📄 Source: CodeRabbit Inference Engine (.cursor/rules/backend_development.mdc)
List of files the instruction was applied to:
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/tests/unit/components/agents/test_agent_component.py
`src/backend/**/*component*.py`: In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
src/backend/**/*component*.py: In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
📄 Source: CodeRabbit Inference Engine (.cursor/rules/icons.mdc)
List of files the instruction was applied to:
- src/backend/tests/unit/components/models/test_language_model_component.py
- src/backend/tests/unit/components/agents/test_agent_component.py
- src/backend/base/langflow/custom/custom_component/component.py
- src/backend/tests/unit/custom/custom_component/test_component.py
🧠 Learnings (50)
src/frontend/package.json (5)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/{package*.json,tsconfig.json,tailwind.config.*,vite.config.*} : Frontend configuration files such as 'package.json', 'tsconfig.json', 'tailwind.config.*', and 'vite.config.*' must be present and properly maintained in 'src/frontend/'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*FlowGraph.tsx : Use React Flow for flow graph visualization components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/frontend/src/style/applies.css (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/**/*.{ts,tsx,js,jsx,css,scss} : Use Tailwind CSS for styling all frontend components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*.{ts,tsx} : All components must be styled using Tailwind CSS utility classes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: All UI components must be styled using Tailwind CSS utility classes, with support for different variants and sizes implemented via conditional className logic.
src/backend/base/langflow/components/crewai/hierarchical_task.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/frontend/src/modals/apiModal/index.tsx (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/{components,hooks}/**/*.{ts,tsx} : Implement dark mode support in components and hooks where needed.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*.{ts,tsx} : All components must be styled using Tailwind CSS utility classes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/**/*.{ts,tsx,js,jsx,css,scss} : Use Tailwind CSS for styling all frontend components.
src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/tests/unit/api/v1/test_starter_projects.py (11)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test webhook endpoints by posting payloads and asserting correct processing and status codes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Use the 'client' fixture (an async httpx.AsyncClient) for API tests, as defined in 'src/backend/tests/conftest.py'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test error handling by monkeypatching internal functions to raise exceptions and asserting correct error responses.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Expected behaviors should be explicitly stated in test docstrings or comments.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use the client fixture from conftest.py for FastAPI API endpoint tests
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Use pytest.mark.api_key_required and pytest.mark.no_blockbuster for tests involving external APIs
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Test authenticated FastAPI endpoints using logged_in_headers in tests
src/frontend/tests/core/features/tweaksTest.spec.ts (10)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/**/__tests__/**/*.{test,spec}.{ts,tsx} : Integration tests must be written for page-level components and flows.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should cover both sync and async code paths, including error handling and edge cases.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Validate input/output behavior in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Mock external dependencies appropriately in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/**/__tests__/**/*.test.{ts,tsx} : All frontend components must have associated tests using React Testing Library.
src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (10)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should cover both sync and async code paths, including error handling and edge cases.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Validate input/output behavior in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/**/__tests__/**/*.{test,spec}.{ts,tsx} : Integration tests must be written for page-level components and flows.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test both sync and async code paths in components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Expected behaviors should be explicitly stated in test docstrings or comments.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
src/backend/base/langflow/base/models/model.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
pyproject.toml (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
src/frontend/tests/extended/features/curlApiGeneration.spec.ts (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
src/backend/base/langflow/base/agents/crewai/tasks.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/base/pyproject.toml (4)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
src/frontend/tests/extended/features/pythonApiGeneration.spec.ts (10)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Validate input/output behavior in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Expected behaviors should be explicitly stated in test docstrings or comments.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Mock external dependencies appropriately in tests.
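A sketch of the mocking guideline above using stdlib `unittest.mock.AsyncMock`; `WeatherComponent` and its `fetch` call are hypothetical, standing in for any component that talks to an external API:

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical component under test: it delegates to an external API client.
class WeatherComponent:
    def __init__(self, client):
        self.client = client

    async def build(self) -> str:
        data = await self.client.fetch("/forecast")
        return data["summary"]

async def test_weather_component_mocks_api() -> str:
    # Mock the external dependency so the test never hits the network.
    client = AsyncMock()
    client.fetch.return_value = {"summary": "sunny"}
    component = WeatherComponent(client)
    result = await component.build()
    client.fetch.assert_awaited_once_with("/forecast")
    return result

result = asyncio.run(test_weather_component_mocks_api())
```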
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should validate input/output behavior and component state changes.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test both sync and async code paths in components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should be well-documented with clear descriptions of test purpose and expected behavior.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Create comprehensive unit tests for all new components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend tests should mock external dependencies and APIs appropriately.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/frontend/**/*.@(test|spec).{ts,tsx,js,jsx} : Frontend test files should be named with '.test.' or '.spec.' before the extension (e.g., 'Component.test.tsx', 'Component.spec.js').
src/backend/tests/unit/components/models/test_language_model_component.py (8)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Provide file_names_mapping in tests for backward compatibility version testing
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
src/backend/base/langflow/components/crewai/crewai.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/backend/base/langflow/utils/async_helpers.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Be aware of ContextVar propagation in async tests and test both direct event loop execution and 'asyncio.to_thread' scenarios.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'anyio' and 'aiofiles' for async file operations in tests.
src/backend/tests/unit/components/agents/test_agent_component.py (9)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Create comprehensive unit tests for all new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Provide file_names_mapping in tests for backward compatibility version testing
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Inherit from the appropriate ComponentTestBase class ('ComponentTestBase', 'ComponentTestBaseWithClient', or 'ComponentTestBaseWithoutClient') and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping' when adding a new component test.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Mirror the component directory structure in unit tests under src/backend/tests/unit/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Test files should use the same filename as the component with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
src/backend/base/langflow/base/models/openai_constants.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Use clear, recognizable, and consistent icon names for both backend and frontend (e.g., 'AstraDB', 'Postgres', 'OpenAI').
src/backend/base/langflow/components/crewai/sequential_task.py (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
src/backend/base/langflow/components/crewai/sequential_crew.py (6)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
src/backend/base/langflow/components/crewai/sequential_task_agent.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.Queue for non-blocking queue operations in async components and handle timeouts appropriately
src/backend/base/langflow/components/crewai/hierarchical_crew.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/backend/**/components/**/*.py : In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/backend/**/*component*.py : In your Python component class, set the `icon` attribute to a string matching the frontend icon mapping exactly (case-sensitive).
src/backend/tests/unit/test_async_helpers.py (14)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Be aware of ContextVar propagation in async tests and test both direct event loop execution and 'asyncio.to_thread' scenarios.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Create comprehensive unit tests for all new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's REST API endpoints using the async 'client' fixture and assert correct status codes and response structure.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use '@pytest.mark.asyncio' for async test functions.
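The marker convention above, sketched with a trivial async helper (`delayed_echo` is illustrative; the marker assumes the `pytest-asyncio` plugin is configured in the test suite):

```python
import asyncio
import pytest

@pytest.mark.asyncio
async def test_delayed_echo():
    """The async helper should return its input unchanged after awaiting."""
    async def delayed_echo(value: str) -> str:
        await asyncio.sleep(0)
        return value

    assert await delayed_echo("ok") == "ok"

# Outside a pytest run, the coroutine can still be driven directly:
asyncio.run(test_delayed_echo())
```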
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test both sync and async code paths in components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'anyio' and 'aiofiles' for async file operations in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test queue operations in async tests using 'asyncio.Queue' and non-blocking put/get methods.
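A stdlib-only sketch of the queue guideline above — an `asyncio.Queue` drained with per-item timeouts so a stalled producer cannot hang the test (function names here are illustrative):

```python
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for i in range(3):
        await queue.put(i)  # non-blocking unless the queue is bounded and full

async def consume_with_timeout(queue: asyncio.Queue, timeout: float) -> list:
    items = []
    while True:
        try:
            # Bound each get so an empty queue ends the loop instead of hanging.
            item = await asyncio.wait_for(queue.get(), timeout=timeout)
        except asyncio.TimeoutError:
            break
        items.append(item)
    return items

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue()
    await producer(queue)
    return await consume_with_timeout(queue, timeout=0.05)

items = asyncio.run(main())
```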
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Test component integration within flows using create_flow, build_flow, and get_build_events utilities
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Use asyncio.create_task for background work in async components and ensure proper cleanup on cancellation
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Create comprehensive unit tests for all new components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test error handling by monkeypatching internal functions to raise exceptions and asserting correct error responses.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Context variables may not propagate correctly in asyncio.to_thread; test both patterns
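The `ContextVar`-and-`asyncio.to_thread` gotcha above can be demonstrated with the stdlib alone: `to_thread` copies the current context into the worker thread, so reads see the caller's value, but sets made inside the thread do not propagate back:

```python
import asyncio
import contextvars

current_user: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_user", default="anonymous"
)

def read_user() -> str:
    return current_user.get()

async def main() -> tuple[str, str]:
    current_user.set("alice")
    direct = read_user()  # runs on the event loop thread: sees "alice"
    # asyncio.to_thread copies the current context, so the worker thread
    # also reads "alice" -- but a .set() inside the thread would be invisible here.
    threaded = await asyncio.to_thread(read_user)
    return direct, threaded

direct, threaded = asyncio.run(main())
```

Testing both the direct and `to_thread` paths, as the learning suggests, catches code that accidentally relies on thread-local state instead of context propagation.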
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test that operations respect timeout constraints and assert elapsed time is within tolerance.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Each test should ensure proper resource cleanup, especially in async fixtures using 'try/finally' and cleanup methods.
src/backend/base/langflow/components/docling/export_docling_document.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/base/langflow/services/auth/utils.py (2)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Use pytest.mark.api_key_required and pytest.mark.no_blockbuster for tests involving external APIs
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Test authenticated FastAPI endpoints using logged_in_headers in tests
src/backend/base/langflow/custom/custom_component/component.py (1)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
src/backend/tests/unit/custom/custom_component/test_component.py (11)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Create comprehensive unit tests for all new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Inherit from the appropriate ComponentTestBase class ('ComponentTestBase', 'ComponentTestBaseWithClient', or 'ComponentTestBaseWithoutClient') and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping' when adding a new component test.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Mirror the component directory structure in unit tests under src/backend/tests/unit/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/components/**/*.py : Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test components that need external APIs with proper pytest markers such as '@pytest.mark.api_key_required' and '@pytest.mark.no_blockbuster'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Test component initialization and configuration.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test component configuration updates by asserting changes in build config dictionaries.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to src/backend/tests/**/*.py : Skip client creation in tests by marking them with '@pytest.mark.noclient' when the 'client' fixture is not needed.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Complex test setups should be commented, and mock usage should be documented within the test code.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,src/frontend/**/*.test.{ts,tsx,js,jsx},src/frontend/**/*.spec.{ts,tsx,js,jsx},tests/**/*.py} : Mock external dependencies appropriately in tests.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/tests/unit/**/*.py : Test component integration within flows using create_flow, build_flow, and get_build_events utilities
src/backend/base/langflow/components/openai/openai_chat_model.py (4)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
src/backend/base/langflow/components/models/language_model.py (5)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (6)
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the module_name parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>
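The terminology rule lends itself to a simple lint pass. A minimal sketch, assuming a small hand-picked term table (not an actual Langflow docs tool):

```python
# Hedged sketch: flag lowercase uses of terms the docs style rule says
# to capitalize or uppercase ("Langflow", "API", "JSON").
import re

REQUIRED = {"langflow": "Langflow", "api": "API", "json": "JSON"}

def find_violations(text: str):
    violations = []
    for word in re.findall(r"[A-Za-z]+", text):
        fixed = REQUIRED.get(word.lower())
        if fixed and word != fixed:
            violations.append((word, fixed))
    return violations

assert find_violations("Use the Langflow API with JSON.") == []
assert find_violations("the langflow api") == [("langflow", "Langflow"), ("api", "API")]
print("terminology check ok")
```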
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (7)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (5)
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
</retrieved_learning>
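The MockLanguageModel idea is a standard test double: canned responses so component tests never hit an external API. A hedged sketch (the class below is a simplified stand-in, not Langflow's actual helper):

```python
# Hedged sketch of the MockLanguageModel pattern: the mock returns
# pre-seeded responses in order, so tests stay deterministic and offline.
class MockLanguageModel:  # simplified stand-in for Langflow's test helper
    def __init__(self, responses):
        self._responses = iter(responses)

    def invoke(self, prompt: str) -> str:
        # Each call consumes the next canned response.
        return next(self._responses)

model = MockLanguageModel(["mocked answer"])
assert model.invoke("What is Langflow?") == "mocked answer"
print("mock model ok")
```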
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (9)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test backward compatibility across Langflow versions by mapping component files to supported versions using 'VersionComponentMapping'.
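The version-mapping learning above can be pictured as a lookup from component file to supported Langflow versions. A hedged sketch; the constant name mirrors 'VersionComponentMapping' but the structure and entries are illustrative:

```python
# Hedged sketch: pair each component file with the Langflow versions it
# supports, then let backward-compatibility tests query the mapping.
VERSION_COMPONENT_MAPPING = {
    "prompt.py": ["1.3.0", "1.4.0", "1.5.0"],
    "language_model.py": ["1.4.0", "1.5.0"],
}

def supported_in(component_file: str, version: str) -> bool:
    return version in VERSION_COMPONENT_MAPPING.get(component_file, [])

assert supported_in("prompt.py", "1.5.0")
assert not supported_in("language_model.py", "1.3.0")
print("version mapping ok")
```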
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (2)
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (7)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (4)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Test Langflow's 'Message' objects and chat functionality by asserting correct properties and structure.
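The Message-assertion learning describes property and structure checks on chat messages. A hedged sketch; the dataclass is a stand-in for Langflow's Message object, and its fields are assumptions:

```python
# Hedged sketch: assert correct properties and structure on a chat message,
# the kind of check the testing rule describes.
from dataclasses import dataclass

@dataclass
class Message:  # illustrative stand-in for Langflow's Message
    text: str
    sender: str
    session_id: str

msg = Message(text="hello", sender="User", session_id="abc123")
assert msg.text == "hello"
assert msg.sender == "User"
assert msg.session_id == "abc123"
print("message structure ok")
```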
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (13)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/**/*.{ts,tsx} : Use React 18 with TypeScript for all UI components and frontend logic.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/{components,hooks}/**/*.{ts,tsx} : Implement dark mode support in components and hooks where needed.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: Export custom icon components in React using React.forwardRef to ensure proper ref forwarding and compatibility with parent components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: Custom React Flow node types should be implemented as memoized components, using Handle components for connection points and supporting optional icons and labels.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/components/**/*FlowGraph.tsx : Use React Flow for flow graph visualization components.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-30T14:40:29.510Z
Learning: Applies to src/frontend/src/icons/**/*.{ts,tsx,js,jsx} : Use Lucide React for icons in frontend components.
Learnt from: dolfim-ibm
PR: langflow-ai/langflow#8394
File: src/frontend/src/icons/Docling/index.tsx:4-6
Timestamp: 2025-06-16T11:14:04.200Z
Learning: The Langflow codebase consistently uses `React.PropsWithChildren<{}>` as the prop type for all icon components using forwardRef, rather than `React.SVGProps<SVGSVGElement>`. This is an established pattern across hundreds of icon files in src/frontend/src/icons/.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: Custom SVG icon components in React should always support both light and dark mode by accepting an 'isdark' prop and adjusting colors accordingly.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/frontend/src/icons/*/index.tsx : Create an `index.tsx` in your icon directory that exports your icon using `forwardRef` and passes the `isdark` prop.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/frontend/src/icons/*/*.jsx : Always support both light and dark mode for custom icons by using the `isdark` prop in your SVG component.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-30T14:40:50.846Z
Learning: Applies to src/frontend/src/icons/*/* : Create a new directory for your icon in `src/frontend/src/icons/YourIconName/` and add your SVG as a React component (e.g., `YourIconName.jsx`) that uses the `isdark` prop to support both light and dark mode.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: Error handling for API calls in React should be abstracted into custom hooks (e.g., useApi), which manage loading and error state and expose an execute function for invoking the API.
src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (5)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (5)
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (7)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (5)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (7)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (4)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (6)
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (7)
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (4)
<retrieved_learning>
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
</retrieved_learning>
<retrieved_learning>
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
</retrieved_learning>
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (8)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-30T14:40:02.682Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use consistent terminology: always capitalize 'Langflow', 'Component', and 'Flow' when referring to Langflow concepts; always uppercase 'API' and 'JSON'.
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-06-30T14:41:58.849Z
Learning: Applies to {src/backend/tests/**/*.py,tests/**/*.py} : Use 'MockLanguageModel' for testing language model components without external API calls.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).
src/backend/base/langflow/base/agents/crewai/crew.py (3)
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Implement async component methods using async def and await for asynchronous operations
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/
src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (3)
Learnt from: ogabrielluiz
PR: langflow-ai/langflow#0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Applies to src/backend/base/langflow/components/**/__init__.py : Update __init__.py with alphabetical imports when adding new components
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-06-30T14:39:17.464Z
Learning: Starter project files are auto-formatted after langflow run; these changes can be committed or ignored
🧬 Code Graph Analysis (7)
src/backend/base/langflow/base/models/model.py (1)
src/backend/base/langflow/graph/vertex/vertex_types.py (1)
stream(372-454)
src/backend/base/langflow/components/crewai/crewai.py (1)
src/backend/base/langflow/base/agents/crewai/crew.py (1)
build_output(221-231)
src/backend/base/langflow/components/crewai/sequential_task.py (1)
src/backend/base/langflow/base/agents/crewai/tasks.py (1)
SequentialTask(7-8)
src/backend/base/langflow/services/auth/utils.py (2)
src/backend/base/langflow/services/database/models/user/model.py (1)
UserRead (62-72)
src/backend/base/langflow/services/database/models/api_key/crud.py (1)
check_key(52-61)
src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1)
src/frontend/src/customization/utils/custom-get-host-protocol.ts (1)
customGetHostProtocol(1-6)
src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (6)
src/frontend/src/utils/utils.ts (1)
getOS (888-904)
src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx (1)
getNewPythonApiCode (3-57)
src/frontend/src/modals/apiModal/utils/get-js-api-code.tsx (1)
getNewJsApiCode (13-58)
src/frontend/src/modals/apiModal/utils/get-curl-code.tsx (1)
getNewCurlCode (41-117)
src/frontend/src/components/ui/button.tsx (1)
Button (133-133)
src/frontend/src/components/ui/tabs-button.tsx (3)
Tabs (54-54)
TabsList (54-54)
TabsTrigger (54-54)
src/backend/base/langflow/base/agents/crewai/crew.py (3)
src/backend/base/langflow/schema/data.py (1)
Data (23-277)
src/backend/base/langflow/components/crewai/sequential_crew.py (3)
get_tasks_and_agents (23-29)
build_crew (31-52)
agents (19-21)
src/backend/base/langflow/components/crewai/hierarchical_crew.py (1)
build_crew(22-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (60)
- GitHub Check: Optimize new Python code in this PR
- GitHub Check: autofix
- GitHub Check: Ruff Style Check (3.13)
- GitHub Check: Update Starter Projects
- GitHub Check: Run Ruff Check and Format
- GitHub Check: Call Docker Build Workflow for Langflow Base / build
🔇 Additional comments (82)
src/frontend/src/modals/apiModal/index.tsx (1)
138-138: LGTM! Good UX improvement. The addition of `select-none` prevents unwanted text selection when users interact with the tweaks button, improving the overall user experience.
src/frontend/src/style/applies.css (1)
908-908: LGTM! CSS simplification aligns with tab UI updates. Removing the negative top margin and bottom padding simplifies the styling while maintaining the essential layout properties. This change appears coordinated with the broader API modal tab refactoring.
src/frontend/tests/extended/features/pythonApiGeneration.spec.ts (1)
14-14: LGTM! Improved test selector reliability. Switching from role-based to test ID selectors makes the test more stable and consistent with the updated UI. Test IDs are less fragile than role-based selectors and improve test maintainability.
src/frontend/tests/extended/features/curlApiGeneration.spec.ts (1)
src/frontend/tests/extended/features/curlApiGeneration.spec.ts (1)
14-14: LGTM! Consistent test selector improvement. The change to use a test ID selector aligns with the standardization effort across test files and improves test reliability. Good consistency with the Python tab test changes.
src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (1)
src/frontend/tests/extended/regression/generalBugs-shard-3.spec.ts (1)
102-102: LGTM! Completes consistent test selector standardization. This change completes the coordinated effort to standardize tab selection across all test files using test IDs instead of role-based selectors, improving overall test suite reliability.
src/frontend/tests/core/features/tweaksTest.spec.ts (1)
src/frontend/tests/core/features/tweaksTest.spec.ts (1)
15-15: LGTM! Improved test stability with test ID selectors. The change from role-based selectors to test ID selectors is a good practice that makes tests more resilient to UI changes.
Also applies to: 38-38
src/frontend/src/modals/apiModal/codeTabs/code-tabs.tsx (1)
1-222: Well-implemented platform-specific cURL generation with improved UI! The refactoring successfully introduces platform-specific cURL command generation with an intuitive UI. The string-based tab selection and test ID attributes improve maintainability and test stability.
src/backend/base/pyproject.toml (1)
src/backend/base/pyproject.toml (1)
3-3: Version bump looks good. The version update from 0.4.3 to 0.5.0 follows semantic versioning and aligns with the coordinated release across the project components.
src/frontend/package.json (1)
3-3: Version synchronization looks good. The version update from 1.4.3 to 1.5.0 properly synchronizes with the backend version updates and follows semantic versioning conventions.
src/backend/tests/unit/api/v1/test_starter_projects.py (1)
9-9: Enhanced test diagnostics - good improvement. Adding `response.text` as a failure message provides valuable debugging information when the assertion fails, making it easier to understand what went wrong with the API call.
.github/workflows/release.yml (1)
54-54: Proper secrets inheritance configuration. Adding `secrets: inherit` enables the CI job to access repository secrets, which is essential for secure testing and deployment operations. This follows GitHub Actions best practices.
src/backend/base/langflow/base/data/docling_utils.py (1)
29-33: Excellent error message enhancement. The improved error message provides clear, actionable guidance to users by explaining the likely cause (input not being a DoclingDocument) and suggesting a solution (using the Docling component). This significantly improves the debugging experience.
src/backend/base/langflow/utils/constants.py (2)
12-12: LGTM - Adding gpt-4o-mini to chat models. The addition of "gpt-4o-mini" to the chat models list is correct and follows the established pattern.
21-30: REASONING_OPENAI_MODELS list validated. All four models (`o3`, `o3-pro`, `o4-mini`, and `o4-mini-high`) are official OpenAI reasoning models released in 2025 and available via the Chat Completions and Responses APIs. No changes to the constant are required.
22-42: Well-designed solution for event loop conflicts. The updated implementation properly handles the case where an event loop is already running by creating a new thread with its own event loop. This prevents the `RuntimeError` that would occur when calling `run_until_complete` on an active loop.

The approach is sound:
- Proper exception handling for detecting running loops
- Clean event loop lifecycle management with try/finally
- Appropriate use of ThreadPoolExecutor for thread management
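The pattern described above can be sketched as follows. This is a minimal, hypothetical illustration of the technique (detect a running loop, then run the coroutine on a fresh loop in a worker thread), not the actual Langflow implementation:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor


def run_until_complete(coro):
    """Run a coroutine to completion, even when called from inside a running loop."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running in this thread: safe to use asyncio.run directly.
        return asyncio.run(coro)

    # A loop is already running: execute the coroutine on a fresh event loop
    # in a worker thread to avoid "this event loop is already running" errors.
    def _run_in_new_loop():
        loop = asyncio.new_event_loop()
        try:
            return loop.run_until_complete(coro)
        finally:
            loop.close()

    with ThreadPoolExecutor(max_workers=1) as executor:
        return executor.submit(_run_in_new_loop).result()
```

Note that the thread path blocks the calling loop's thread until the worker finishes, which is acceptable for a sync bridge but should not be used on a hot async path.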
src/backend/base/langflow/components/docling/export_docling_document.py (3)
1-1: LGTM - Adding required import for type hints. The `Any` import is needed for the type annotation in the `update_build_config` method.
32-32: Good UI improvement with real-time refresh. Adding `real_time_refresh=True` enables immediate UI updates when the export format changes, improving user experience.
72-86: Well-implemented dynamic UI configuration. The `update_build_config` method properly implements dynamic UI behavior:
- Markdown format shows relevant markdown-specific fields
- HTML format shows only image_mode (no markdown placeholders)
- Plaintext/DocTags hide all format-specific options
This follows good UI principles by showing only relevant fields based on context.
src/backend/tests/unit/test_async_helpers.py (1)
1-196: Excellent comprehensive test suite for async helpers. This test suite thoroughly validates the updated `run_until_complete` function with comprehensive coverage.

Strong points:
- Tests both sync and async execution paths as required by coding guidelines
- Covers edge cases like thread isolation, concurrent execution, and timeout handling
- Proper exception propagation testing
- Performance impact validation
- Well-structured with clear docstrings explaining test purposes
Test coverage includes:
- Basic functionality with/without event loops
- Exception handling across thread boundaries
- Thread-local data isolation
- Concurrent execution scenarios
- Nested async operations
- Timeout behavior
- Performance constraints
The tests align perfectly with the coding guidelines for comprehensive component testing and proper async test patterns.
src/backend/base/langflow/components/crewai/hierarchical_task.py (1)
10-10: LGTM - Adding legacy status marker. The addition of `legacy = True` appropriately marks this component as legacy, which aligns with the broader CrewAI component refactoring mentioned in the AI summary for safer optional dependency handling.
src/backend/base/langflow/components/icosacomputing/combinatorial_reasoner.py (1)
4-4: LGTM: Proper model categorization alignment. The import and usage of `OPENAI_CHAT_MODEL_NAMES` instead of `OPENAI_MODEL_NAMES` correctly aligns with the broader OpenAI model categorization changes, ensuring this component specifically uses chat models as intended.
Also applies to: 46-47
src/backend/base/langflow/components/data/url.py (3)
21-23: LGTM: Regex consolidation maintains functionality. The URL regex pattern consolidation to a single raw string improves readability without altering the validation logic.
245-245: Reduced logging verbosity for URL processing. Changed from `logger.info` to `logger.debug` for URL listing and document count messages, reducing default log verbosity. This is appropriate for operational information that's useful for debugging but not essential for normal operation.
Also applies to: 252-252, 262-262
129-129: No hardcoded User-Agent detected; dynamic header remains. All usages of `get_settings_service().settings.user_agent` are still present in:
- src/backend/base/langflow/components/data/api_request.py
- src/backend/base/langflow/components/data/url.py
- src/backend/base/langflow/components/data/web_search.py
No occurrences of a fixed `"langflow"` User-Agent were introduced. The original concern can be disregarded.
Likely an incorrect or invalid review comment.
src/backend/base/langflow/utils/util.py (1)
385-385: LGTM: Proper integration of reasoning models. Adding `"ReasoningOpenAI": constants.REASONING_OPENAI_MODELS` to the options_map enables UI components to properly populate model options for reasoning models, maintaining consistency with the existing pattern for other OpenAI model categories.
1-4: LGTM: Proper optional dependency handling. The try-except import pattern with fallback to `Task = object` ensures the module loads successfully even when `crewai` is not installed, allowing graceful degradation. This aligns with the broader changes to make `crewai` an optional dependency.
pyproject.toml (2)
3-3: LGTM: Coordinated version bump. The version updates from 1.4.3 to 1.5.0 for the main package and 0.4.3 to 0.5.0 for langflow-base indicate a coordinated minor release with aligned versioning.
Also applies to: 20-20
107-107: LGTM: Consistent optional dependency approach. Commenting out the `crewai` dependency while updating the version requirement to `>=0.126.0` aligns with the codebase changes that make crewai an optional dependency through graceful import fallbacks.
src/backend/tests/unit/components/agents/test_agent_component.py (1)
11-11: LGTM: Model constants updated correctly. The import and test usage have been properly updated to use the new `OPENAI_CHAT_MODEL_NAMES` constant, maintaining the same test coverage while aligning with the broader refactoring that distinguishes between OpenAI chat and reasoning models.
Also applies to: 145-145
src/backend/base/langflow/components/crewai/sequential_task.py (2)
10-10: LGTM: Legacy marking added consistently. The `legacy = True` attribute aligns with the broader pattern of marking CrewAI components as legacy, as described in the AI summary.
69-69: LGTM: Variable shadowing resolved. Changing the variable name from `task` to `task_item` in the list comprehension is a good improvement that avoids shadowing the outer `task` variable (the SequentialTask instance being created on line 59).
src/backend/base/langflow/base/models/model.py (2)
89-89: LGTM: Code simplification improvement. Directly passing instance attributes to `get_chat_result` removes unnecessary intermediate variables and makes the code more concise while maintaining readability.
176-178: LGTM: Valuable documentation added. The comment about NVIDIA reasoning models using detailed thinking provides important context for understanding the conditional logic that prepends the `DETAILED_THINKING_PREFIX` when the `detailed_thinking` attribute is set.
src/backend/tests/unit/components/models/test_language_model_component.py (1)
9-9: LGTM: Test updated for model constant refactoring. The import and test assertions have been properly updated to use the new separate `OPENAI_CHAT_MODEL_NAMES` and `OPENAI_REASONING_MODEL_NAMES` constants. The test now correctly validates that the component handles both chat and reasoning models, and the default value appropriately uses the first chat model.
Also applies to: 69-70
src/backend/base/langflow/custom/custom_component/component.py (1)
903-905: LGTM! Excellent defensive programming improvement. The change from direct dictionary access to `.get("return")` prevents `KeyError` exceptions when methods lack return type annotations. The explicit `None` check with an empty list fallback is appropriate and aligns with the broader error handling improvements in this PR.
src/backend/tests/unit/custom/custom_component/test_component.py (2)
12-18: Good implementation of optional dependency handling. The conditional import pattern with a boolean flag is clean and follows best practices for handling optional dependencies in tests.
28-28: Proper test skipping for missing dependencies. Using `pytest.mark.skipif` with a clear reason message is the standard approach for handling optional dependencies in tests, following the coding guidelines for external API tests.
src/backend/base/langflow/components/crewai/sequential_task_agent.py (2)
11-11: Consistent legacy marking across CrewAI components. The `legacy = True` attribute is appropriately added, maintaining consistency with other CrewAI components in this refactoring.
107-111: Excellent deferred import pattern with clear error messaging. Moving the CrewAI imports inside the method with try-except handling is the right approach for optional dependencies. The error message provides clear installation instructions using the project's preferred package manager.
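The deferred-import pattern for optional dependencies can be generalized as below. This is an illustrative helper, not Langflow's actual code; the function name and error wording are assumptions:

```python
import importlib


def require_optional(module_name: str, install_hint: str):
    """Import an optional dependency lazily, raising a clear error if missing.

    Called inside the method that needs the dependency, so the module
    itself always imports cleanly even when the dependency is absent.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        msg = f"{module_name} is not installed. Install it with `{install_hint}`."
        raise ImportError(msg) from e
```

A component method would then call something like `crewai = require_optional("crewai", "uv pip install crewai")` at the top of its body instead of importing at module level.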
src/backend/base/langflow/components/crewai/crewai.py (2)
23-23: Consistent legacy marking maintained across CrewAI components.
82-87: Proper implementation of deferred imports for optional dependencies. The try-except block with clear installation instructions follows the established pattern. Removing the return type annotation is correct since `Agent` is no longer available at module level.
src/backend/base/langflow/components/crewai/hierarchical_crew.py (2)
12-12: Consistent legacy marking applied across CrewAI components.
22-27: Well-implemented deferred import pattern for multiple CrewAI classes. The try-except block properly handles the import of both `Crew` and `Process` classes, with a clear error message guiding users to install the required dependency. Removing the return type annotation is the correct approach when the type is no longer available at module level.
src/backend/base/langflow/base/models/openai_constants.py (4)
20-26: LGTM: New reasoning models added correctly. The addition of the new reasoning models (o1-mini, o1-pro, o3-mini, o3, o3-pro, o4-mini, o4-mini-high) follows the established pattern and correctly uses the `reasoning=True` flag to categorize them appropriately.
37-43: LGTM: Search model addition is well-formatted. The new `gpt-4o-search-preview` model entry is correctly configured with the appropriate flags (`search=True`, `preview=True`, `tool_calling=True`).
61-67: Excellent refactoring of filtering logic. The improved filtering logic correctly excludes `not_supported` models before filtering out reasoning and search models. This ensures that unsupported models are properly excluded from the chat model list, which is a more robust approach than the previous implementation.
90-90: LGTM: Backward compatibility alias updated correctly. The `MODEL_NAMES` alias is correctly updated to point to the new `OPENAI_CHAT_MODEL_NAMES` constant, maintaining backward compatibility while aligning with the refactoring.
src/backend/base/langflow/components/crewai/sequential_crew.py (3)
11-11: LGTM: Legacy flag addition is appropriate. The `legacy = True` attribute is correctly added, indicating this component follows the legacy pattern and aligns with the broader refactoring across CrewAI components.
19-19: LGTM: Return type generalization is necessary. The return type annotations are appropriately generalized from specific `Agent` and `Task` types to generic `list` and `tuple[list, list]` types. This change is necessary since the specific CrewAI types are no longer imported at module level.
Also applies to: 23-23
32-36: Excellent improvement in optional dependency handling. The move of `crewai` imports inside the method with proper try-except error handling is a great improvement. The error message is clear and actionable, providing the exact command (`uv pip install crewai`) users need to run to resolve the dependency issue.
src/backend/base/langflow/services/auth/utils.py (4)
62-66: LGTM: Improved control flow for AUTO_LOGIN handling. The refactored control flow correctly handles the case when `AUTO_LOGIN` is enabled and no API key is provided. The immediate return for `skip_auth_auto_login` and the clear error handling for the alternative case improve code readability and maintainability.
67-67: LGTM: Simplified API key validation logic. The consolidation of API key checking into a single call using `query_param or header_param` is cleaner and more maintainable than the previous implementation.
76-76: LGTM: Consistent API key checking logic. The unified approach to checking API keys using the same `query_param or header_param` pattern as the AUTO_LOGIN flow above creates consistency and reduces code duplication.
83-86: LGTM: Improved flow control and validation. The positioning of the validation logic and the clean return statement improve the overall flow of the function. The `isinstance(result, User)` check ensures type safety before conversion to `UserRead`.
src/backend/base/langflow/initial_setup/starter_projects/Simple Agent.json (1)
1596-1596: URLComponent review – logging OK, regex present, User-Agent header still dynamic
- URL_REGEX is defined at line 20 and used in `validate_url` (line 182); consolidation appears applied.
- Logging calls use `logger.debug` (lines 245, 252, 262) with no remaining `info`-level logs.
- The User-Agent header remains dynamic via `get_settings_service().settings.user_agent` (line 129), not fixed to `"langflow"` as mentioned in the AI summary.
Please either:
- Update the code to set the User-Agent header to the fixed value `"langflow"`, OR
- Correct the AI summary/review comment to reflect that the header is still dynamic.
⛔ Skipped due to learnings
Learnt from: edwinjosechittilappilly
PR: langflow-ai/langflow#8504
File: src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json:391-393
Timestamp: 2025-06-12T15:25:01.072Z
Learning: The repository owner prefers CodeRabbit not to review or comment on JSON files because they are autogenerated.
src/backend/base/langflow/components/openai/openai_chat_model.py (7)
8-8: LGTM - Import statement updated for model categorization. The import change from `OPENAI_MODEL_NAMES` to `OPENAI_CHAT_MODEL_NAMES` aligns with the broader refactoring to distinguish between chat and reasoning models.
48-49: LGTM - Model options expanded with appropriate default. The combination of `OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES` provides comprehensive model selection, and defaulting to `OPENAI_CHAT_MODEL_NAMES[0]` is sensible as chat models are more commonly used.
100-100: LGTM - Debug logging enhances traceability. The debug logging provides useful insight into which model is being executed, which aids in troubleshooting and monitoring.
111-112: LGTM - Clear documentation of reasoning model limitations. The TODO comment provides valuable context for future development, and the explicit list of unsupported parameters for reasoning models is clear and accurate.
117-119: LGTM - Proper handling of reasoning model constraints. The logic correctly excludes `temperature` and `seed` parameters for reasoning models, with informative debug logging explaining why these parameters are ignored.
150-152: LGTM - Appropriate UI handling for o1 model constraints. The logic correctly hides the `system_message` input for o1 models, which is appropriate since these reasoning models don't support system messages. The prefix check ensures all o1 variants are handled correctly.
153-157: LGTM - Correct UI field visibility for chat models. The logic properly shows all parameter inputs (`temperature`, `seed`, `system_message`) for chat models, as these models support all these configuration options.
src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)
1082-1082: Confirm `temperature=None` support in ChatOpenAI. We weren't able to locate the `ChatOpenAI.__init__` signature in the local codebase to verify that `temperature=None` is accepted. Please:
- Check your lockfile (e.g., poetry.lock or requirements.txt) for the pinned `langchain-openai` version.
- Confirm that its `ChatOpenAI` constructor allows `temperature=None`.
- If it doesn't, update the reasoning-model branch in `build_model` (around Financial Report Parser.json:1082) to use a numeric default (e.g., `0`) instead of `None`.
1030-1038: Verify that `ChatGoogleGenerativeAI` supports the `streaming` parameter. Some released versions of `langchain_google_genai` do not yet implement streaming for Gemini; passing `streaming=` may raise `TypeError: __init__() got an unexpected keyword argument`.
Please confirm the package version in poetry.lock/requirements.txt or gate the argument behind a feature check.
2275-2366: Passing `temperature=None` may break `ChatOpenAI`. `ChatOpenAI` expects `temperature` to be a float (0-2). Setting it to `None` for reasoning models can raise a type error in the LangChain wrapper. Prefer omitting the kwarg altogether:

```diff
 if model_name in OPENAI_REASONING_MODEL_NAMES:
-    temperature = None
+    temperature = 0  # or drop the argument entirely via **kwargs filtering
```

Or build kwargs dynamically:

```python
params = dict(model_name=model_name, streaming=stream, openai_api_key=self.api_key)
if temperature is not None:
    params["temperature"] = temperature
return ChatOpenAI(**params)
```
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)
4586-4586: Verify `streaming` parameter support in ChatGoogleGenerativeAI. We attempted to inspect the constructor in this sandbox, but `langchain_google_genai` isn't installed here. Please confirm in your local environment that `ChatGoogleGenerativeAI` accepts the `streaming` kwarg. If it doesn't, update the Google branch in `build_model` to only pass `streaming` when supported, for example:

```python
from inspect import signature

# inside build_model(), when provider == "Google":
params = {"model": model_name, "temperature": temperature}
if "streaming" in signature(ChatGoogleGenerativeAI).parameters:
    params["streaming"] = stream
return ChatGoogleGenerativeAI(**params, google_api_key=self.api_key)
```
src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)
1410-1440: `o1`-prefix check seems stale / never triggered. `update_build_config` hides `system_message` when `model_name.startswith("o1")`, yet the canonical lists now ship `gpt-4o*`, `gpt-4.*`, etc. Unless there are still internal "o1" models in use, this branch will never execute; the UI control will stay visible and may confuse users about unsupported behaviour.
Verify the intended prefix or replace with a safer capability flag (e.g. a dedicated `SUPPORTS_SYSTEM_MESSAGE` set).
src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (1)
2644-2660: Verify `streaming` parameter for `ChatGoogleGenerativeAI`. `ChatGoogleGenerativeAI` (langchain-google-genai) historically uses `stream` rather than `streaming`. Passing an unknown kwarg will raise `TypeError`. Please double-check the current signature and, if necessary, adjust:

```diff
-        return ChatGoogleGenerativeAI(
-            model=model_name,
-            temperature=temperature,
-            streaming=stream,
-            google_api_key=self.api_key,
-        )
+        return ChatGoogleGenerativeAI(
+            model=model_name,
+            temperature=temperature,
+            stream=stream,
+            google_api_key=self.api_key,
+        )
```

Would you run a quick grep or unit test against the current dependency version to confirm which keyword is accepted?
src/backend/base/langflow/components/models/language_model.py (5)
10-10: LGTM! Import updated to distinguish between chat and reasoning models. The import change from `OPENAI_MODEL_NAMES` to separate `OPENAI_CHAT_MODEL_NAMES` and `OPENAI_REASONING_MODEL_NAMES` provides better model categorization and aligns with the broader OpenAI model support updates.
39-42: LGTM! Model dropdown properly combines both model types. The model dropdown now correctly combines both chat and reasoning models, defaults to the first chat model, and includes real-time refresh for dynamic UI updates.
61-61: LGTM! System message input made more accessible. Changing `advanced=False` makes the system message input visible by default, improving user experience for this commonly used parameter.
127-128: LGTM! Build config update consistent with model separation. The update to use both model lists in the build configuration is consistent with the earlier changes and maintains the default selection of the first chat model.
91-93: Please verify ChatOpenAI handles `None` temperature without errors

I wasn't able to locate the `ChatOpenAI` constructor in the repo—ensure that passing `temperature=None` into `langchain.chat_models.ChatOpenAI` (or your local wrapper) won't trigger a type/validation error. You may need to:
- Inspect the `__init__` signature of `ChatOpenAI` in your LangChain version
- Or add a quick unit test instantiating it with `temperature=None`

This affects:
- src/backend/base/langflow/components/models/language_model.py (around lines 91–93)

If `None` isn't accepted, consider filtering it out or supplying a fallback value.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)
2030-2080: `temperature=None` might be invalid for `ChatOpenAI`

`build_model()` forces `temperature = None` for reasoning models and then passes it straight to `ChatOpenAI`. Up-to-date `langchain-openai` versions still type-hint `temperature` as `float`, and runtime validation rejects `None`. This will raise a `ValidationError` (pydantic) or `TypeError`, breaking flow execution as soon as a reasoning model is selected.

- if model_name in OPENAI_REASONING_MODEL_NAMES:
-     # reasoning models do not support temperature (yet)
-     temperature = None
+ if model_name in OPENAI_REASONING_MODEL_NAMES:
+     # Reasoning models ignore temperature; keep default instead of None
+     temperature = 0.0

Please confirm against the exact `langchain-openai` version shipped in requirements.txt. If that version already accepts `None`, ignore; otherwise adjust as above or omit the arg entirely when `None`.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)
995-995: Guard against stale model selections when provider changes
`update_build_config` resets `model_name.value` to the first option of the new provider, but any downstream nodes or cached configs may still hold the old (now invalid) value.
Consider returning a companion list of invalidated fields or emitting a validation warning so the frontend knows to re-sync dependent values.

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)
1931-1931: LGTM! Proper handling of OpenAI reasoning models implemented.

The embedded LanguageModelComponent code correctly implements the distinction between OpenAI chat and reasoning models. Key improvements:
- Correct model imports: Uses `OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES` for comprehensive model options
- Temperature handling: Properly sets `temperature = None` for reasoning models since they don't support temperature yet
- UI adaptations: Dynamically hides `system_message` for o1 models which don't support system messages
- Consistent patterns: Aligns with the broader refactoring described in the AI summary
The logic correctly identifies reasoning models using `model_name in OPENAI_REASONING_MODEL_NAMES`, and the UI updates appropriately respond to model selection changes.

src/backend/base/langflow/base/agents/crewai/crew.py (5)
44-44: LGTM! Proper deferred import pattern implemented.

The changes to the `convert_llm` function correctly implement the deferred import pattern:
- Parameter generalization: Changed from a specific type to `Any` to avoid import dependencies
- Runtime import: Moved `from crewai import LLM` inside the function with proper error handling
- Clear error message: Provides actionable guidance when CrewAI is not installed
This aligns with the same pattern used in other CrewAI components shown in the relevant code snippets.
Also applies to: 54-58
114-118: LGTM! Consistent deferred import for tools conversion.

The `convert_tools` function properly implements the same deferred import pattern:
- Runtime import: Moves `from crewai.tools.base_tool import Tool` inside the function
- Error handling: Consistent error message format with installation instructions
- Graceful degradation: Function can be defined without requiring CrewAI at module load time
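The deferred-import pattern described above can be sketched generically. The helper name `require_optional`, the error wording, and the `Tool.from_langchain` usage below are illustrative assumptions, not the actual crew.py code:

```python
import importlib

def require_optional(module_name: str, hint: str):
    """Import an optional dependency at call time with an actionable error."""
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        msg = f"{module_name} is not installed. {hint}"
        raise ImportError(msg) from e

def convert_tools(tools):
    # The module can be imported (and this function defined) without CrewAI;
    # the dependency is only required once the function is actually called.
    base_tool = require_optional(
        "crewai.tools.base_tool", "Install it with: uv pip install crewai"
    )
    # Hypothetical conversion call, shown only to illustrate the pattern.
    return [base_tool.Tool.from_langchain(t) for t in tools]
```

The key property is that import failures surface at call time with an actionable message, rather than at module load time.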
153-153: LGTM! Type annotation removal improves dependency management.

The removal of explicit type annotations on class methods is appropriate for this refactoring:
- Reduced dependencies: Eliminates need to import CrewAI types at module level
- Maintained functionality: Methods still work correctly with duck typing
- Consistent pattern: Aligns with similar changes across other CrewAI components
This change supports the goal of making CrewAI an optional dependency while maintaining backward compatibility.
Also applies to: 155-155, 158-158, 171-171, 179-179
186-190: LGTM! Proper error handling for task callback.

The `get_task_callback` method correctly implements deferred import:
- Runtime import: Moves `from crewai.task import TaskOutput` inside the method
- Clear error message: Provides installation instructions when CrewAI is missing
- Functional preservation: Callback functionality remains intact when dependencies are available
201-206: LGTM! Comprehensive deferred import for step callback.

The `get_step_callback` method properly implements the deferred import pattern:
- Runtime import: Moves `from langchain_core.agents import AgentFinish` inside the method
- Dual dependency handling: Addresses both CrewAI and langchain_core dependencies
- Parameter generalization: Removes the specific type annotation for the `agent_output` parameter
- Consistent error messaging: Provides clear installation instructions
This completes the pattern of making all CrewAI-related functionality work with optional dependencies.
Also applies to: 207-207
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)
1190-1195: Depth slider now allows up to 10 but keeps `step_type="float"`

`step_type` is `"float"` while `step` is `1`. Using a float step for an integer-only domain is misleading and may break validation in some front-end widgets.

- "step": 1,
- "step_type": "float"
+ "step": 1,
+ "step_type": "int"

Also consider adding a short warning in the `info` field about the exponential crawl cost beyond depth 5.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A 
system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n 
build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" |
🛠️ Refactor suggestion
system_message may remain hidden after provider switches away from OpenAI
update_build_config hides system_message when an OpenAI model starting with o1 is chosen, but the provider branch doesn’t explicitly reset the flag when the user later changes provider (e.g., to Anthropic).
Because UI updates fire per-field, the previous hidden state can persist, making the field permanently invisible.
Add a reset in the "provider" section:
elif field_name == "provider":
...
+     # always ensure system_message is visible when leaving OpenAI/o1 context
+     if "system_message" in build_config:
+         build_config["system_message"]["show"] = True

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" | |
def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:
    if field_name == "provider":
        if field_value == "OpenAI":
            build_config["model_name"]["options"] = OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES
            build_config["model_name"]["value"] = OPENAI_CHAT_MODEL_NAMES[0]
            build_config["api_key"]["display_name"] = "OpenAI API Key"
        elif field_value == "Anthropic":
            build_config["model_name"]["options"] = ANTHROPIC_MODELS
            build_config["model_name"]["value"] = ANTHROPIC_MODELS[0]
            build_config["api_key"]["display_name"] = "Anthropic API Key"
        elif field_value == "Google":
            build_config["model_name"]["options"] = GOOGLE_GENERATIVE_AI_MODELS
            build_config["model_name"]["value"] = GOOGLE_GENERATIVE_AI_MODELS[0]
            build_config["api_key"]["display_name"] = "Google API Key"
        # always ensure system_message is visible when leaving OpenAI/o1 context
        if "system_message" in build_config:
            build_config["system_message"]["show"] = True
    elif field_name == "model_name" and field_value.startswith("o1") and self.provider == "OpenAI":
        # Hide system_message for o1 models - currently unsupported
        if "system_message" in build_config:
            build_config["system_message"]["show"] = False
    elif field_name == "model_name" and not field_value.startswith("o1") and "system_message" in build_config:
        build_config["system_message"]["show"] = True
    return build_config
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Financial Report
Parser.json at line 1082, the update_build_config method hides the
system_message field when an OpenAI model starting with "o1" is selected but
does not reset this visibility when the provider changes to Anthropic or Google.
To fix this, add logic in the "provider" field_name branch to explicitly set
build_config["system_message"]["show"] to True when the provider switches away
from OpenAI, ensuring the system_message field becomes visible again.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A 
system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n 
build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" |
🛠️ Refactor suggestion
Passing temperature=None can break ChatOpenAI initialisation
build_model() sets temperature = None for reasoning models and still passes it:
return ChatOpenAI(
model_name=model_name,
temperature=temperature, # may be None
streaming=stream,
openai_api_key=self.api_key,
)
langchain_openai.ChatOpenAI expects temperature: float (no Optional[float] in the current stub/impl).
If the runtime implementation rejects None, the component will raise at execution time.
- if model_name in OPENAI_REASONING_MODEL_NAMES:
- # reasoning models do not support temperature (yet)
- temperature = None
-
- return ChatOpenAI(
- model_name=model_name,
- temperature=temperature,
- streaming=stream,
- openai_api_key=self.api_key,
- )
+ kwargs: dict[str, Any] = {
+ "model_name": model_name,
+ "streaming": stream,
+ "openai_api_key": self.api_key,
+ }
+ # Reasoning models ignore temperature – omit the field entirely
+ if model_name not in OPENAI_REASONING_MODEL_NAMES:
+ kwargs["temperature"] = temperature
+
+ return ChatOpenAI(**kwargs)
This avoids a hard failure while still honouring temperature for chat-capable models.
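The conditional-kwargs pattern suggested above can be sketched in isolation. This uses a stand-in kwargs builder and an illustrative reasoning-model set rather than the real langflow constants or the actual ChatOpenAI constructor:

```python
from typing import Any

# Illustrative stand-in; the real list lives in
# langflow.base.models.openai_constants.OPENAI_REASONING_MODEL_NAMES.
REASONING_MODELS = {"o1", "o1-mini", "o1-preview"}


def build_openai_kwargs(
    model_name: str, temperature: float, stream: bool, api_key: str
) -> dict[str, Any]:
    """Collect ChatOpenAI kwargs, omitting temperature for reasoning models."""
    kwargs: dict[str, Any] = {
        "model_name": model_name,
        "streaming": stream,
        "openai_api_key": api_key,
    }
    # Reasoning models ignore temperature, so omit the key entirely
    # instead of passing temperature=None.
    if model_name not in REASONING_MODELS:
        kwargs["temperature"] = temperature
    return kwargs


# Chat model keeps its temperature; reasoning model drops the key.
assert "temperature" in build_openai_kwargs("gpt-4.1", 0.1, False, "sk-test")
assert "temperature" not in build_openai_kwargs("o1", 0.1, False, "sk-test")
```

The same dict would then be splatted into the real constructor as ChatOpenAI(**kwargs), so no None ever reaches the model's validator.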
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Research Translation
Loop.json at line 1405, the build_model method passes temperature=None to
ChatOpenAI for reasoning models, but ChatOpenAI expects a float and may fail if
given None. To fix this, modify the code to only include the temperature
argument when it is not None, avoiding passing temperature=None to ChatOpenAI
while preserving temperature usage for chat-capable models.
🛠️ Refactor suggestion
update_build_config can leave system_message permanently hidden when switching providers
Hiding/showing the system_message input is only triggered on model_name changes:
elif field_name == "model_name" and field_value.startswith("o1") ...
If a user selects an o1 model (hiding the field) and then switches the provider to Anthropic or Google, the flag is never reset, so system_message stays invisible even though those providers support it.
Quick fix:
if field_name == "provider":
...
+ # Always re-enable system_message when leaving OpenAI or changing provider
+ if "system_message" in build_config:
+ build_config["system_message"]["show"] = True(You may also want to hide the temperature slider for reasoning models in the same callback to keep the UI consistent.)
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Research Translation
Loop.json at line 1405, the update_build_config method only toggles the
visibility of the system_message input when the model_name changes, causing it
to remain hidden if the provider changes after selecting an o1 model. To fix
this, add logic in the update_build_config method to reset the system_message
visibility appropriately when the provider changes, ensuring it is shown for
Anthropic and Google providers. Additionally, consider adding similar logic to
hide or show the temperature slider for reasoning models to maintain UI
consistency.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A 
system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n 
build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" |
🛠️ Refactor suggestion
o1 prefix heuristic is brittle
Hiding system_message only when model_name.startswith("o1") risks missing future reasoning models (e.g., o3, o4-mini) that also lack system-message support. Maintain an explicit deny-list or capability map in openai_constants instead of relying on a hard-coded prefix.
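A deny-list along those lines might look like the following sketch (the model names are illustrative; the actual set would be maintained alongside the other openai_constants):

```python
# Illustrative deny-list of models lacking system-message support;
# the real constant would live in langflow.base.models.openai_constants.
MODELS_WITHOUT_SYSTEM_MESSAGE = frozenset({"o1", "o1-mini", "o1-preview"})


def supports_system_message(model_name: str) -> bool:
    """Capability lookup instead of a brittle startswith('o1') prefix check."""
    return model_name not in MODELS_WITHOUT_SYSTEM_MESSAGE


# Membership is exact, so unrelated names are unaffected and new
# reasoning models are handled by updating one set in one place.
assert not supports_system_message("o1-mini")
assert supports_system_message("gpt-4.1")
```

update_build_config would then toggle system_message visibility via supports_system_message(field_value) rather than matching on a prefix.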
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Vector Store
RAG.json at line 4586, the code hides the system_message input based on the
model_name starting with "o1", which is a brittle heuristic. To fix this,
replace the prefix check with a lookup against an explicit deny-list or
capability map defined in openai_constants that lists models without
system_message support. Update the condition to check if the model_name is in
that deny-list instead of using startswith("o1").
Avoid passing temperature=None to ChatOpenAI
langchain_openai.ChatOpenAI expects temperature to be a float in [0,2]. Passing None can raise a pydantic ValidationError at runtime. Skip the parameter when it’s not applicable instead of passing None.
- return ChatOpenAI(
- model_name=model_name,
- temperature=temperature,
- streaming=stream,
- openai_api_key=self.api_key,
- )
+ openai_kwargs = {
+ "model_name": model_name,
+ "streaming": stream,
+ "openai_api_key": self.api_key,
+ }
+ if temperature is not None:
+ openai_kwargs["temperature"] = temperature
+ return ChatOpenAI(**openai_kwargs)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" | |
| if provider == "OpenAI": | |
| if not self.api_key: | |
| msg = "OpenAI API key is required when using OpenAI provider" | |
| raise ValueError(msg) | |
| if model_name in OPENAI_REASONING_MODEL_NAMES: | |
| # reasoning models do not support temperature (yet) | |
| temperature = None | |
| - return ChatOpenAI( | |
| - model_name=model_name, | |
| - temperature=temperature, | |
| - streaming=stream, | |
| - openai_api_key=self.api_key, | |
| - ) | |
| + openai_kwargs = { | |
| + "model_name": model_name, | |
| + "streaming": stream, | |
| + "openai_api_key": self.api_key, | |
| + } | |
| + if temperature is not None: | |
| + openai_kwargs["temperature"] = temperature | |
| + return ChatOpenAI(**openai_kwargs) |
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Vector Store
RAG.json at line 4586, the build_model method passes temperature=None to
ChatOpenAI when using reasoning models, which causes a validation error. To fix
this, modify the code to omit the temperature parameter entirely from the
ChatOpenAI constructor when temperature is None, instead of passing
temperature=None.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A 
system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n 
build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" |
Possibly wrong keyword for ChatAnthropic instantiation
Depending on the langchain_anthropic version, ChatAnthropic may expect the argument model_name rather than model (in current releases the field is model with model_name as an alias, so both are typically accepted). If the keyword is not accepted, instantiation will fail at runtime when users switch the provider to Anthropic.
- return ChatAnthropic(
- model=model_name,
+ return ChatAnthropic(
+ model_name=model_name,
temperature=temperature,
streaming=stream,
anthropic_api_key=self.api_key,
)📝 Committable suggestion
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" | |
| if provider == "Anthropic": | |
| if not self.api_key: | |
| msg = "Anthropic API key is required when using Anthropic provider" | |
| raise ValueError(msg) | |
| return ChatAnthropic( | |
| model_name=model_name, | |
| temperature=temperature, | |
| streaming=stream, | |
| anthropic_api_key=self.api_key, | |
| ) |
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword
Generator.json at line 968, the instantiation of ChatAnthropic incorrectly uses
the keyword argument 'model' instead of 'model_name'. To fix this, replace the
'model' keyword with 'model_name' in the ChatAnthropic constructor call to match
its expected parameters and avoid runtime TypeErrors.
💡 Verification agent
🧩 Analysis chain
Double-check that temperature=None is accepted by ChatOpenAI
When a reasoning model is selected you coerce temperature to None, yet you still pass it through to ChatOpenAI. Older versions of langchain_openai expect a real float and will raise:
TypeError: float() argument must be a string or a real number, not 'NoneType'
Please verify the current version’s signature or guard the parameter, e.g.:
kwargs = dict(model_name=model_name, streaming=stream, openai_api_key=self.api_key)
if temperature is not None:
kwargs["temperature"] = temperature
return ChatOpenAI(**kwargs)
🌐 Web query:
Does `langchain_openai.ChatOpenAI` accept `temperature=None` without error?
💡 Result:
The behavior of passing temperature=None to langchain_openai.ChatOpenAI depends on the version of LangChain and, crucially, on the specific OpenAI model you are using.
1. Supported Values for temperature Parameter
- Most OpenAI chat models (such as `gpt-3.5-turbo` and `gpt-4`) expect `temperature` to be a `float` between 0 and 2. Passing `None` typically results in no `temperature` parameter being sent to the OpenAI API, and the API will use its default value (usually 1 or 0.7, depending on the model)[3][10].
- In recent versions (`langchain-openai >= 0.3`), the default for `temperature` was updated to `None` to avoid compatibility issues with some of the newer "o" series models, which may not accept a `temperature` parameter at all[2].
2. Model-specific Limitations
- Some newer OpenAI models, such as `o3-mini`, explicitly do not support the `temperature` parameter at all. Passing it (even as `None`, if it results in the parameter being sent as `"temperature": null`) will result in an error from the API:[2][4] `BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'temperature' is not supported with this model.", ... }}`
- If `langchain_openai.ChatOpenAI` is implemented such that `temperature=None` is omitted from the payload, you will not get an error. If the parameter is included as `null` or `None`, you will get an error for models that strictly disallow it.
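The distinction here (parameter omitted vs. sent as `null`) can be seen directly in how a request body serializes; this is a minimal illustration independent of LangChain, with the filtering helper being purely illustrative:

```python
import json

# A dict that still carries temperature=None serializes it as null, which
# strict models reject; dropping the key avoids sending the parameter at all.
raw = {"model": "o3-mini", "temperature": None}
with_null = json.dumps(raw)
omitted = json.dumps({k: v for k, v in raw.items() if v is not None})

print(with_null)  # {"model": "o3-mini", "temperature": null}
print(omitted)    # {"model": "o3-mini"}
```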
3. LangChain's Current Behavior (as of 2024/2025)
- Documentation and community reports indicate that with `langchain-openai ~0.3` and up, setting `temperature=None` is usually safe and will not raise an error for the models that require this behavior, as the parameter will simply be omitted from the API call payload[2][6].
- However, for maximum safety and compatibility, especially with models that do not support the parameter at all, do not set the `temperature` parameter at all; let it default, or set it via `model_kwargs` only if you know the model accepts it[2][6][10].
If you are using a model like `gpt-4o` or `gpt-3.5-turbo`, passing `temperature=None` is very unlikely to cause an error.
If you are using a model like `o3-mini` or another "o" series model, passing `temperature=None` is acceptable with recent LangChain versions, provided the library omits the parameter in the payload[2][4].
If you pass `temperature=0` or any float, but the model does not support it, an error will occur[2][4].
Summary Table:
| Model | temperature=None allowed? | temperature float allowed? | Error if param sent? |
|---|---|---|---|
| gpt-3.5-turbo | Yes | Yes | No |
| gpt-4, gpt-4o | Yes | Yes | No |
| o3-mini, o1 | Yes (if omitted) | No | Yes, if parameter is sent |
Best Practice:
Use `temperature=None` only with recent `langchain-openai` versions and models that require it; otherwise, omit the parameter entirely[2][6][10]. For the most robust code, detect model capabilities and set parameters appropriately.
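As a sketch of that best practice, the constructor arguments can be assembled conditionally. The model set and helper name below are illustrative, not taken from the PR:

```python
# Hypothetical helper: build ChatOpenAI kwargs, omitting temperature for
# models assumed not to support it. REASONING_MODELS is an example set.
REASONING_MODELS = {"o1", "o1-mini", "o3-mini"}

def build_chat_kwargs(model_name, temperature=None, stream=False, api_key="sk-..."):
    kwargs = {"model_name": model_name, "streaming": stream, "openai_api_key": api_key}
    # Include temperature only when a value was given and the model accepts it.
    if temperature is not None and model_name not in REASONING_MODELS:
        kwargs["temperature"] = temperature
    return kwargs

print(build_chat_kwargs("o3-mini", temperature=0.2))  # no "temperature" key
```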
Citations:
- 1: temperature not work in ChatOpenAI langchain-ai/langchain#2292
- 2: Extend support for OpenAI o3 style models in BaseChatOpenAI class langchain-ai/langchain#29632
- 3: https://gptcache.readthedocs.io/en/latest/bootcamp/temperature/chat.html
- 4: https://community.openai.com/t/o3-mini-unsupported-parameter-temperature/1140846
- 5: set the temperature value to 0, but the response results are different for each run langchain-ai/langchain#3343
- 6: https://python.langchain.com/docs/integrations/chat/openai/
- 7: https://community.openai.com/t/openai-temperature-parameter/287485
- 8: https://community.openai.com/t/gpt-4o-context-window-is-128k-but-getting-error-models-maximum-context-length-is-8192-tokens-however-you-requested-21026-tokens/802809
- 9: https://community.openai.com/t/cheat-sheet-mastering-temperature-and-top-p-in-chatgpt-api/172683
- 10: https://sj-langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.openai.ChatOpenAI.html
Ensure ChatOpenAI is not called with temperature=None
Passing temperature=None can lead to malformed payloads or API errors for certain OpenAI models (especially “o-series”). Guard against this by only including the parameter when it’s a real number.
Locations to update:
- File: `src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json`
- Method: `LanguageModelComponent.build_model()` under the `provider == "OpenAI"` branch
Suggested change:
- return ChatOpenAI(
- model_name=model_name,
- temperature=temperature,
- streaming=stream,
- openai_api_key=self.api_key,
- )
+ kwargs = {
+ "model_name": model_name,
+ "streaming": stream,
+ "openai_api_key": self.api_key,
+ }
+ if temperature is not None:
+ kwargs["temperature"] = temperature
+ return ChatOpenAI(**kwargs)
This ensures `temperature` is only sent when supported, avoiding null parameters in the API payload.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" | |
| if provider == "OpenAI": | |
| if not self.api_key: | |
| msg = "OpenAI API key is required when using OpenAI provider" | |
| raise ValueError(msg) | |
| if model_name in OPENAI_REASONING_MODEL_NAMES: | |
| # reasoning models do not support temperature (yet) | |
| temperature = None | |
| kwargs = { | |
| "model_name": model_name, | |
| "streaming": stream, | |
| "openai_api_key": self.api_key, | |
| } | |
| if temperature is not None: | |
| kwargs["temperature"] = temperature | |
| return ChatOpenAI(**kwargs) |
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword
Generator.json at line 968 inside the LanguageModelComponent.build_model()
method under the provider == "OpenAI" branch, the temperature parameter is
passed directly even when it is None, which can cause API errors. Modify the
code to include the temperature parameter only if it is not None by
conditionally adding it to the ChatOpenAI constructor arguments, ensuring no
temperature=None is sent in the API call.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[0],\n info=\"Select the model to use\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A 
system message that helps set the behavior of the assistant\",\n advanced=True,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n 
build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_anthropic import ChatAnthropic\nfrom langchain_google_genai import ChatGoogleGenerativeAI\nfrom langchain_openai import ChatOpenAI\n\nfrom langflow.base.models.anthropic_constants import ANTHROPIC_MODELS\nfrom langflow.base.models.google_generative_ai_constants import GOOGLE_GENERATIVE_AI_MODELS\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import OPENAI_CHAT_MODEL_NAMES, OPENAI_REASONING_MODEL_NAMES\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs.inputs import BoolInput\nfrom langflow.io import DropdownInput, MessageInput, MultilineInput, SecretStrInput, SliderInput\nfrom langflow.schema.dotdict import dotdict\n\n\nclass LanguageModelComponent(LCModelComponent):\n display_name = \"Language Model\"\n description = \"Runs a language model given a specified provider.\"\n documentation: str = \"https://docs.langflow.org/components-models\"\n icon = \"brain-circuit\"\n category = \"models\"\n priority = 0 # Set priority to 0 to make it appear first\n\n inputs = [\n DropdownInput(\n name=\"provider\",\n display_name=\"Model Provider\",\n options=[\"OpenAI\", \"Anthropic\", \"Google\"],\n value=\"OpenAI\",\n info=\"Select the model provider\",\n real_time_refresh=True,\n options_metadata=[{\"icon\": \"OpenAI\"}, {\"icon\": \"Anthropic\"}, {\"icon\": \"GoogleGenerativeAI\"}],\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n options=OPENAI_CHAT_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_CHAT_MODEL_NAMES[0],\n info=\"Select the model to use\",\n real_time_refresh=True,\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"Model Provider API key\",\n required=False,\n show=True,\n real_time_refresh=True,\n ),\n MessageInput(\n name=\"input_value\",\n display_name=\"Input\",\n info=\"The input text to send to the 
model\",\n ),\n MultilineInput(\n name=\"system_message\",\n display_name=\"System Message\",\n info=\"A system message that helps set the behavior of the assistant\",\n advanced=False,\n ),\n BoolInput(\n name=\"stream\",\n display_name=\"Stream\",\n info=\"Whether to stream the response\",\n value=False,\n advanced=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n info=\"Controls randomness in responses\",\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n advanced=True,\n ),\n ]\n\n def build_model(self) -> LanguageModel:\n provider = self.provider\n model_name = self.model_name\n temperature = self.temperature\n stream = self.stream\n\n if provider == \"OpenAI\":\n if not self.api_key:\n msg = \"OpenAI API key is required when using OpenAI provider\"\n raise ValueError(msg)\n\n if model_name in OPENAI_REASONING_MODEL_NAMES:\n # reasoning models do not support temperature (yet)\n temperature = None\n\n return ChatOpenAI(\n model_name=model_name,\n temperature=temperature,\n streaming=stream,\n openai_api_key=self.api_key,\n )\n if provider == \"Anthropic\":\n if not self.api_key:\n msg = \"Anthropic API key is required when using Anthropic provider\"\n raise ValueError(msg)\n return ChatAnthropic(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n anthropic_api_key=self.api_key,\n )\n if provider == \"Google\":\n if not self.api_key:\n msg = \"Google API key is required when using Google provider\"\n raise ValueError(msg)\n return ChatGoogleGenerativeAI(\n model=model_name,\n temperature=temperature,\n streaming=stream,\n google_api_key=self.api_key,\n )\n msg = f\"Unknown provider: {provider}\"\n raise ValueError(msg)\n\n def update_build_config(self, build_config: dotdict, field_value: Any, field_name: str | None = None) -> dotdict:\n if field_name == \"provider\":\n if field_value == \"OpenAI\":\n build_config[\"model_name\"][\"options\"] = OPENAI_CHAT_MODEL_NAMES + 
OPENAI_REASONING_MODEL_NAMES\n build_config[\"model_name\"][\"value\"] = OPENAI_CHAT_MODEL_NAMES[0]\n build_config[\"api_key\"][\"display_name\"] = \"OpenAI API Key\"\n elif field_value == \"Anthropic\":\n build_config[\"model_name\"][\"options\"] = ANTHROPIC_MODELS\n build_config[\"model_name\"][\"value\"] = ANTHROPIC_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Anthropic API Key\"\n elif field_value == \"Google\":\n build_config[\"model_name\"][\"options\"] = GOOGLE_GENERATIVE_AI_MODELS\n build_config[\"model_name\"][\"value\"] = GOOGLE_GENERATIVE_AI_MODELS[0]\n build_config[\"api_key\"][\"display_name\"] = \"Google API Key\"\n elif field_name == \"model_name\" and field_value.startswith(\"o1\") and self.provider == \"OpenAI\":\n # Hide system_message for o1 models - currently unsupported\n if \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = False\n elif field_name == \"model_name\" and not field_value.startswith(\"o1\") and \"system_message\" in build_config:\n build_config[\"system_message\"][\"show\"] = True\n return build_config\n" |
Fix wrong keyword for streaming in Anthropic & Google model builders
`ChatAnthropic` and `ChatGoogleGenerativeAI` expect the keyword argument `stream`, not `streaming`.
Passing the wrong kwarg will raise `TypeError: __init__() got an unexpected keyword argument 'streaming'` at runtime.
- return ChatAnthropic(
- model=model_name,
- temperature=temperature,
- streaming=stream,
- anthropic_api_key=self.api_key,
- )
+ return ChatAnthropic(
+ model=model_name,
+ temperature=temperature,
+ stream=stream,
+ anthropic_api_key=self.api_key,
+ )
[...]
- return ChatGoogleGenerativeAI(
- model=model_name,
- temperature=temperature,
- streaming=stream,
- google_api_key=self.api_key,
- )
+ return ChatGoogleGenerativeAI(
+ model=model_name,
+ temperature=temperature,
+ stream=stream,
+ google_api_key=self.api_key,
+ )
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json
at line 995, the ChatAnthropic and ChatGoogleGenerativeAI constructors
incorrectly use the keyword argument 'streaming' instead of 'stream'. To fix
this, replace the 'streaming' keyword argument with 'stream' in both
ChatAnthropic and ChatGoogleGenerativeAI model builder calls to avoid the
TypeError at runtime.
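When the accepted keyword name is uncertain across library versions, one defensive option is to inspect the constructor signature instead of hard-coding `stream` or `streaming`. This is a generic sketch using stand-in classes, not the real chat-model classes; pydantic-based models may need their field definitions checked instead of `inspect.signature`:

```python
import inspect

# Stand-ins for the real chat-model classes; only the kwarg name differs.
class StreamModel:
    def __init__(self, model, stream=False):
        self.model, self.stream = model, stream

class StreamingModel:
    def __init__(self, model, streaming=False):
        self.model, self.streaming = model, streaming

def make_model(cls, model, stream_flag):
    # Pick whichever keyword the constructor actually declares.
    params = inspect.signature(cls.__init__).parameters
    key = "stream" if "stream" in params else "streaming"
    return cls(model, **{key: stream_flag})

a = make_model(StreamModel, "claude", True)
b = make_model(StreamingModel, "gemini", True)
```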
Here is a rewritten, **optimized** version of your program, focusing on significant hot spots (from the line profiler) and leveraging locality, reduced allocations, and fast Python idioms. The main bottleneck is **`_find_api_key`**, particularly attribute access and lower-casing/search, and the **dict filtering** at the bottom of `convert_llm`.

Key changes for speed:
- **`_find_api_key`**:
  - Use a **cached set of lower patterns** for fast `in` checks.
  - Scan attributes and look up only the **first** string/SecretStr-valued matching attribute; stop early.
  - Use `model.__dict__` where possible for speed, falling back to `dir()` only if needed; prefer `vars(model)` (essentially `__dict__`) for most models.
  - Minimize repeated operations inside loops.
- **`convert_llm`**:
  - **Precompute** the dict filter set and use **list comprehensions** (Python 3.7+ dicts preserve order and are fast).
  - Inline all known one-time representatives outside repeated control flow.
  - Only get the dict once.

**Summary of speed improvements:**
- Use `vars(model)`/`.__dict__` directly if available: faster than `dir()` and less work.
- Inline the filter for the key in the attribute name, avoiding an unnecessary generator.
- Attribute access and string lowercasing occur only once per attribute.
- `convert_llm` dict filtering is now single-pass.
- **Overall effect**: dramatically reduces function-call count, attribute access, and per-item Python overhead in both hot spots.

If you want even further micro-optimization for `_find_api_key`, you may also break on the first found attribute whose value is not `None`, rather than searching all attributes, but normally there is just one such key so this won't matter for correctness or speed.
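The `_find_api_key` strategy described above can be sketched as follows. The pattern set, function name, and dummy class are illustrative, not Codeflash's actual output:

```python
# Cached lowercase patterns: frozenset membership is an O(1) check per name.
_KEY_PATTERNS = frozenset({"api_key", "openai_api_key", "anthropic_api_key",
                           "google_api_key", "token"})

def find_api_key(model):
    # Prefer vars(model)/__dict__ (cheap); fall back to dir() only when absent.
    attrs = getattr(model, "__dict__", None)
    names = attrs.keys() if attrs else dir(model)
    for name in names:
        # Lowercase each attribute name exactly once, then test membership.
        if name.lower() in _KEY_PATTERNS:
            value = getattr(model, name, None)
            if isinstance(value, str) and value:
                return value  # stop at the first non-empty string match
    return None

class _DummyModel:
    def __init__(self):
        self.openai_api_key = "sk-test"
        self.other = 1

found = find_api_key(_DummyModel())
```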
⚡️ Codeflash found optimizations for this PR: 163% (1.63x) speedup for
Summary by CodeRabbit

- New Features
- Bug Fixes
- Refactor
- Style
- Tests
- Chores