
Conversation

@ogabrielluiz
Contributor

@ogabrielluiz ogabrielluiz commented Jun 9, 2025

Summary by CodeRabbit

  • New Features

    • Improved support for streaming and iterable text inputs in chat models.
    • Added a method to handle concatenation of content from iterators in messages.
  • Bug Fixes

    • Refined parameter handling for OpenAI chat models, ensuring correct inclusion of temperature and seed settings based on model type.
  • Refactor

    • Simplified and clarified logic for parameter construction in OpenAI model components across multiple starter projects.
    • Removed unnecessary checks for streaming sequence in graph logic.
  • Tests

    • Introduced comprehensive unit and integration tests for the OpenAI model component, covering parameter handling, error cases, and configuration updates.


@dosubot dosubot bot added the size:L label (This PR changes 100-499 lines, ignoring generated files.) Jun 9, 2025
@coderabbitai
Contributor

coderabbitai bot commented Jun 9, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This update refactors how OpenAI model parameters are constructed across multiple components and starter project templates, ensuring temperature and seed are only included for non-reasoning models. It also adds support for iterable message content in chat handling, introduces a helper method for consuming iterators in messages, removes a streaming assertion from the graph logic, and adds comprehensive unit tests for OpenAI model configuration.
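The resulting pattern is small enough to show in miniature. A minimal sketch of the conditional construction described above (build_parameters is a hypothetical standalone helper; in the PR the logic lives inline in OpenAIModelComponent.build_model, and the real OPENAI_REASONING_MODEL_NAMES constant comes from langflow.base.models.openai_constants):

OPENAI_REASONING_MODEL_NAMES = {"o1"}  # illustrative subset, not the real constant

def build_parameters(model_name: str, temperature: float | None, seed: int) -> dict:
    # Always-valid parameters go in first; sampling parameters are added
    # only for models that accept them.
    parameters = {"model_name": model_name}
    if model_name not in OPENAI_REASONING_MODEL_NAMES:
        parameters["temperature"] = temperature if temperature is not None else 0.1
        parameters["seed"] = seed
    return parameters

assert "temperature" not in build_parameters("o1", 0.5, 1)
assert build_parameters("gpt-4.1-nano", None, 1)["temperature"] == 0.1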

Changes

  • src/backend/base/langflow/base/models/model.py, src/backend/base/langflow/schema/message.py: Expanded _get_chat_result to accept iterators as input and added a consume_iterator method to the Message class for concatenating content from iterators.
  • src/backend/base/langflow/components/languagemodels/openai_chat_model.py: Refactored parameter dictionary construction in the OpenAI model component to only add temperature and seed for non-reasoning models.
  • src/backend/base/langflow/graph/graph/base.py: Removed the assert_streaming_sequence method and its invocation in graph building logic.
  • src/backend/base/langflow/initial_setup/starter_projects/... (all starter project JSONs): Updated the OpenAIModelComponent code in all starter projects to conditionally add temperature and seed only for non-reasoning models, and renamed the API base URL parameter key. No changes to class/method signatures, only internal logic.
  • src/backend/tests/unit/components/languagemodels/test_openai_model.py: Added a comprehensive test suite for OpenAIModelComponent, covering parameter construction, JSON mode, error handling, and build config updates.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant OpenAIModelComponent
    participant ChatOpenAI

    User->>OpenAIModelComponent: build_model()
    OpenAIModelComponent->>OpenAIModelComponent: Construct parameters dict
    alt model is not reasoning model
        OpenAIModelComponent->>OpenAIModelComponent: Add temperature and seed
    end
    OpenAIModelComponent->>ChatOpenAI: Instantiate with parameters
    ChatOpenAI-->>OpenAIModelComponent: Model instance
    OpenAIModelComponent-->>User: Return model
sequenceDiagram
    participant External
    participant LCModelComponent
    participant Message

    External->>LCModelComponent: _get_chat_result(input_value)
    alt input_value is Message and input_value.text is iterator
        LCModelComponent->>Message: consume_iterator(input_value.text)
        Message-->>LCModelComponent: Concatenated string
        LCModelComponent->>LCModelComponent: Replace input_value.text with string
    end
    LCModelComponent-->>External: Chat result

Suggested labels

bug, lgtm, fix for release


@coderabbitai coderabbitai bot changed the title from "@coderabbitai" to "refactor(openai): update model parameter handling and add iterable message support" Jun 9, 2025
@github-actions github-actions bot added the refactor label (Maintenance tasks and housekeeping) Jun 9, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🔭 Outside diff range comments (7)
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)

741-843: ⚠️ Potential issue

Streaming support dropped inadvertently
The stream input is declared in inputs but is never added to the parameters dict passed into ChatOpenAI, effectively disabling streaming. Please include:

 parameters = {
     …
+    "stream": self.stream,
 }

so that the stream flag is honored.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

847-901: ⚠️ Potential issue

Critical: Fix update_build_config field mismatch
The update_build_config hook checks for field_name == "base_url", but the actual input is named openai_api_base. As a result, the logic to hide/show temperature and seed will never trigger. Update the condition to use "openai_api_base" (or include both keys).

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)

328-350: ⚠️ Potential issue

Fix visibility toggling in update_build_config
update_build_config checks for field_name == "base_url", but the input is still named "openai_api_base". As a result, hiding/showing temperature and seed for reasoning models won’t work. Adjust the condition to include "openai_api_base" or rename the input field to "base_url".

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)

1064-1132: ⚠️ Potential issue

Mismatch between input name and update_build_config field_name checks.
The update_build_config method is looking for "base_url" in field_name, but the component input remains named "openai_api_base". This prevents the UI from correctly hiding/showing the temperature and seed fields.
Change both occurrences of "base_url" to "openai_api_base" in the field_name checks.

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)

1916-1970: 🛠️ Refactor suggestion

Inconsistent input naming in update_build_config method
You updated the build_model method to pass "base_url" to ChatOpenAI, but the visibility toggles in update_build_config still check for a field named "base_url" instead of the actual input name "openai_api_base". As a result, showing/hiding of temperature and seed won’t trigger correctly when the API base changes.

Apply this diff inside the code string for update_build_config:

-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
             build_config["temperature"]["show"] = False
             build_config["seed"]["show"] = False

-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
             build_config["temperature"]["show"] = True
             build_config["seed"]["show"] = True
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)

972-1040: 🛠️ Refactor suggestion

⚠️ Potential issue

Fix toggling logic in update_build_config
The condition field_name in {"base_url", "model_name", "api_key"} won’t catch changes to the openai_api_base input, so temperature/seed toggles never hide/show correctly. Update this to reference "openai_api_base" (or rename the input to base_url) for consistency.

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)

2921-2940: 🛠️ Refactor suggestion

⚠️ Potential issue

Restrict toggle to model_name and correct field names in update_build_config.

The existing checks against {"base_url", "model_name", "api_key"} won’t fire for the openai_api_base input (named "openai_api_base" in the JSON) and erroneously respond to api_key changes. The toggle should only happen when the model_name input changes. Please apply this refactor:

-    def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
-            build_config["temperature"]["show"] = False
-            build_config["seed"]["show"] = False
-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
-            build_config["temperature"]["show"] = True
-            build_config["seed"]["show"] = True
-        return build_config
+    def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
+        # Only toggle when the model_name input changes
+        if field_name == "model_name":
+            is_reasoning = field_value in OPENAI_REASONING_MODEL_NAMES
+            build_config["temperature"]["show"] = not is_reasoning
+            build_config["seed"]["show"] = not is_reasoning
+        return build_config
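For illustration, the refactored toggle would behave like this (hypothetical snippet; assumes an OpenAIModelComponent instance named component and that "o1" is in OPENAI_REASONING_MODEL_NAMES):

build_config = {"temperature": {"show": True}, "seed": {"show": True}}

# Selecting a reasoning model hides both fields...
component.update_build_config(build_config, field_value="o1", field_name="model_name")
assert build_config["temperature"]["show"] is False

# ...and selecting a standard model shows them again.
component.update_build_config(build_config, field_value="gpt-4.1-nano", field_name="model_name")
assert build_config["seed"]["show"] is True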
♻️ Duplicate comments (4)
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)

1889-2039: The same missing stream parameter and update_build_config base_url mismatch apply to this repeated OpenAIModelComponent block.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

2831-2900: [same code block repeated for the second OpenAIModel node; see previous comment]

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (2)

1288-1288: Duplicate of above change. The same conditional parameter logic is applied here.


1835-1835: Duplicate of above change. The same conditional parameter logic is applied here.

🧹 Nitpick comments (5)
src/backend/base/langflow/components/languagemodels/openai_chat_model.py (1)

111-113: LGTM! Cleaner logic for reasoning model parameter handling.

The refactored approach of conditionally adding temperature and seed only for non-reasoning models is much cleaner than the previous logic of adding them first and then removing them. This makes the code more intuitive and maintainable.

Consider simplifying the temperature assignment:

-            parameters["temperature"] = self.temperature if self.temperature is not None else 0.1
+            parameters["temperature"] = self.temperature or 0.1

However, keep the current logic if self.temperature can be 0 and should be preserved as a valid value.
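The difference is easy to demonstrate in plain Python, independent of the component:

temperature = 0.0  # a deliberate, valid setting
print(temperature or 0.1)                                # 0.1 -- zero is falsy, so the default wins
print(temperature if temperature is not None else 0.1)   # 0.0 -- zero is preserved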

src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)

783-801: Refine dynamic config toggles
In update_build_config, the condition checks field_name in {"base_url", "model_name", "api_key"} and treats "base_url" like a model selector. Since base_url is an API endpoint (not a model name), it can never match a reasoning model name and only adds confusion. I suggest removing "base_url" from the set and toggling visibility based only on model_name.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

847-901: Optional: Propagate the stream input to the model
You define a stream input but never forward it to ChatOpenAI. To honor the user’s streaming toggle, add something like:

 parameters = {
   # existing entries...
+  "streaming": self.stream,
 }
 output = ChatOpenAI(**parameters)
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1)

2491-2530: Field name mismatch in update_build_config
The update_build_config method checks for "base_url", but the input field is defined as "openai_api_base". Update the condition to reference "openai_api_base" or remove the redundant check to ensure the visibility toggles for temperature and seed fire correctly.

-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
src/backend/tests/unit/components/languagemodels/test_openai_model.py (1)

173-216: Integration tests have mixed mocking approach.

While these tests are marked as integration tests and use the @pytest.mark.api_key_required decorator, they still heavily mock the ChatOpenAI constructor. Consider whether these should be:

  1. True integration tests: Remove mocking and test against actual OpenAI API (requires API key)
  2. Enhanced unit tests: Remove the integration test naming and API key requirement

The current approach is somewhat contradictory: it requires an API key but then mocks the main class under test.

For true integration testing:

@pytest.mark.api_key_required
def test_build_model_integration_real(self):
    component = OpenAIModelComponent()
    component.api_key = os.getenv("OPENAI_API_KEY")
    component.model_name = "gpt-4.1-nano"
    # ... set other parameters

    model = component.build_model()
    assert isinstance(model, ChatOpenAI)
    # Test actual model call if needed
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed809d7 and fca56ec.

📒 Files selected for processing (22)
  • src/backend/base/langflow/base/models/model.py (3 hunks)
  • src/backend/base/langflow/components/languagemodels/openai_chat_model.py (1 hunks)
  • src/backend/base/langflow/graph/graph/base.py (0 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
  • src/backend/base/langflow/schema/message.py (1 hunks)
  • src/backend/tests/unit/components/languagemodels/test_openai_model.py (1 hunks)
💤 Files with no reviewable changes (1)
  • src/backend/base/langflow/graph/graph/base.py
🧰 Additional context used
🪛 Pylint (3.3.7)
src/backend/tests/unit/components/languagemodels/test_openai_model.py

[error] 6-6: No name 'components' in module 'langflow'

(E0611)


[refactor] 149-149: Too few public methods (0/2)

(R0903)

⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Optimize new Python code in this PR
  • GitHub Check: Update Starter Projects
  • GitHub Check: Ruff Style Check (3.13)
  • GitHub Check: Run Ruff Check and Format
🔇 Additional comments (28)
src/backend/base/langflow/components/languagemodels/openai_chat_model.py (1)

105-105: LGTM! Parameter name alignment with ChatOpenAI constructor.

The change from "openai_api_base" to "base_url" correctly aligns with the expected parameter name for the ChatOpenAI constructor.

src/backend/base/langflow/base/models/model.py (2)

5-5: Import addition looks correct.

The addition of AsyncIterator and Iterator imports is appropriate for the expanded type support.


195-195: Type annotation expansion is well-designed.

Expanding the input_value parameter to accept AsyncIterator | Iterator enables support for streaming content, which aligns with the broader iterator support mentioned in the AI summary.

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)

871-871: OpenAI model parameter logic is correctly implemented.

The embedded Python code in the JSON configuration properly implements the conditional parameter inclusion:

  1. Correct parameter renaming: "base_url": self.openai_api_base replaces the old openai_api_base key
  2. Proper conditional logic: Temperature and seed are only added when self.model_name not in OPENAI_REASONING_MODEL_NAMES
  3. Fallback temperature: Uses self.temperature if self.temperature is not None else 0.1 as a sensible default

This ensures reasoning models (like o1) don't receive temperature/seed parameters which they don't support, while maintaining backward compatibility for standard models.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

847-901: Verify correct base URL parameter for ChatOpenAI
The code now uses base_url in the parameters dict—confirm that ChatOpenAI expects a base_url argument and not still openai_api_base. Adjust the key if necessary.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3)

1370-1370: Use base_url and conditional temperature/seed parameters
Renaming the "openai_api_base" key to "base_url" and wrapping the addition of temperature and seed inside the if self.model_name not in OPENAI_REASONING_MODEL_NAMES block brings this starter template in line with the core OpenAIModelComponent refactor.


1763-1763: Apply core refactor to second template instance
This snippet correctly mirrors the global change: using "base_url" and only setting temperature/seed for non-reasoning models, ensuring consistency across multiple starter flows.


2156-2156: Ensure third template is consistent
The third occurrence now also uses base_url and conditionally includes temperature and seed, matching the updated logic in the shared OpenAIModelComponent.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

2488-2561: Build Model: rename openai_api_base → base_url and conditional temp/seed injection
The updated build_model method now sets "base_url" instead of "openai_api_base" and only adds "temperature" and "seed" when the selected model is not in OPENAI_REASONING_MODEL_NAMES, eliminating the need to pop them later. This simplifies the logic and aligns with the ChatOpenAI constructor’s expected arguments.
Please verify:

  • That ChatOpenAI accepts base_url as the endpoint key (not openai_api_base).
  • That update_build_config still correctly toggles the visibility of "temperature" and "seed" when model_name changes.
src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (2)

233-327: Verify ChatOpenAI parameter rename
The build_model method maps openai_api_base to a new key "base_url". Please confirm that ChatOpenAI (from langchain_openai) accepts base_url instead of openai_api_base; otherwise, instantiation will fail.


233-327: Conditional inclusion of temperature and seed
The inversion now only adds temperature and seed when model_name is not in OPENAI_REASONING_MODEL_NAMES. This aligns with the intended behavior of omitting sampling parameters for reasoning models.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)

1064-1105: Confirm ChatOpenAI constructor parameter name.
You’re now passing base_url into ChatOpenAI. Verify that the langchain_openai.ChatOpenAI constructor accepts base_url (not openai_api_base). If it still expects openai_api_base, update the key accordingly.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)

988-1038: Correctly condition parameters for reasoning models and rename base URL key.

The updated build_model now initializes "base_url" instead of "openai_api_base" and only injects temperature and seed for non-reasoning models, streamlining the logic and avoiding unnecessary pops.

src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1)

976-1045: Consistent refactor: conditional temperature/seed and base_url rename.

This build_model matches the Basic Prompting template—omitting "temperature" and "seed" by default, then adding them only for non-reasoning models, and using "base_url" instead of "openai_api_base".

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)

1916-1950: Conditional inclusion of temperature and seed looks correct
The new logic cleanly excludes these params by default and only injects them for non-reasoning models, aligning with the refactoring objective.

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)

2420-2490: Conditional parameters logic is correct
The build_model method now only injects temperature and seed when the chosen model is not in OPENAI_REASONING_MODEL_NAMES, which aligns with the PR objective and removes the need for manual key removal.


2420-2490: Verify that ChatOpenAI accepts base_url
You renamed the parameter key from openai_api_base to base_url. Confirm that the ChatOpenAI constructor supports base_url; if it expects openai_api_base, revert or alias accordingly to avoid runtime errors.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)

1092-1092: LGTM! Improved parameter handling for reasoning models.

The embedded OpenAI model component code correctly implements the new conditional parameter logic. The build_model method now only includes temperature and seed parameters for non-reasoning models, which aligns with OpenAI's API requirements for reasoning models like o1.

Key improvements observed:

  • Clean conditional logic: if self.model_name not in OPENAI_REASONING_MODEL_NAMES:
  • Correct parameter key mapping: "base_url": self.openai_api_base
  • Proper handling of parameter exclusion for reasoning models
src/backend/tests/unit/components/languagemodels/test_openai_model.py (7)

12-34: Well-structured pytest fixtures.

The fixtures provide comprehensive test setup with appropriate default parameters for testing various scenarios. The default_kwargs fixture covers all necessary OpenAI model parameters.


36-54: Comprehensive test for standard model building.

The test correctly verifies that standard models (like gpt-4.1-nano) receive all parameters including temperature and seed. The mocking strategy and parameter assertions are thorough and appropriate.


56-79: Critical test for reasoning model parameter exclusion.

This test is essential for validating the new conditional parameter logic. It correctly verifies that reasoning models (like "o1") exclude temperature and seed parameters, which aligns with OpenAI's API requirements. The explicit parameter checking in lines 77-79 adds valuable verification.


81-94: Proper JSON mode testing.

The test correctly validates JSON mode functionality by verifying that the model's bind method is called with the expected response format {"type": "json_object"}. The mocking strategy appropriately handles the bound instance.


96-118: Good edge case coverage for None/zero value handling.

These tests ensure proper parameter transformation:

  • None API key is correctly passed through
  • Zero max_tokens is appropriately converted to None

This validates that the component handles edge cases correctly before passing parameters to the underlying ChatOpenAI class.
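Both conversions reduce to expressions visible in the embedded build_model code; a sketch with plain locals standing in for the component attributes:

from pydantic.v1 import SecretStr

api_key = None
max_tokens = 0
parameters = {
    # SecretStr(...).get_secret_value() only runs when a key is present
    "api_key": SecretStr(api_key).get_secret_value() if api_key else None,
    # 0 is falsy, so "unlimited" (0) becomes None before reaching ChatOpenAI
    "max_tokens": max_tokens or None,
}
assert parameters == {"api_key": None, "max_tokens": None}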


156-171: Important test for dynamic UI configuration.

This test validates the update_build_config method which controls the visibility of temperature and seed parameters in the UI based on the selected model type. This ensures a good user experience by hiding inappropriate parameters for reasoning models.


6-6:


❌ Incorrect review comment

Verify the import path for OpenAIModelComponent.

The static analysis tool flagged that there's no 'components' module in 'langflow'. Please verify the correct import path for the OpenAIModelComponent.


🏁 Script executed:

#!/bin/bash
# Check for the correct import path of OpenAIModelComponent
echo "Searching for OpenAIModelComponent class definition..."
rg -l "class OpenAIModelComponent" --type py

echo -e "\nSearching for openai_chat_model module..."
fd "openai_chat_model.py" --type f

echo -e "\nChecking langflow directory structure..."
fd -t d -d 3 components src/backend/base/langflow/

Length of output: 633


No change needed for the import path

I’ve verified that OpenAIModelComponent is defined in
src/backend/base/langflow/components/languagemodels/openai_chat_model.py
which corresponds exactly to the import

from langflow.components.languagemodels.openai_chat_model import OpenAIModelComponent

The module structure is correct and no update is required.

Likely an incorrect or invalid review comment.

🧰 Tools
🪛 Pylint (3.3.7)

[error] 6-6: No name 'components' in module 'langflow'

(E0611)

src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)

972-1040:


❓ Verification inconclusive

Ensure ChatOpenAI supports the base_url parameter
You’ve replaced the openai_api_base key with base_url in build_model. Confirm that the ChatOpenAI constructor accepts base_url (and not openai_api_base) to prevent runtime errors.

Run this quick check:


🏁 Script executed:

#!/bin/bash
python - << 'EOF'
import inspect
from langchain_openai import ChatOpenAI
print(inspect.signature(ChatOpenAI.__init__))
EOF

Length of output: 139


🏁 Script executed:

#!/bin/bash
# Locate and inspect any local definition of ChatOpenAI
FILES=$(rg -l 'class ChatOpenAI' --glob '*.py')
if [ -z "$FILES" ]; then
  echo "No ChatOpenAI class found in repository"
  exit 0
fi
for f in $FILES; do
    echo "=== Inspecting $f ==="
    awk '/class ChatOpenAI/,/^\s*def /' "$f"
    awk '/def __init__/,/\)/' "$f"
done

Length of output: 164


Verify ChatOpenAI base_url parameter
You’ve replaced the openai_api_base key with base_url in build_model. Please confirm that the external ChatOpenAI constructor accepts a base_url argument (and not openai_api_base) to avoid runtime errors. If it doesn’t, revert to using openai_api_base or update the call accordingly.

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)

2894-2920: Simplified parameter inclusion logic in build_model looks correct.

The new conditional that only adds "temperature" and "seed" when model_name is not in OPENAI_REASONING_MODEL_NAMES is clearer than populating them unconditionally and then popping keys. It reduces complexity and preserves intended defaults.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)

935-935: Clean up conditional parameter inclusion in build_model.

Switching to only add temperature and seed when model_name is not in OPENAI_REASONING_MODEL_NAMES clarifies intent and removes the previous add-then-pop pattern. This change simplifies the parameter dict and is correct.

Comment on lines 234 to 240
def consume_iterator(self, iterator: Iterator):
    complete_message = ""
    for item in iterator:
        complete_message += item.content  # type: ignore[attr-defined]
    return complete_message

🛠️ Refactor suggestion

Improve type safety, performance, and async support for the consume_iterator method.

The method has several issues that should be addressed:

  1. Missing async support: The text field supports AsyncIterator, but this method only handles Iterator
  2. Type safety: Assumes items have a content attribute without proper validation
  3. Performance: String concatenation in a loop is inefficient for large iterators
  4. Missing return type annotation
-    def consume_iterator(self, iterator: Iterator):
-        complete_message = ""
-        for item in iterator:
-            complete_message += item.content  # type: ignore[attr-defined]
-        return complete_message
+    def consume_iterator(self, iterator: Iterator) -> str:
+        """Consume an iterator and concatenate content from all items."""
+        content_parts = []
+        for item in iterator:
+            if hasattr(item, 'content'):
+                content_parts.append(str(item.content))
+            else:
+                content_parts.append(str(item))
+        return "".join(content_parts)
+
+    async def consume_async_iterator(self, iterator: AsyncIterator) -> str:
+        """Consume an async iterator and concatenate content from all items."""
+        content_parts = []
+        async for item in iterator:
+            if hasattr(item, 'content'):
+                content_parts.append(str(item.content))
+            else:
+                content_parts.append(str(item))
+        return "".join(content_parts)
📝 Committable suggestion


Suggested change
def consume_iterator(self, iterator: Iterator):
    complete_message = ""
    for item in iterator:
        complete_message += item.content  # type: ignore[attr-defined]
    return complete_message
def consume_iterator(self, iterator: Iterator) -> str:
    """Consume an iterator and concatenate content from all items."""
    content_parts = []
    for item in iterator:
        if hasattr(item, 'content'):
            content_parts.append(str(item.content))
        else:
            content_parts.append(str(item))
    return "".join(content_parts)

async def consume_async_iterator(self, iterator: AsyncIterator) -> str:
    """Consume an async iterator and concatenate content from all items."""
    content_parts = []
    async for item in iterator:
        if hasattr(item, 'content'):
            content_parts.append(str(item.content))
        else:
            content_parts.append(str(item))
    return "".join(content_parts)
🤖 Prompt for AI Agents
In src/backend/base/langflow/schema/message.py around lines 234 to 238, update
the consume_iterator method to support both Iterator and AsyncIterator by making
it async and using async iteration when needed. Add proper type checks to ensure
each item has a content attribute before accessing it. Replace string
concatenation in the loop with collecting contents in a list and joining them at
the end for better performance. Also, add an explicit return type annotation
indicating the method returns a string.

Comment on lines 205 to 206
if not isinstance(input_value.text, str):
    input_value.text = input_value.consume_iterator(input_value.text)

💡 Verification agent

🧩 Analysis chain

Verify the consume_iterator method exists.

The code assumes input_value.consume_iterator() method exists on the Message class. Please ensure this method is properly implemented and handles both AsyncIterator and Iterator types correctly.


🏁 Script executed:

#!/bin/bash
# Description: Check if consume_iterator method exists in Message class
# Expected: Find the method implementation in the Message class

rg -A 10 "def consume_iterator" --type py

Length of output: 848


Confirm consume_iterator implementation and add AsyncIterator support

The Message class in src/backend/base/langflow/schema/message.py does define:

def consume_iterator(self, iterator: Iterator):
    complete_message = ""
    for item in iterator:
        complete_message += item.content  # type: ignore[attr-defined]
    return complete_message

However, this only handles synchronous Iterator. If input_value.text can be an AsyncIterator, calling this method will fail at runtime. Please update accordingly:

  • Detect AsyncIterator (e.g., via isinstance(iterator, AsyncIterator)) and consume it with async for.
  • You may implement a separate async def consume_iterator_async or merge both into one method that dispatches based on iterator type.

No missing method definitions were found.

🤖 Prompt for AI Agents
In src/backend/base/langflow/base/models/model.py around lines 205 to 206, the
consume_iterator method on the Message class only supports synchronous Iterator
and does not handle AsyncIterator, which can cause runtime failures. Update the
Message class in src/backend/base/langflow/schema/message.py to detect if the
input is an AsyncIterator and consume it using an async for loop. You can either
create a separate async def consume_iterator_async method for AsyncIterator or
merge both sync and async consumption logic into one method that dispatches
based on the iterator type. Ensure the calling code handles awaiting the async
method if needed.
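One possible shape for that dispatch (a sketch only; consume_both is a hypothetical name, and items are assumed to expose .content or be plain strings):

from collections.abc import AsyncIterator, Iterator

async def consume_both(iterator: Iterator | AsyncIterator) -> str:
    """Concatenate item content from either a sync or an async iterator."""
    parts: list[str] = []
    if isinstance(iterator, AsyncIterator):
        async for item in iterator:
            parts.append(str(getattr(item, "content", item)))
    else:
        for item in iterator:
            parts.append(str(getattr(item, "content", item)))
    return "".join(parts)

# Callers must await it in both cases, e.g.:
#     input_value.text = await consume_both(input_value.text)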

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"seed\": self.seed,\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n \"temperature\": self.temperature if self.temperature is not None else 0.1,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name in OPENAI_REASONING_MODEL_NAMES:\n logger.info(\"Getting reasoning model parameters\")\n parameters.pop(\"temperature\")\n parameters.pop(\"seed\")\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def 
_get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e (Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name not in OPENAI_REASONING_MODEL_NAMES:\n parameters[\"temperature\"] = self.temperature if self.temperature is not None else 0.1\n parameters[\"seed\"] = self.seed\n\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def _get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e 
(Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n"

⚠️ Potential issue

Inverted temperature and seed inclusion logic is correct, but update_build_config targets the wrong field name

The new build_model block correctly adds "temperature" and "seed" only when model_name is not in OPENAI_REASONING_MODEL_NAMES. However, the subsequent update_build_config uses "base_url"—which never matches the actual input name (openai_api_base)—so UI toggles for temperature/seed won’t trigger as intended.

Suggested patch inside the embedded code string:

-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:

-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
📝 Committable suggestion


Suggested change
"value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name not in OPENAI_REASONING_MODEL_NAMES:\n parameters[\"temperature\"] = self.temperature if self.temperature is not None else 0.1\n parameters[\"seed\"] = self.seed\n\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def _get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e 
(Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n"
def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
- if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+ if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
build_config["temperature"]["show"] = False
build_config["seed"]["show"] = False
- if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
+ if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
build_config["temperature"]["show"] = True
build_config["seed"]["show"] = True
return build_config
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json at
line 1311, the update_build_config method incorrectly checks for the field name
"base_url" instead of "openai_api_base", which prevents the UI toggles for
temperature and seed from working properly. To fix this, replace all occurrences
of "base_url" with "openai_api_base" in the update_build_config method's
field_name checks so that the toggling logic triggers correctly based on the
actual input field name.

Comment on lines +120 to +154
async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
    component_class(**default_kwargs)

    # Create a mock BadRequestError with a body attribute
    mock_error = MagicMock()
    mock_error.body = {"message": "test error message"}

    # Test the method directly by patching the import
    with patch("openai.BadRequestError", mock_error.__class__):
        # Manually call isinstance to avoid mocking it
        if hasattr(mock_error, "body"):
            message = mock_error.body.get("message")
            assert message == "test error message"

async def test_get_exception_message_no_openai_import(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Test when openai module is not available
    with patch.dict("sys.modules", {"openai": None}), patch("builtins.__import__", side_effect=ImportError):
        message = component._get_exception_message(Exception("test"))
        assert message is None

async def test_get_exception_message_other_exception(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Create a regular exception (not BadRequestError)
    regular_exception = ValueError("test error")

    # Create a simple mock for BadRequestError that the exception won't match
    class MockBadRequestError:
        pass

    with patch("openai.BadRequestError", MockBadRequestError):
        message = component._get_exception_message(regular_exception)
        assert message is None

🛠️ Refactor suggestion

Exception handling tests need refinement.

While the exception handling coverage is good, the test implementations could be improved:

  1. Lines 120-132: The test never actually calls component._get_exception_message(), making it incomplete.
  2. Lines 134-154: The mocking strategy is appropriate for testing import failures and different exception types.

Consider refactoring the first test to actually test the component method:

 async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
-    component_class(**default_kwargs)
+    component = component_class(**default_kwargs)
 
     # Create a mock BadRequestError with a body attribute
     mock_error = MagicMock()
     mock_error.body = {"message": "test error message"}
 
-    # Test the method directly by patching the import
-    with patch("openai.BadRequestError", mock_error.__class__):
-        # Manually call isinstance to avoid mocking it
-        if hasattr(mock_error, "body"):
-            message = mock_error.body.get("message")
-            assert message == "test error message"
+    with patch("openai.BadRequestError", type(mock_error)):
+        message = component._get_exception_message(mock_error)
+        assert message == "test error message"
📝 Committable suggestion


Suggested change
async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
    component_class(**default_kwargs)

    # Create a mock BadRequestError with a body attribute
    mock_error = MagicMock()
    mock_error.body = {"message": "test error message"}

    # Test the method directly by patching the import
    with patch("openai.BadRequestError", mock_error.__class__):
        # Manually call isinstance to avoid mocking it
        if hasattr(mock_error, "body"):
            message = mock_error.body.get("message")
            assert message == "test error message"

async def test_get_exception_message_no_openai_import(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Test when openai module is not available
    with patch.dict("sys.modules", {"openai": None}), patch("builtins.__import__", side_effect=ImportError):
        message = component._get_exception_message(Exception("test"))
        assert message is None

async def test_get_exception_message_other_exception(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Create a regular exception (not BadRequestError)
    regular_exception = ValueError("test error")

    # Create a simple mock for BadRequestError that the exception won't match
    class MockBadRequestError:
        pass

    with patch("openai.BadRequestError", MockBadRequestError):
        message = component._get_exception_message(regular_exception)
        assert message is None
async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Create a mock BadRequestError with a body attribute
    mock_error = MagicMock()
    mock_error.body = {"message": "test error message"}

    with patch("openai.BadRequestError", type(mock_error)):
        message = component._get_exception_message(mock_error)
        assert message == "test error message"
🧰 Tools
🪛 Pylint (3.3.7)

[refactor] 149-149: Too few public methods (0/2)

(R0903)

🤖 Prompt for AI Agents
In src/backend/tests/unit/components/languagemodels/test_openai_model.py between
lines 120 and 132, the test_get_exception_message_bad_request_error function
does not call the component's _get_exception_message method, so it does not
fully test the intended behavior. Refactor this test to instantiate the
component, create a mock BadRequestError with a body containing a message, patch
openai.BadRequestError with this mock class, and then call
component._get_exception_message with the mock error to assert the returned
message matches the expected error message.

"title_case": false,
"type": "code",
"value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"seed\": self.seed,\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n \"temperature\": self.temperature if self.temperature is not None else 0.1,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name in OPENAI_REASONING_MODEL_NAMES:\n logger.info(\"Getting reasoning model parameters\")\n parameters.pop(\"temperature\")\n parameters.pop(\"seed\")\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def 
_get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e (Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n"
"value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name not in OPENAI_REASONING_MODEL_NAMES:\n parameters[\"temperature\"] = self.temperature if self.temperature is not None else 0.1\n parameters[\"seed\"] = self.seed\n\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def _get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e 
(Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n"
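
Distilled out of the component, the new gating is simply opt-in construction of the sampling parameters. A minimal, self-contained sketch, where REASONING_MODELS is a placeholder standing in for the real OPENAI_REASONING_MODEL_NAMES constant:

    REASONING_MODELS = ["o1", "o1-mini"]  # placeholder values for illustration

    def build_parameters(model_name: str, temperature: float | None, seed: int) -> dict:
        parameters = {"model_name": model_name}
        # Sampling controls are added only for models that accept them.
        if model_name not in REASONING_MODELS:
            parameters["temperature"] = temperature if temperature is not None else 0.1
            parameters["seed"] = seed
        return parameters

    assert "temperature" not in build_parameters("o1", 0.2, 1)
    assert build_parameters("gpt-4o", None, 1)["temperature"] == 0.1

Adding the keys conditionally, rather than adding them unconditionally and popping them afterwards, keeps the base dict valid for every model and makes the reasoning-model exception explicit at the single point where it matters.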
Copy link
Contributor

⚠️ Potential issue

Mismatch in field naming for conditional visibility.

The build_model implementation correctly uses the base_url key for OpenAI, but the update_build_config method checks for field_name == "base_url" while the input is still named openai_api_base. This prevents the intended toggling of the temperature and seed inputs.
Action: align the names by either renaming the input to base_url or updating the conditional to check "openai_api_base".

🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Instagram
Copywriter.json at line 2913, the update_build_config method checks for the
field_name "base_url" to toggle visibility of temperature and seed inputs, but
the actual input is named "openai_api_base". To fix this, update the conditional
checks in update_build_config to use "openai_api_base" instead of "base_url" so
the visibility toggling works correctly.
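
For concreteness, a minimal sketch of the aligned conditional, assuming the fix keeps the current input name openai_api_base (renaming the input itself to base_url is the equally valid alternative):

    def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
        # Check the real input name ("openai_api_base"), not the internal
        # "base_url" key that build_model passes to ChatOpenAI.
        if field_name in {"openai_api_base", "model_name", "api_key"}:
            if field_value in OPENAI_REASONING_MODEL_NAMES:
                # Reasoning models reject sampling controls, so hide them.
                build_config["temperature"]["show"] = False
                build_config["seed"]["show"] = False
            elif field_value in OPENAI_MODEL_NAMES:
                build_config["temperature"]["show"] = True
                build_config["seed"]["show"] = True
        return build_config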

@ogabrielluiz ogabrielluiz changed the title from "refactor(openai): update model parameter handling and add iterable message support" to "feat(message): support sequencing of multiple streamable models" Jun 9, 2025
@github-actions github-actions bot added enhancement New feature or request and removed refactor Maintenance tasks and housekeeping labels Jun 9, 2025
@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Jun 9, 2025
Copy link
Collaborator

@edwinjosechittilappilly edwinjosechittilappilly left a comment

Should we focus on the language model component instead of the OpenAI component?

What do you think?

@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Jun 11, 2025
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jun 25, 2025
* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.
@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Jun 25, 2025
@ogabrielluiz ogabrielluiz enabled auto-merge June 25, 2025 16:56
@github-actions github-actions bot added enhancement New feature or request and removed enhancement New feature or request labels Jun 25, 2025
@ogabrielluiz ogabrielluiz added this pull request to the merge queue Jun 25, 2025
Merged via the queue into main with commit 633b1e5 Jun 25, 2025
65 checks passed
@ogabrielluiz ogabrielluiz deleted the improve-streaming branch June 25, 2025 17:25
Yukiyukiyeah pushed a commit that referenced this pull request Jun 25, 2025
* feat: update OpenAI model parameters handling for reasoning models

* feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator

* refactor: remove assert_streaming_sequence method and related checks from Graph class

* feat: add consume_iterator method to Message class for handling iterators (a rough sketch of the pattern follows this list)

* test: add unit tests for OpenAIModelComponent functionality and integration

* feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method

* feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text

* feat: add is_connected_to_chat_output method to Component class for improved message handling

* feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration

* refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling

* fix: update import paths for input components in multiple starter project JSON files

* fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes

* refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing

* fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic

* refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling

* refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency

* feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management

* feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration

* feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats

* test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling

* test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models

* fix: reorder JSON properties for consistency in starter projects

* Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability.
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json.

* refactor: simplify input_value type in LCModelComponent

* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.

* fix: clarify comment for handling source in Component class

* refactor: remove unnecessary mocking in OpenAI model integration tests
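
The consume_iterator commit above appears here only as a subject line. As a hypothetical sketch of the pattern the name suggests (draining a sync or async chunk iterator and concatenating its content into the message's text), and not the actual Langflow implementation, which a later commit in this list renames and ultimately removes:

    from collections.abc import AsyncIterator, Iterator

    class Message:
        def __init__(self, text: str = "") -> None:
            self.text = text

        async def consume_iterator(self, chunks: Iterator | AsyncIterator) -> str:
            """Drain chunks (sync or async) and append their content to self.text."""
            if isinstance(chunks, AsyncIterator):
                async for chunk in chunks:
                    # Stream chunks typically carry .content; fall back to str() otherwise.
                    self.text += getattr(chunk, "content", str(chunk))
            else:
                for chunk in chunks:
                    self.text += getattr(chunk, "content", str(chunk))
            return self.text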
2getsandesh pushed a commit to 2getsandesh/langflow-IBM that referenced this pull request Jun 26, 2025

feat(message): support sequencing of multiple streamable models (langflow-ai#8434) (same squash message as above)
github-merge-queue bot pushed a commit that referenced this pull request Jun 27, 2025
…ity attribute (#8667)

* Update styleUtils.ts

* update to prompt component

* update to template

* update to mcp component

* update to smart function

* [autofix.ci] apply automated fixes

* update to templates

* fix sidebar

* change name

* update import

* update import

* update import

* [autofix.ci] apply automated fixes

* fix import

* fix ollama

* fix ruff

* refactor(agent): standardize memory handling and update chat history logic (#8715)

* update chat history

* update to agents

* Update Simple Agent.json

* update to templates

* ruff errors

* Update agent.py

* Update test_agent_component.py

* [autofix.ci] apply automated fixes

* update templates

* test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>

* fix prompt change

* feat(message): support sequencing of multiple streamable models (#8434) (same squash message as listed in full above)

* auto update

* update

* [autofix.ci] apply automated fixes

* fix openai import

* revert template changes

* test fixes

* update templates

* [autofix.ci] apply automated fixes

* fix tests

* fix order

* fix prompts import

* fix frontend tests

* fix frontend

* [autofix.ci] apply automated fixes

* add charmander

* [autofix.ci] apply automated fixes

* fix prompt frontend

* fix frontend

* test fix

* [autofix.ci] apply automated fixes

* change pokedex

* remove pokedex extra

* update template

* name fix

* update template

* mcp test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: cristhianzl <[email protected]>
Co-authored-by: Yuqi Tang <[email protected]>
Co-authored-by: Mike Fortman <[email protected]>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
lucaseduoli pushed a commit that referenced this pull request Jul 1, 2025

…ity attribute (#8667) (same squash message as the github-merge-queue bot commit above)
Khurdhula-Harshavardhan pushed a commit to JigsawStack/langflow that referenced this pull request Jul 1, 2025

feat(message): support sequencing of multiple streamable models (langflow-ai#8434) (same squash message as above)
Khurdhula-Harshavardhan pushed a commit to JigsawStack/langflow that referenced this pull request Jul 1, 2025

…ity attribute (langflow-ai#8667) (same squash message as above)
dev-thiago-oliver pushed a commit to vvidai/langflow that referenced this pull request Jul 5, 2025

feat(message): support sequencing of multiple streamable models (langflow-ai#8434) (same squash message as above)
dev-thiago-oliver pushed a commit to vvidai/langflow that referenced this pull request Jul 5, 2025

…ity attribute (langflow-ai#8667) (same squash message as above)
@dosubot dosubot bot mentioned this pull request Jul 10, 2025

Labels

enhancement (New feature or request)
lgtm (This PR has been approved by a maintainer)
size:XL (This PR changes 500-999 lines, ignoring generated files)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants