feat(message): support sequencing of multiple streamable models #8434
Conversation
Walkthrough

This update refactors how OpenAI model parameters are constructed across multiple components and starter project templates, ensuring that `temperature` and `seed` are only passed to models that support them: both are now added conditionally and omitted for OpenAI reasoning models.
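At the core of the refactor is the switch from an add-then-pop pattern to conditional insertion of the sampling parameters. Condensed from the component code embedded in the file diffs later on this page:

```python
def build_model(self) -> LanguageModel:  # type: ignore[type-var]
    parameters = {
        "api_key": SecretStr(self.api_key).get_secret_value() if self.api_key else None,
        "model_name": self.model_name,
        "max_tokens": self.max_tokens or None,  # 0 means "unlimited" -> None
        "model_kwargs": self.model_kwargs or {},
        "base_url": self.openai_api_base or "https://api.openai.com/v1",
        "max_retries": self.max_retries,
        "timeout": self.timeout,
    }
    # temperature and seed are only valid for non-reasoning models,
    # so they are added conditionally instead of added and then popped
    if self.model_name not in OPENAI_REASONING_MODEL_NAMES:
        parameters["temperature"] = self.temperature if self.temperature is not None else 0.1
        parameters["seed"] = self.seed

    output = ChatOpenAI(**parameters)
    if self.json_mode:
        output = output.bind(response_format={"type": "json_object"})
    return output
```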
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant OpenAIModelComponent
    participant ChatOpenAI
    User->>OpenAIModelComponent: build_model()
    OpenAIModelComponent->>OpenAIModelComponent: Construct parameters dict
    alt model is not reasoning model
        OpenAIModelComponent->>OpenAIModelComponent: Add temperature and seed
    end
    OpenAIModelComponent->>ChatOpenAI: Instantiate with parameters
    ChatOpenAI-->>OpenAIModelComponent: Model instance
    OpenAIModelComponent-->>User: Return model
```
```mermaid
sequenceDiagram
    participant External
    participant LCModelComponent
    participant Message
    External->>LCModelComponent: _get_chat_result(input_value)
    alt input_value is Message and input_value.text is iterator
        LCModelComponent->>Message: consume_iterator(input_value.text)
        Message-->>LCModelComponent: Concatenated string
        LCModelComponent->>LCModelComponent: Replace input_value.text with string
    end
    LCModelComponent-->>External: Chat result
```
Actionable comments posted: 5
🔭 Outside diff range comments (7)
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)
741-843: ⚠️ Potential issue: Streaming support dropped inadvertently

The `stream` input is declared in `inputs` but is never added to the `parameters` dict passed into `ChatOpenAI`, effectively disabling streaming. Please include:

```diff
 parameters = {
     …
+    "stream": self.stream,
 }
```

so that the `stream` flag is honored.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)
847-901: ⚠️ Potential issue: Critical: Fix `update_build_config` field mismatch

The `update_build_config` hook checks for `field_name == "base_url"`, but the actual input is named `openai_api_base`. As a result, the logic to hide/show `temperature` and `seed` will never trigger. Update the condition to use `"openai_api_base"` (or include both keys).

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1)
328-350: ⚠️ Potential issue: Fix visibility toggling in `update_build_config`

`update_build_config` checks for `field_name == "base_url"`, but the input is still named `"openai_api_base"`. As a result, hiding/showing `temperature` and `seed` for reasoning models won't work. Adjust the condition to include `"openai_api_base"` or rename the input field to `"base_url"`.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)
1064-1132: ⚠️ Potential issue: Mismatch between input name and `update_build_config` field_name checks.

The `update_build_config` method is looking for `"base_url"` in `field_name`, but the component input remains named `"openai_api_base"`. This prevents the UI from correctly hiding/showing the `temperature` and `seed` fields. Change both occurrences of `"base_url"` to `"openai_api_base"` in the `field_name` checks.

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)
1916-1970: 🛠️ Refactor suggestion: Inconsistent input naming in update_build_config method

You updated the `build_model` method to emit `"base_url"`, but the visibility toggles in `update_build_config` still check for `"base_url"` instead of the actual input name `"openai_api_base"`. As a result, showing/hiding of `temperature` and `seed` won't trigger correctly when the API base changes. Apply this diff inside the code string for `update_build_config`:

```diff
-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
             build_config["temperature"]["show"] = False
             build_config["seed"]["show"] = False
-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
+        if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
             build_config["temperature"]["show"] = True
             build_config["seed"]["show"] = True
```

src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)
972-1040: 🛠️ Refactor suggestion, ⚠️ Potential issue: Fix toggling logic in `update_build_config`

The condition `field_name in {"base_url", "model_name", "api_key"}` won't catch changes to the `openai_api_base` input, so temperature/seed toggles never hide/show correctly. Update this to reference `"openai_api_base"` (or rename the input to `base_url`) for consistency.

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)
2921-2940: 🛠️ Refactor suggestion, ⚠️ Potential issue: Restrict toggle to `model_name` and correct field names in `update_build_config`.

The existing checks against `{"base_url", "model_name", "api_key"}` won't fire for the `openai_api_base` input (named `"openai_api_base"` in the JSON) and erroneously respond to `api_key` changes. The toggle should only happen when the model_name input changes. Please apply this refactor:

```diff
-    def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
-            build_config["temperature"]["show"] = False
-            build_config["seed"]["show"] = False
-        if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
-            build_config["temperature"]["show"] = True
-            build_config["seed"]["show"] = True
-        return build_config
+    def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
+        # Only toggle when the model_name input changes
+        if field_name == "model_name":
+            is_reasoning = field_value in OPENAI_REASONING_MODEL_NAMES
+            build_config["temperature"]["show"] = not is_reasoning
+            build_config["seed"]["show"] = not is_reasoning
+        return build_config
```
♻️ Duplicate comments (4)
src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)
1889-2039: The same missing `stream` parameter and the `update_build_config` base_url logic apply to this repeated `OpenAIModelComponent` block.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

2831-2900: [same code block repeated for the second OpenAIModel node; see previous comment]

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (2)
1288-1288: Duplicate of above change. The same conditional parameter logic is applied here.
1835-1835: Duplicate of above change. The same conditional parameter logic is applied here.
🧹 Nitpick comments (5)
src/backend/base/langflow/components/languagemodels/openai_chat_model.py (1)
111-113: LGTM! Cleaner logic for reasoning model parameter handling.

The refactored approach of conditionally adding `temperature` and `seed` only for non-reasoning models is much cleaner than the previous logic of adding them first and then removing them. This makes the code more intuitive and maintainable. Consider simplifying the temperature assignment:

```diff
- parameters["temperature"] = self.temperature if self.temperature is not None else 0.1
+ parameters["temperature"] = self.temperature or 0.1
```

However, keep the current logic if `self.temperature` can be `0` and should be preserved as a valid value: `0.0 or 0.1` evaluates to `0.1`, silently discarding an explicit zero, whereas the `is not None` check preserves it.

src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (1)
783-801: Refine dynamic config toggles

In `update_build_config`, the condition checks `field_name in {"base_url", "model_name", "api_key"}` and treats `"base_url"` like a model selector. Since `base_url` is an API endpoint (not a model name), this can never match a reasoning model name and adds confusion. I suggest removing `"base_url"` from the set and toggling visibility based only on `model_name`.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)
847-901: Optional: Propagate the `stream` input to the model

You define a `stream` input but never forward it to `ChatOpenAI`. To honor the user's streaming toggle, add something like:

```diff
 parameters = {
     # existing entries...
+    "streaming": self.stream,
 }
 output = ChatOpenAI(**parameters)
```

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1)
2491-2530: Field name mismatch in `update_build_config`

The `update_build_config` method checks for `"base_url"`, but the input field is defined as `"openai_api_base"`. Update the condition to reference `"openai_api_base"` or remove the redundant check to ensure the visibility toggles for `temperature` and `seed` fire correctly.

```diff
- if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+ if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
```

src/backend/tests/unit/components/languagemodels/test_openai_model.py (1)
173-216: Integration tests have mixed mocking approach.

While these tests are marked as integration tests and use the `@pytest.mark.api_key_required` decorator, they still heavily mock the `ChatOpenAI` constructor. Consider whether these should be:
- True integration tests: Remove mocking and test against actual OpenAI API (requires API key)
- Enhanced unit tests: Remove the integration test naming and API key requirement
The current approach is somewhat contradictory - requiring an API key but then mocking the main component being tested.
For true integration testing:
```python
@pytest.mark.api_key_required
def test_build_model_integration_real(self):
    component = OpenAIModelComponent()
    component.api_key = os.getenv("OPENAI_API_KEY")
    component.model_name = "gpt-4.1-nano"
    # ... set other parameters
    model = component.build_model()
    assert isinstance(model, ChatOpenAI)
    # Test actual model call if needed
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
- src/backend/base/langflow/base/models/model.py (3 hunks)
- src/backend/base/langflow/components/languagemodels/openai_chat_model.py (1 hunks)
- src/backend/base/langflow/graph/graph/base.py (0 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
- src/backend/base/langflow/schema/message.py (1 hunks)
- src/backend/tests/unit/components/languagemodels/test_openai_model.py (1 hunks)
💤 Files with no reviewable changes (1)
- src/backend/base/langflow/graph/graph/base.py
🧰 Additional context used
🪛 Pylint (3.3.7)
src/backend/tests/unit/components/languagemodels/test_openai_model.py
[error] 6-6: No name 'components' in module 'langflow'
(E0611)
[refactor] 149-149: Too few public methods (0/2)
(R0903)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Optimize new Python code in this PR
- GitHub Check: Update Starter Projects
- GitHub Check: Ruff Style Check (3.13)
- GitHub Check: Run Ruff Check and Format
🔇 Additional comments (28)
src/backend/base/langflow/components/languagemodels/openai_chat_model.py (1)
105-105: LGTM! Parameter name alignment with ChatOpenAI constructor.

The change from `"openai_api_base"` to `"base_url"` correctly aligns with the expected parameter name for the `ChatOpenAI` constructor.

src/backend/base/langflow/base/models/model.py (2)
5-5: Import addition looks correct.

The addition of `AsyncIterator` and `Iterator` imports is appropriate for the expanded type support.

195-195: Type annotation expansion is well-designed.

Expanding the `input_value` parameter to accept `AsyncIterator | Iterator` enables support for streaming content, which aligns with the broader iterator support mentioned in the AI summary.

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)
871-871: OpenAI model parameter logic is correctly implemented.

The embedded Python code in the JSON configuration properly implements the conditional parameter inclusion:

- Correct parameter renaming: `"base_url": self.openai_api_base` replaces the old `openai_api_base` key
- Proper conditional logic: Temperature and seed are only added when `self.model_name not in OPENAI_REASONING_MODEL_NAMES`
- Fallback temperature: Uses `self.temperature if self.temperature is not None else 0.1` as a sensible default

This ensures reasoning models (like o1) don't receive temperature/seed parameters, which they don't support, while maintaining backward compatibility for standard models.
src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)
847-901: Verify correct base URL parameter for `ChatOpenAI`

The code now uses `base_url` in the parameters dict—confirm that `ChatOpenAI` expects a `base_url` argument and not still `openai_api_base`. Adjust the key if necessary.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (3)
1370-1370: Use `base_url` and conditional `temperature`/`seed` parameters

Renaming the `"openai_api_base"` key to `"base_url"` and wrapping the addition of `temperature` and `seed` inside the `if self.model_name not in OPENAI_REASONING_MODEL_NAMES` block brings this starter template in line with the core `OpenAIModelComponent` refactor.

1763-1763: Apply core refactor to second template instance

This snippet correctly mirrors the global change: using `"base_url"` and only setting `temperature`/`seed` for non-reasoning models, ensuring consistency across multiple starter flows.

2156-2156: Ensure third template is consistent

The third occurrence now also uses `base_url` and conditionally includes `temperature` and `seed`, matching the updated logic in the shared `OpenAIModelComponent`.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2)
2488-2561: Build Model: rename `openai_api_base` → `base_url` and conditional temp/seed injection

The updated `build_model` method now sets `"base_url"` instead of `"openai_api_base"` and only adds `"temperature"` and `"seed"` when the selected model is not in `OPENAI_REASONING_MODEL_NAMES`, eliminating the need to `pop` them later. This simplifies the logic and aligns with the `ChatOpenAI` constructor's expected arguments.

Please verify:

- That `ChatOpenAI` accepts `base_url` as the endpoint key (not `openai_api_base`).
- That `update_build_config` still correctly toggles the visibility of `"temperature"` and `"seed"` when `model_name` changes.

src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (2)
233-327: Verify ChatOpenAI parameter rename

The `build_model` method maps `openai_api_base` to a new key `"base_url"`. Please confirm that `ChatOpenAI` (from `langchain_openai`) accepts `base_url` instead of `openai_api_base`; otherwise, instantiation will fail.

233-327: Conditional inclusion of `temperature` and `seed`

The inversion now only adds `temperature` and `seed` when `model_name` is not in `OPENAI_REASONING_MODEL_NAMES`. This aligns with the intended behavior of omitting sampling parameters for reasoning models.

src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (1)
1064-1105: Confirm ChatOpenAI constructor parameter name.

You're now passing `base_url` into `ChatOpenAI`. Verify that the `langchain_openai.ChatOpenAI` constructor accepts `base_url` (not `openai_api_base`). If it still expects `openai_api_base`, update the key accordingly.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (1)
988-1038: Correctly condition parameters for reasoning models and rename base URL key.

The updated `build_model` now initializes `"base_url"` instead of `"openai_api_base"` and only injects `temperature` and `seed` for non-reasoning models, streamlining the logic and avoiding unnecessary pops.

src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (1)
976-1045: Consistent refactor: conditional temperature/seed and base_url rename.

This `build_model` matches the Basic Prompting template—omitting `"temperature"` and `"seed"` by default, then adding them only for non-reasoning models, and using `"base_url"` instead of `"openai_api_base"`.

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)
1916-1950: Conditional inclusion of `temperature` and `seed` looks correct

The new logic cleanly excludes these params by default and only injects them for non-reasoning models, aligning with the refactoring objective.

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)
2420-2490: Conditional parameters logic is correct

The `build_model` method now only injects `temperature` and `seed` when the chosen model is not in `OPENAI_REASONING_MODEL_NAMES`, which aligns with the PR objective and removes the need for manual key removal.

2420-2490: Verify that ChatOpenAI accepts `base_url`

You renamed the parameter key from `openai_api_base` to `base_url`. Confirm that the `ChatOpenAI` constructor supports `base_url`; if it expects `openai_api_base`, revert or alias accordingly to avoid runtime errors.

src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (1)
1092-1092: LGTM! Improved parameter handling for reasoning models.

The embedded OpenAI model component code correctly implements the new conditional parameter logic. The `build_model` method now only includes `temperature` and `seed` parameters for non-reasoning models, which aligns with OpenAI's API requirements for reasoning models like o1.

Key improvements observed:

- Clean conditional logic: `if self.model_name not in OPENAI_REASONING_MODEL_NAMES:`
- Correct parameter key mapping: `"base_url": self.openai_api_base`
- Proper handling of parameter exclusion for reasoning models
src/backend/tests/unit/components/languagemodels/test_openai_model.py (7)
12-34: Well-structured pytest fixtures.

The fixtures provide comprehensive test setup with appropriate default parameters for testing various scenarios. The `default_kwargs` fixture covers all necessary OpenAI model parameters.

36-54: Comprehensive test for standard model building.

The test correctly verifies that standard models (like gpt-4.1-nano) receive all parameters including `temperature` and `seed`. The mocking strategy and parameter assertions are thorough and appropriate.

56-79: Critical test for reasoning model parameter exclusion.

This test is essential for validating the new conditional parameter logic. It correctly verifies that reasoning models (like "o1") exclude `temperature` and `seed` parameters, which aligns with OpenAI's API requirements. The explicit parameter checking in lines 77-79 adds valuable verification.

81-94: Proper JSON mode testing.

The test correctly validates JSON mode functionality by verifying that the model's `bind` method is called with the expected response format `{"type": "json_object"}`. The mocking strategy appropriately handles the bound instance.
96-118: Good edge case coverage for None/zero value handling.

These tests ensure proper parameter transformation:

- `None` API key is correctly passed through
- Zero `max_tokens` is appropriately converted to `None`

This validates that the component handles edge cases correctly before passing parameters to the underlying ChatOpenAI class.
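A minimal illustration of the two transformations these tests assert — a sketch mirroring the expressions in the component's `build_model`, not the component itself:

```python
# 0 max_tokens is treated as "unlimited" and becomes None;
# a None api_key is passed through unchanged.
max_tokens = 0
api_key = None

parameters = {
    "api_key": api_key,
    "max_tokens": max_tokens or None,  # falsy 0 becomes None
}
assert parameters["max_tokens"] is None
assert parameters["api_key"] is None
```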
156-171: Important test for dynamic UI configuration.

This test validates the `update_build_config` method, which controls the visibility of `temperature` and `seed` parameters in the UI based on the selected model type. This ensures a good user experience by hiding inappropriate parameters for reasoning models.
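For reference, the toggling logic this test exercises, copied from the component code embedded later in this review (note that other comments flag the `"base_url"` key as a mismatch with the actual input name `openai_api_base`):

```python
def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
    if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
        build_config["temperature"]["show"] = False
        build_config["seed"]["show"] = False
    if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
        build_config["temperature"]["show"] = True
        build_config["seed"]["show"] = True
    return build_config
```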
6-6: Details
❌ Incorrect review comment
Verify the import path for OpenAIModelComponent.
The static analysis tool flagged that there's no 'components' module in 'langflow'. Please verify the correct import path for the OpenAIModelComponent.
🏁 Script executed:
```bash
#!/bin/bash
# Check for the correct import path of OpenAIModelComponent
echo "Searching for OpenAIModelComponent class definition..."
rg -l "class OpenAIModelComponent" --type py

echo -e "\nSearching for openai_chat_model module..."
fd "openai_chat_model.py" --type f

echo -e "\nChecking langflow directory structure..."
fd -t d -d 3 components src/backend/base/langflow/
```

Length of output: 633
No change needed for the import path
I've verified that `OpenAIModelComponent` is defined in `src/backend/base/langflow/components/languagemodels/openai_chat_model.py`, which corresponds exactly to the import:

```python
from langflow.components.languagemodels.openai_chat_model import OpenAIModelComponent
```

The module structure is correct and no update is required.
Likely an incorrect or invalid review comment.
🧰 Tools
🪛 Pylint (3.3.7)
[error] 6-6: No name 'components' in module 'langflow'
(E0611)
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)
972-1040: Details
❓ Verification inconclusive
Ensure `ChatOpenAI` supports the `base_url` parameter

You've replaced the `openai_api_base` key with `base_url` in `build_model`. Confirm that the `ChatOpenAI` constructor accepts `base_url` (and not `openai_api_base`) to prevent runtime errors. Run this quick check:
🏁 Script executed:
```bash
#!/bin/bash
python - << 'EOF'
import inspect
from langchain_openai import ChatOpenAI
print(inspect.signature(ChatOpenAI.__init__))
EOF
```

Length of output: 139
🏁 Script executed:
```bash
#!/bin/bash
# Locate and inspect any local definition of ChatOpenAI
FILES=$(rg -l 'class ChatOpenAI' --glob '*.py')
if [ -z "$FILES" ]; then
  echo "No ChatOpenAI class found in repository"
  exit 0
fi
for f in $FILES; do
  echo "=== Inspecting $f ==="
  awk '/class ChatOpenAI/,/^\s*def /' "$f"
  awk '/def __init__/,/\)/' "$f"
done
```

Length of output: 164
Verify ChatOpenAI `base_url` parameter

You've replaced the `openai_api_base` key with `base_url` in `build_model`. Please confirm that the external `ChatOpenAI` constructor accepts a `base_url` argument (and not `openai_api_base`) to avoid runtime errors. If it doesn't, revert to using `openai_api_base` or update the call accordingly.

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)
2894-2920: Simplified parameter inclusion logic in `build_model` looks correct.

The new conditional that only adds `"temperature"` and `"seed"` when `model_name` is not in `OPENAI_REASONING_MODEL_NAMES` is clearer than populating them unconditionally and then popping keys. It reduces complexity and preserves intended defaults.

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)
935-935: Clean up conditional parameter inclusion in `build_model`.

Switching to only add `temperature` and `seed` when `model_name` is not in `OPENAI_REASONING_MODEL_NAMES` clarifies intent and removes the previous add-then-pop pattern. This change simplifies the parameter dict and is correct.
```python
def consume_iterator(self, iterator: Iterator):
    complete_message = ""
    for item in iterator:
        complete_message += item.content  # type: ignore[attr-defined]
    return complete_message
```
🛠️ Refactor suggestion
Improve type safety, performance, and async support for the consume_iterator method.
The method has several issues that should be addressed:
- Missing async support: The `text` field supports `AsyncIterator`, but this method only handles `Iterator`
- Type safety: Assumes items have a `content` attribute without proper validation
- Performance: String concatenation in a loop is inefficient for large iterators
- Missing return type annotation
```diff
-    def consume_iterator(self, iterator: Iterator):
-        complete_message = ""
-        for item in iterator:
-            complete_message += item.content  # type: ignore[attr-defined]
-        return complete_message
+    def consume_iterator(self, iterator: Iterator) -> str:
+        """Consume an iterator and concatenate content from all items."""
+        content_parts = []
+        for item in iterator:
+            if hasattr(item, 'content'):
+                content_parts.append(str(item.content))
+            else:
+                content_parts.append(str(item))
+        return "".join(content_parts)
+
+    async def consume_async_iterator(self, iterator: AsyncIterator) -> str:
+        """Consume an async iterator and concatenate content from all items."""
+        content_parts = []
+        async for item in iterator:
+            if hasattr(item, 'content'):
+                content_parts.append(str(item.content))
+            else:
+                content_parts.append(str(item))
+        return "".join(content_parts)
```
+ return "".join(content_parts)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def consume_iterator(self, iterator: Iterator) -> str:
    """Consume an iterator and concatenate content from all items."""
    content_parts = []
    for item in iterator:
        if hasattr(item, 'content'):
            content_parts.append(str(item.content))
        else:
            content_parts.append(str(item))
    return "".join(content_parts)

async def consume_async_iterator(self, iterator: AsyncIterator) -> str:
    """Consume an async iterator and concatenate content from all items."""
    content_parts = []
    async for item in iterator:
        if hasattr(item, 'content'):
            content_parts.append(str(item.content))
        else:
            content_parts.append(str(item))
    return "".join(content_parts)
```
🤖 Prompt for AI Agents
In src/backend/base/langflow/schema/message.py around lines 234 to 238, update
the consume_iterator method to support both Iterator and AsyncIterator by making
it async and using async iteration when needed. Add proper type checks to ensure
each item has a content attribute before accessing it. Replace string
concatenation in the loop with collecting contents in a list and joining them at
the end for better performance. Also, add an explicit return type annotation
indicating the method returns a string.
```python
if not isinstance(input_value.text, str):
    input_value.text = input_value.consume_iterator(input_value.text)
```
💡 Verification agent
🧩 Analysis chain
Verify the consume_iterator method exists.
The code assumes input_value.consume_iterator() method exists on the Message class. Please ensure this method is properly implemented and handles both AsyncIterator and Iterator types correctly.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Check if consume_iterator method exists in Message class
# Expected: Find the method implementation in the Message class
rg -A 10 "def consume_iterator" --type py
```

Length of output: 848
Confirm consume_iterator implementation and add AsyncIterator support
The Message class in src/backend/base/langflow/schema/message.py does define:
```python
def consume_iterator(self, iterator: Iterator):
    complete_message = ""
    for item in iterator:
        complete_message += item.content  # type: ignore[attr-defined]
    return complete_message
```

However, this only handles synchronous `Iterator`. If `input_value.text` can be an `AsyncIterator`, calling this method will fail at runtime. Please update accordingly:
- Detect `AsyncIterator` (e.g., via `isinstance(iterator, AsyncIterator)`) and consume it with `async for`.
- You may implement a separate `async def consume_iterator_async` or merge both into one method that dispatches based on iterator type; see the sketch below.
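A rough illustration of the dispatch approach — a sketch only; `consume_iterator_async` is a hypothetical helper name suggested above, not code from this PR:

```python
from collections.abc import AsyncIterator, Iterator


async def consume_iterator_async(self, iterator: AsyncIterator | Iterator) -> str:
    """Sketch: concatenate streamed chunks from either iterator flavor."""
    parts = []
    if isinstance(iterator, AsyncIterator):
        # Streamed chunks arrive asynchronously; consume with `async for`
        async for item in iterator:
            parts.append(str(getattr(item, "content", item)))
    else:
        for item in iterator:
            parts.append(str(getattr(item, "content", item)))
    return "".join(parts)
```

Callers such as `_get_chat_result` would then need to `await` the helper whenever the text field carries a stream.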
No missing method definitions were found.
🤖 Prompt for AI Agents
In src/backend/base/langflow/base/models/model.py around lines 205 to 206, the
consume_iterator method on the Message class only supports synchronous Iterator
and does not handle AsyncIterator, which can cause runtime failures. Update the
Message class in src/backend/base/langflow/schema/message.py to detect if the
input is an AsyncIterator and consume it using an async for loop. You can either
create a separate async def consume_iterator_async method for AsyncIterator or
merge both sync and async consumption logic into one method that dispatches
based on the iterator type. Ensure the calling code handles awaiting the async
method if needed.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"seed\": self.seed,\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n \"temperature\": self.temperature if self.temperature is not None else 0.1,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name in OPENAI_REASONING_MODEL_NAMES:\n logger.info(\"Getting reasoning model parameters\")\n parameters.pop(\"temperature\")\n parameters.pop(\"seed\")\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def 
_get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e (Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name not in OPENAI_REASONING_MODEL_NAMES:\n parameters[\"temperature\"] = self.temperature if self.temperature is not None else 0.1\n parameters[\"seed\"] = self.seed\n\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def _get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e 
(Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n" |
Inverted temperature and seed inclusion logic is correct, but update_build_config targets the wrong field name
The new build_model block correctly adds "temperature" and "seed" only when model_name is not in OPENAI_REASONING_MODEL_NAMES. However, the subsequent update_build_config uses "base_url"—which never matches the actual input name (openai_api_base)—so UI toggles for temperature/seed won’t trigger as intended.
Suggested patch inside the embedded code string:
```diff
- if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
+ if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES:
- if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
+ if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES:
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name not in OPENAI_REASONING_MODEL_NAMES:\n parameters[\"temperature\"] = self.temperature if self.temperature is not None else 0.1\n parameters[\"seed\"] = self.seed\n\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def _get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e 
(Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n" | |
| def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict: | |
| - if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES: | |
| + if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_REASONING_MODEL_NAMES: | |
| build_config["temperature"]["show"] = False | |
| build_config["seed"]["show"] = False | |
| - if field_name in {"base_url", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES: | |
| + if field_name in {"openai_api_base", "model_name", "api_key"} and field_value in OPENAI_MODEL_NAMES: | |
| build_config["temperature"]["show"] = True | |
| build_config["seed"]["show"] = True | |
| return build_config |
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json at
line 1311, the update_build_config method incorrectly checks for the field name
"base_url" instead of "openai_api_base", which prevents the UI toggles for
temperature and seed from working properly. To fix this, replace all occurrences
of "base_url" with "openai_api_base" in the update_build_config method's
field_name checks so that the toggling logic triggers correctly based on the
actual input field name.
```python
async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
    component_class(**default_kwargs)

    # Create a mock BadRequestError with a body attribute
    mock_error = MagicMock()
    mock_error.body = {"message": "test error message"}

    # Test the method directly by patching the import
    with patch("openai.BadRequestError", mock_error.__class__):
        # Manually call isinstance to avoid mocking it
        if hasattr(mock_error, "body"):
            message = mock_error.body.get("message")
            assert message == "test error message"

async def test_get_exception_message_no_openai_import(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Test when openai module is not available
    with patch.dict("sys.modules", {"openai": None}), patch("builtins.__import__", side_effect=ImportError):
        message = component._get_exception_message(Exception("test"))
        assert message is None

async def test_get_exception_message_other_exception(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Create a regular exception (not BadRequestError)
    regular_exception = ValueError("test error")

    # Create a simple mock for BadRequestError that the exception won't match
    class MockBadRequestError:
        pass

    with patch("openai.BadRequestError", MockBadRequestError):
        message = component._get_exception_message(regular_exception)
        assert message is None
```
🛠️ Refactor suggestion
Exception handling tests need refinement.
While the exception handling coverage is good, the test implementations could be improved:
- Line 120-132: The test doesn't actually call the `component._get_exception_message()` method, making it incomplete.
- Line 134-154: The mocking strategy is appropriate for testing import failures and different exception types.
Consider refactoring the first test to actually test the component method:
```diff
 async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
-    component_class(**default_kwargs)
+    component = component_class(**default_kwargs)

     # Create a mock BadRequestError with a body attribute
     mock_error = MagicMock()
     mock_error.body = {"message": "test error message"}

-    # Test the method directly by patching the import
-    with patch("openai.BadRequestError", mock_error.__class__):
-        # Manually call isinstance to avoid mocking it
-        if hasattr(mock_error, "body"):
-            message = mock_error.body.get("message")
-            assert message == "test error message"
+    with patch("openai.BadRequestError", type(mock_error)):
+        message = component._get_exception_message(mock_error)
+        assert message == "test error message"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
async def test_get_exception_message_bad_request_error(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Create a mock BadRequestError with a body attribute
    mock_error = MagicMock()
    mock_error.body = {"message": "test error message"}

    with patch("openai.BadRequestError", type(mock_error)):
        message = component._get_exception_message(mock_error)
        assert message == "test error message"

async def test_get_exception_message_no_openai_import(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Test when openai module is not available
    with patch.dict("sys.modules", {"openai": None}), patch("builtins.__import__", side_effect=ImportError):
        message = component._get_exception_message(Exception("test"))
        assert message is None

async def test_get_exception_message_other_exception(self, component_class, default_kwargs):
    component = component_class(**default_kwargs)

    # Create a regular exception (not BadRequestError)
    regular_exception = ValueError("test error")

    # Create a simple mock for BadRequestError that the exception won't match
    class MockBadRequestError:
        pass

    with patch("openai.BadRequestError", MockBadRequestError):
        message = component._get_exception_message(regular_exception)
        assert message is None
```
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 149-149: Too few public methods (0/2)
(R0903)
🤖 Prompt for AI Agents
In src/backend/tests/unit/components/languagemodels/test_openai_model.py between
lines 120 and 132, the test_get_exception_message_bad_request_error function
does not call the component's _get_exception_message method, so it does not
fully test the intended behavior. Refactor this test to instantiate the
component, create a mock BadRequestError with a body containing a message, patch
openai.BadRequestError with this mock class, and then call
component._get_exception_message with the mock error to assert the returned
message matches the expected error message.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"seed\": self.seed,\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n \"temperature\": self.temperature if self.temperature is not None else 0.1,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name in OPENAI_REASONING_MODEL_NAMES:\n logger.info(\"Getting reasoning model parameters\")\n parameters.pop(\"temperature\")\n parameters.pop(\"seed\")\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def 
_get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e (Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n" | ||
| "value": "from typing import Any\n\nfrom langchain_openai import ChatOpenAI\nfrom pydantic.v1 import SecretStr\n\nfrom langflow.base.models.model import LCModelComponent\nfrom langflow.base.models.openai_constants import (\n OPENAI_MODEL_NAMES,\n OPENAI_REASONING_MODEL_NAMES,\n)\nfrom langflow.field_typing import LanguageModel\nfrom langflow.field_typing.range_spec import RangeSpec\nfrom langflow.inputs import BoolInput, DictInput, DropdownInput, IntInput, SecretStrInput, SliderInput, StrInput\nfrom langflow.logging import logger\n\n\nclass OpenAIModelComponent(LCModelComponent):\n display_name = \"OpenAI\"\n description = \"Generates text using OpenAI LLMs.\"\n icon = \"OpenAI\"\n name = \"OpenAIModel\"\n\n inputs = [\n *LCModelComponent._base_inputs,\n IntInput(\n name=\"max_tokens\",\n display_name=\"Max Tokens\",\n advanced=True,\n info=\"The maximum number of tokens to generate. Set to 0 for unlimited tokens.\",\n range_spec=RangeSpec(min=0, max=128000),\n ),\n DictInput(\n name=\"model_kwargs\",\n display_name=\"Model Kwargs\",\n advanced=True,\n info=\"Additional keyword arguments to pass to the model.\",\n ),\n BoolInput(\n name=\"json_mode\",\n display_name=\"JSON Mode\",\n advanced=True,\n info=\"If True, it will output JSON regardless of passing a schema.\",\n ),\n DropdownInput(\n name=\"model_name\",\n display_name=\"Model Name\",\n advanced=False,\n options=OPENAI_MODEL_NAMES + OPENAI_REASONING_MODEL_NAMES,\n value=OPENAI_MODEL_NAMES[1],\n combobox=True,\n real_time_refresh=True,\n ),\n StrInput(\n name=\"openai_api_base\",\n display_name=\"OpenAI API Base\",\n advanced=True,\n info=\"The base URL of the OpenAI API. \"\n \"Defaults to https://api.openai.com/v1. \"\n \"You can change this to use other APIs like JinaChat, LocalAI and Prem.\",\n ),\n SecretStrInput(\n name=\"api_key\",\n display_name=\"OpenAI API Key\",\n info=\"The OpenAI API Key to use for the OpenAI model.\",\n advanced=False,\n value=\"OPENAI_API_KEY\",\n required=True,\n ),\n SliderInput(\n name=\"temperature\",\n display_name=\"Temperature\",\n value=0.1,\n range_spec=RangeSpec(min=0, max=1, step=0.01),\n show=True,\n ),\n IntInput(\n name=\"seed\",\n display_name=\"Seed\",\n info=\"The seed controls the reproducibility of the job.\",\n advanced=True,\n value=1,\n ),\n IntInput(\n name=\"max_retries\",\n display_name=\"Max Retries\",\n info=\"The maximum number of retries to make when generating.\",\n advanced=True,\n value=5,\n ),\n IntInput(\n name=\"timeout\",\n display_name=\"Timeout\",\n info=\"The timeout for requests to OpenAI completion API.\",\n advanced=True,\n value=700,\n ),\n ]\n\n def build_model(self) -> LanguageModel: # type: ignore[type-var]\n parameters = {\n \"api_key\": SecretStr(self.api_key).get_secret_value() if self.api_key else None,\n \"model_name\": self.model_name,\n \"max_tokens\": self.max_tokens or None,\n \"model_kwargs\": self.model_kwargs or {},\n \"base_url\": self.openai_api_base or \"https://api.openai.com/v1\",\n \"max_retries\": self.max_retries,\n \"timeout\": self.timeout,\n }\n\n logger.info(f\"Model name: {self.model_name}\")\n if self.model_name not in OPENAI_REASONING_MODEL_NAMES:\n parameters[\"temperature\"] = self.temperature if self.temperature is not None else 0.1\n parameters[\"seed\"] = self.seed\n\n output = ChatOpenAI(**parameters)\n if self.json_mode:\n output = output.bind(response_format={\"type\": \"json_object\"})\n\n return output\n\n def _get_exception_message(self, e: Exception):\n \"\"\"Get a message from an OpenAI exception.\n\n Args:\n e 
(Exception): The exception to get the message from.\n\n Returns:\n str: The message from the exception.\n \"\"\"\n try:\n from openai import BadRequestError\n except ImportError:\n return None\n if isinstance(e, BadRequestError):\n message = e.body.get(\"message\")\n if message:\n return message\n return None\n\n def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_REASONING_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = False\n build_config[\"seed\"][\"show\"] = False\n if field_name in {\"base_url\", \"model_name\", \"api_key\"} and field_value in OPENAI_MODEL_NAMES:\n build_config[\"temperature\"][\"show\"] = True\n build_config[\"seed\"][\"show\"] = True\n return build_config\n" |
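Distilled, the new version stops popping `temperature` and `seed` after the fact and instead adds them only for non-reasoning models. A minimal sketch of the gating (variable names assumed already resolved from the component inputs):

```python
parameters = {
    "api_key": api_key,
    "model_name": model_name,
    "max_retries": max_retries,
    "timeout": timeout,
}
# Reasoning models reject sampling controls, so only standard chat models
# receive temperature and seed.
if model_name not in OPENAI_REASONING_MODEL_NAMES:
    parameters["temperature"] = temperature if temperature is not None else 0.1
    parameters["seed"] = seed
```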
Mismatch in field naming for conditional visibility.
The build_model implementation correctly uses the base_url key for ChatOpenAI, but update_build_config checks for "base_url" in field_name while the input is still named openai_api_base, so edits to the base-URL field never trigger the intended toggling of the temperature and seed inputs.
Action: Align the names—either rename the input to base_url or update the conditional to check "openai_api_base".
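A minimal sketch of the second option, keeping the existing `openai_api_base` input name and the constants the component already imports:

```python
def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None) -> dict:
    # "openai_api_base" matches the StrInput name, so base-URL edits now fire too.
    if field_name in {"openai_api_base", "model_name", "api_key"}:
        if field_value in OPENAI_REASONING_MODEL_NAMES:
            # Reasoning models reject sampling controls; hide them.
            build_config["temperature"]["show"] = False
            build_config["seed"]["show"] = False
        elif field_value in OPENAI_MODEL_NAMES:
            build_config["temperature"]["show"] = True
            build_config["seed"]["show"] = True
    return build_config
```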
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Instagram
Copywriter.json at line 2913, the update_build_config method checks for the
field_name "base_url" to toggle visibility of temperature and seed inputs, but
the actual input is named "openai_api_base". To fix this, update the conditional
checks in update_build_config to use "openai_api_base" instead of "base_url" so
the visibility toggling works correctly.
edwinjosechittilappilly
left a comment
Should we focus on the language model component instead of the OpenAI component?
What do you think?
Force-pushed from fca56ec to df1f9a9 (Compare)
feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator
feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method
feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text
feat: add is_connected_to_chat_output method to Component class for improved message handling
feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration
* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.
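A hedged sketch of the simplified annotation described above (method and parameter names taken from LCModelComponent; not the exact diff):

```python
from langflow.schema.message import Message

class LCModelComponent:  # stub for illustration only
    def _get_chat_result(self, *, input_value: str | Message, stream: bool = False):
        # AsyncIterator and Iterator were dropped from the accepted input types;
        # streamed payloads are collapsed into Message.text before this call.
        ...
```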
* feat: update OpenAI model parameters handling for reasoning models
* feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator
* refactor: remove assert_streaming_sequence method and related checks from Graph class
* feat: add consume_iterator method to Message class for handling iterators
* test: add unit tests for OpenAIModelComponent functionality and integration
* feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method
* feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text
* feat: add is_connected_to_chat_output method to Component class for improved message handling
* feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration
* refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling
* fix: update import paths for input components in multiple starter project JSON files
* fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes
* refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing
* fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic
* refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling
* refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency
* feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management
* feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration
* feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats
* test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling
* test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models
* fix: reorder JSON properties for consistency in starter projects
* Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability.
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json.
* refactor: simplify input_value type in LCModelComponent
* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.
* fix: clarify comment for handling source in Component class
* refactor: remove unnecessary mocking in OpenAI model integration tests
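The consume_iterator idea from this list, sketched with the caveat that the final code moved the logic out of Message and into LCModelComponent's input handling, so the helper name is illustrative:

```python
from collections.abc import Iterator

def consume_iterator_in_text(message) -> None:
    # A prior model's streamed output can leave Message.text as a lazy
    # iterator; drain it into a concrete string so the next model (or the
    # chat output) sees plain text.
    if isinstance(message.text, Iterator):
        message.text = "".join(str(chunk) for chunk in message.text)
```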
…ity attribute (#8667)
* Update styleUtils.ts
* update to prompt component
* update to template
* update to mcp component
* update to smart function
* [autofix.ci] apply automated fixes
* update to templates
* fix sidebar
* change name
* update import
* update import
* update import
* [autofix.ci] apply automated fixes
* fix import
* fix ollama
* fix ruff
* refactor(agent): standardize memory handling and update chat history logic (#8715)
* update chat history
* update to agents
* Update Simple Agent.json
* update to templates
* ruff errors
* Update agent.py
* Update test_agent_component.py
* [autofix.ci] apply automated fixes
* update templates
* test fix
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>
* fix prompt change
* feat(message): support sequencing of multiple streamable models (#8434)
* feat: update OpenAI model parameters handling for reasoning models
* feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator
* refactor: remove assert_streaming_sequence method and related checks from Graph class
* feat: add consume_iterator method to Message class for handling iterators
* test: add unit tests for OpenAIModelComponent functionality and integration
* feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method
* feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text
* feat: add is_connected_to_chat_output method to Component class for improved message handling
* feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration
* refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling
* fix: update import paths for input components in multiple starter project JSON files
* fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes
* refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing
* fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic
* refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling
* refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency
* feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management
* feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration
* feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats
* test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling
* test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models
* fix: reorder JSON properties for consistency in starter projects
* Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability.
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json.
* refactor: simplify input_value type in LCModelComponent
* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.
* fix: clarify comment for handling source in Component class
* refactor: remove unnecessary mocking in OpenAI model integration tests
* auto update
* update
* [autofix.ci] apply automated fixes
* fix openai import
* revert template changes
* test fixes
* update templates
* [autofix.ci] apply automated fixes
* fix tests
* fix order
* fix prompts import
* fix frontend tests
* fix frontend
* [autofix.ci] apply automated fixes
* add charmander
* [autofix.ci] apply automated fixes
* fix prompt frontend
* fix frontend
* test fix
* [autofix.ci] apply automated fixes
* change pokedex
* remove pokedex extra
* update template
* name fix
* update template
* mcp test fix
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: cristhianzl <[email protected]>
Co-authored-by: Yuqi Tang <[email protected]>
Co-authored-by: Mike Fortman <[email protected]>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Tests