refactor(agent): standardize memory handling and update chat history logic #8715
Conversation
Walkthrough

This update refactors the agent and memory handling logic across the backend, agent components, and multiple starter project configurations. It removes dynamic memory input fields from agent components, standardizes memory retrieval using session IDs, updates chat history handling, and introduces explicit message count control. Debug logging for chat history is also added.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant AgentComponent
    participant MemoryComponent
    User->>AgentComponent: Submit input
    AgentComponent->>MemoryComponent: retrieve_messages(session_id, n_messages)
    MemoryComponent-->>AgentComponent: Return chat history
    AgentComponent->>AgentComponent: Log/print chat history
    AgentComponent->>AgentComponent: Run agent logic with chat history
    AgentComponent-->>User: Return response
```
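Read as code, the diagram amounts to a session-keyed lookup with an explicit message cap. The sketch below is a minimal, runnable stand-in for that flow; `MemoryStub` and `AgentStub` are toy classes invented for illustration, not langflow's real `MemoryComponent`/`AgentComponent` API.

```python
import asyncio
import logging

logger = logging.getLogger("agent")


class MemoryStub:
    """Toy stand-in for MemoryComponent: messages keyed by session ID."""

    def __init__(self, store: dict[str, list[str]]):
        self.store = store

    async def retrieve_messages(self, session_id: str, n_messages: int) -> list[str]:
        # Return only the most recent n_messages for this session.
        return self.store.get(session_id, [])[-n_messages:]


class AgentStub:
    """Toy stand-in for AgentComponent."""

    def __init__(self, memory: MemoryStub, session_id: str, n_messages: int = 100):
        self.memory = memory
        self.session_id = session_id
        self.n_messages = n_messages

    async def message_response(self, user_input: str) -> str:
        chat_history = await self.memory.retrieve_messages(self.session_id, self.n_messages)
        logger.debug("Chat history (%d messages) for session %s", len(chat_history), self.session_id)
        # Agent logic would consume chat_history here; stubbed out.
        return f"answered {user_input!r} with {len(chat_history)} prior messages"


async def main() -> None:
    memory = MemoryStub({"s1": ["hi", "hello", "how are you?"]})
    agent = AgentStub(memory, session_id="s1", n_messages=2)
    print(await agent.message_response("next question"))


asyncio.run(main())
```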
Actionable comments posted: 3
🔭 Outside diff range comments (14)
src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (1)
870-888: `memory_inputs` is now dead code – remove it to avoid confusion

`memory_inputs` is still being built but never used after you commented out `*memory_inputs` from the `inputs` list.
Keeping an unused large list (it clones every `MemoryComponent` input) increases memory footprint and misleads future readers into thinking the feature is still active.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+ # memory inputs were removed from the public API – delete the helper altogether
```

src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1)
1777-1810: Replace raw `print`s with `logger`; avoid leaking PII

Two `print(...)` statements were added (`print(self.chat_history)` and `print(f"Session ID: ...")`).
In a server context this dumps full chat history—including possible PII—to stdout, bypassing the project's logging level/rotation controls.

```diff
- print(self.chat_history)
+ logger.debug("Chat history loaded (%d messages)", len(self.chat_history) if self.chat_history else 0)
  ...
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug("Retrieving chat history for session_id=%s", self.graph.session_id)
```

Replace raw prints with `logger.debug`/`info` as above.

src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (1)
485-520: Printing full chat history is a privacy & log-noise hazard

The `print(self.chat_history)` line writes potentially sensitive user content to stdout.
Stdout in production often ends up in aggregated logs where PII retention policies are hard to enforce.

```diff
- self.chat_history = await self.get_memory_data()
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ self.chat_history = await self.get_memory_data()
+ logger.debug("Loaded %s chat messages", len(self.chat_history))
```

Switch to a concise `logger.debug` (or drop entirely) and avoid dumping raw messages.
The same pattern appears in every `AgentComponent` clone—please scrub them all.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)
1168-1188: `get_memory_data` ignores user-supplied memory options and uses a raw `print`

The call to `MemoryComponent().set(...)` only forwards `session_id`.

- `order`, `n_messages`, `memory`, `sender`, etc. — now exposed as first-class inputs in the template — are silently discarded, so the user's UI settings have no effect.
- This is a functional regression: chat history length / ordering can no longer be controlled.
- A raw `print` leaks the session ID to stdout.

```diff
- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ logger.debug("Retrieving chat history (session=%s, n_messages=%s, order=%s)",
+              self.graph.session_id, self.n_messages, self.order)
+
+ memory_component = (
+     MemoryComponent(**self.get_base_args())
+     .set(
+         session_id=self.graph.session_id,
+         n_messages=self.n_messages,
+         order=self.order,
+         memory=self.memory,  # external handle if provided
+     )
+ )
+ return await memory_component.retrieve_messages()
```

Restoring the missing parameters reinstates advertised functionality and removes the stray `print`.
src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (3)
1228-1236: `memory_inputs` is calculated but never consumed – dead code

The list-comprehension assigns `memory_inputs`, yet all downstream uses have been commented out. Keeping an unused symbol invites confusion and makes static analysis complain.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+ # memory_inputs removed; drop stale variable
```
1339-1349: `print(self.chat_history)` and `print(f"Session ID: …")` should be removed (or switched to structured logging with the proper log-level). Dumping entire chat histories can easily expose sensitive data and will spam server logs.

```diff
- print(self.chat_history)
- ...
- print(f"Session ID: {self.graph.session_id}")
```
1349-1354: `n_messages` input never used – pass it to `MemoryComponent`

You introduced an explicit `n_messages` field (see template, line 1447 ff.), but `get_memory_data` ignores it. Forwarding the value keeps the new UX consistent and avoids fetching unbounded history.

```diff
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ return await (
+     MemoryComponent(**self.get_base_args())
+     .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+     .retrieve_messages()
+ )
```

src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2)
1492-1499: Replace `print`/`logger.info` with a single `logger.debug`

`message_response()` now does:

```python
print(self.chat_history)
logger.info(f"Chat history: {self.chat_history}")
```

Printing from within a library component pollutes stdout, breaks structured logging, and is hard to silence in production.
Use the existing logger at DEBUG level instead:

```diff
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ logger.debug("Chat history retrieved: %s", self.chat_history)
```

If INFO visibility is required, emit only the logger call.
1510-1521: Harden `get_memory_data` – avoid `AttributeError` and noisy stdout

Current code:

```python
print(f"Session ID: {self.graph.session_id}")
return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
```

Issues:

- `self.graph` may be absent ⇒ `AttributeError`.
- Ignores the user-provided `session_id` attribute when the graph one is missing.
- No limit passed (`n_messages`), so large histories could be fetched unintentionally.

Recommended minimal fix:

```diff
- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ session_id = getattr(self.graph, "session_id", None) or self.session_id
+ logger.debug("Fetching memory for session_id=%s", session_id)
+ return await (
+     MemoryComponent(**self.get_base_args())
+     .set(session_id=session_id, n_messages=self.n_messages)
+     .retrieve_messages()
+ )
```

This preserves current behaviour while being safer and quieter.
src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (1)
2023-2050: `n_messages` is defined as an input but never honoured by `get_memory_data`

The new `IntInput` (`n_messages`) is intended to limit chat-history retrieval, yet the call below ignores it:

```python
return await MemoryComponent(**self.get_base_args())\
    .set(session_id=self.graph.session_id).retrieve_messages()
```

Retrieving the full history nullifies the purpose of the flag and can hurt performance on long-running sessions.
Suggested patch:

```diff
- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args())\
-     .set(session_id=self.graph.session_id).retrieve_messages()
+ # Limit the amount of history fetched to avoid excessive context size
+ return await (
+     MemoryComponent(**self.get_base_args())
+     .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+     .retrieve_messages()
+ )
```

(Also drops the stray `print`.)
src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1)
1358-1400: Avoid bare `print`s and honour `n_messages`

The newly-added debug statements use bare `print`, and `get_memory_data` silently ignores the user-configurable `n_messages` limit.
Use the project logger for observability (already imported) and pass the limit through to `MemoryComponent` to prevent unbounded history retrieval.

```diff
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ logger.debug("Chat history: %s", self.chat_history)

- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ logger.debug("Session ID: %s", self.graph.session_id)
+ return await MemoryComponent(**self.get_base_args()).set(
+     session_id=self.graph.session_id,
+     n_messages=self.n_messages,
+ ).retrieve_messages()
```

This eliminates stdout noise (important for serverless & production logs) and respects the explicit `n_messages` control added elsewhere in the refactor.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)
1525-1545: Replace raw `print`s with the imported `logger`

`print(self.chat_history)` and `print(f"Session ID: {self.graph.session_id}")` leak to stdout in production, bypass log-level control and break Cloud-run/Lambda style collectors. You already import `logger`; use it:

```diff
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ logger.debug("Chat history: %s", self.chat_history)
  ...
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug("Session ID: %s", self.graph.session_id)
```

1535-1545: Pass `n_messages` into `MemoryComponent` so the new control actually works

`n_messages` was introduced as a user-surface knob but isn't forwarded:

```diff
-return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+return await (
+    MemoryComponent(**self.get_base_args())
+    .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+    .retrieve_messages()
+)
```

Without this change the component silently ignores the user-selected limit.
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1)
1365-1385: `get_memory_data` ignores user-controlled filters (`n_messages`, `order`, etc.)

The new implementation only forwards `session_id`, dropping the remaining inputs (order, sender filters, message limit). Users changing those fields in the UI will see no effect.

```diff
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ session_id = getattr(self.graph, "session_id", None) or self.session_id
+
+ mem_component = (
+     MemoryComponent(**self.get_base_args())
+     .set(
+         session_id=session_id,
+         order=self.order,
+         n_messages=self.n_messages,
+         sender=self.sender,
+         sender_name=self.sender_name,
+     )
+ )
+ return await mem_component.retrieve_messages()
```

This keeps the simplified API (no dynamic input list) while still honouring the explicit fields surfaced on the Agent node.
♻️ Duplicate comments (2)
src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (2)
2788-2815: Same issues as commented above – the Local Expert Agent duplicates the exact implementation. Please apply the fixes to this block as well.
3553-3580: Same issues as commented above – the Travel Concierge Agent duplicates the exact implementation. Please apply the fixes to this block as well.
🧹 Nitpick comments (22)
src/backend/base/langflow/base/agents/agent.py (1)
142-142: Consider using debug level for chat history logging

While logging chat history is valuable for debugging, consider using `logger.debug()` instead of `logger.info()` to avoid cluttering production logs with potentially large chat history data.

```diff
- logger.info(f"Chat history: {self.chat_history}")
+ logger.debug(f"Chat history: {self.chat_history}")
```

src/backend/base/langflow/components/helpers/memory.py (1)
228-228: Fix typing cast annotation

The static analyzer suggests adding quotes to the type expression in the cast for better type safety.

```diff
- return cast(Data, stored)
+ return cast("Data", stored)
```

src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (3)
901-908: Replace `print(self.chat_history)` with structured logging

Raw `print` calls bypass log-level control. Use the project logger at a suitable level instead:

```diff
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ logger.debug("Chat history: %s", self.chat_history)
```

930-934: Remove ad-hoc `print` of the session ID

Same rationale as above—use the logger or drop entirely:

```diff
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug("Session ID: %s", self.graph.session_id)
```

890-920: Minor: comment describing removed validation is outdated

The inline note `# note the tools are not required to run the agent, hence the validation removed.` is ambiguous now that validation logic moved earlier. Consider either deleting or re-phrasing to avoid confusion.
src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1)
1777-1810: `memory_inputs` becomes dead code – delete or reinstate

`memory_inputs` is still built:

```python
memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

but the only consumer (`*memory_inputs`) has been commented out. Linters will flag this as an unused variable and it confuses readers.

Options:

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

or, if the variable will return soon, prefix with underscore and add a short comment.
Removing keeps the class clean and prevents accidental drift.
src/backend/base/langflow/initial_setup/starter_projects/Social Media Agent.json (2)
1457-1457: Debug `print` calls

`message_response()` and `get_memory_data()` contain bare `print()` calls (`print(self.chat_history)`, `print(f"Session ID: …")`).
With the global `logger` already imported, switch to `logger.debug` (or drop them) to avoid noisy console output when Langflow is embedded or run under Gunicorn/Uvicorn.

1457-1457: Dead code / unused variable: `memory_inputs`

You still build

```python
memory_inputs = [set_advanced_true(...) for ...]
```

but later commented out `*memory_inputs` in `inputs`.
That comprehension now executes on every import for no reason. Delete it or re-enable the feature to avoid wasted cycles and mental overhead.

src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (2)
505-510: `memory_inputs` is now dead code

After commenting out `*memory_inputs` in `inputs`, the `memory_inputs` list is never consumed.
Delete the variable to avoid confusion and save import time for `MemoryComponent`.
485-700: Heavy code duplication across three AgentComponents

The entire Python payload for the Finance/Analysis/Research agents is identical.
Consider extracting this logic into a single reusable component to:

- cut maintenance cost
- eliminate copy-paste bugs (e.g., the stray `print`s)
- centralize future fixes (memory handling, logging, etc.)

Refactor suggestion: create a mix-in or base `SequentialAgentComponent` in a `.py` file and have the JSON nodes reference it, along the lines of the sketch below.
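A rough shape for that extraction could look like the following. Every name here is hypothetical, and `ToolCallingAgentStub` merely stands in for langflow's real `ToolCallingAgentComponent` base class:

```python
import asyncio


class ToolCallingAgentStub:
    """Stand-in for the real langflow base class."""

    async def get_memory_data(self) -> list[str]:
        raise NotImplementedError


class SequentialAgentComponent(ToolCallingAgentStub):
    """Shared memory handling for every sequential-task agent clone."""

    session_id = "demo-session"
    n_messages = 100

    async def get_memory_data(self) -> list[str]:
        # One place to fix retrieval, logging, and limits for all agents.
        return [f"history for {self.session_id} (max {self.n_messages})"]


class FinanceAgent(SequentialAgentComponent):
    system_prompt = "You are a finance analyst."


class ResearchAgent(SequentialAgentComponent):
    system_prompt = "You are a research assistant."


async def main() -> None:
    for agent in (FinanceAgent(), ResearchAgent()):
        print(type(agent).__name__, await agent.get_memory_data())


asyncio.run(main())
```

With this layout the three JSON nodes would each reference the shared class instead of carrying their own copy of the code blob.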
src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

1135-1145: `memory_inputs` list is now redundant and can be removed

`memory_inputs` is still built even though the list is no longer injected into `inputs` and is only referenced in dead-code comments below. Retaining unused symbols confuses maintenance and bloats the byte-code.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

Safe to drop the line altogether.
src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (2)
1242-1253: Stale comments keep the intent unclear

The `inputs` list still contains the commented placeholder `# *memory_inputs`. After the refactor, this line should be deleted altogether so that future maintainers don't wonder whether it was forgotten.
1447-1464: `n_messages` default of 100 is undocumented for users

The UI exposes "Number of Messages" but the tooltip only says "Number of messages to retrieve." Consider adding the default value to the info text (one possible rendering is sketched below) or making `value` empty so that the backend decides.
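One way to surface the default is to fold it into the tooltip. This is sketched with a dataclass stand-in, since the real `langflow.io.IntInput` isn't importable here; the attribute names mirror the field quoted later in this review:

```python
from dataclasses import dataclass


@dataclass
class IntInputStub:
    """Toy stand-in for langflow.io.IntInput."""

    name: str
    display_name: str
    info: str
    value: int
    advanced: bool = True
    show: bool = True


n_messages = IntInputStub(
    name="n_messages",
    display_name="Number of Messages",
    # State the default so users see it without reading the backend code.
    info="Number of messages to retrieve (default: 100).",
    value=100,
)
print(n_messages.info)
```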
src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2)

1475-1486: Remove `memory_inputs` helper – it's now dead code

`memory_inputs` is still computed but no longer injected into `inputs`.
Keeping an unused comprehension:

```python
memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

needlessly instantiates `MemoryComponent` during class import and confuses future maintainers. The line, followed by the commented-out expansion, can be deleted entirely:

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

This is a safe, behaviour-neutral cleanup.

src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (2)
2023-2050: Debug `print`s leak user data

`print(self.chat_history)` and `print(f"Session ID: {…}")` mix user data into stdout.
They bypass the structured `logger` already in place and may surface PII in production logs.

```diff
-print(self.chat_history)
-…
-print(f"Session ID: {self.graph.session_id}")
+logger.debug(self.chat_history)
+…
+# No need to log the session id separately; it is already present in the
+# subsequent history-dump log entry.
```

2023-2050: `memory_inputs` list is now dead code

`memory_inputs` is still computed but no longer used after the refactor (the expansion in `inputs` is commented out). Remove it to avoid confusion:

```diff
- memory_inputs = [set_advanced_true(component_input)
-                  for component_input in MemoryComponent().inputs]
```

src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1)
1358-1375: Remove now-unused `memory_inputs` artefacts

`memory_inputs` is still created:

```python
memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

but no longer referenced (the list is commented out in `inputs`). Leaving it behind is dead code and risks future confusion.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

Delete the line (and the adjacent commented `# *memory_inputs,`) to keep the component definition lean.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)
1489-1505: `memory_inputs` list is now orphan code – remove to avoid dead weight

You still create `memory_inputs = [...]` but never reference it after commenting out `*memory_inputs` in `inputs`.
Keeping mutated but unused global data structures:

- adds cognitive overhead for future maintainers who will wonder "why is this here?",
- incurs an unnecessary `MemoryComponent()` construction at import time.

```diff
-class AgentComponent(ToolCallingAgentComponent):
-    ...
-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+class AgentComponent(ToolCallingAgentComponent):
+    ...  # (delete the line above completely)
```

1510-1520: Stale commented code – delete to keep repo lean

Large commented blocks (`# memory_kwargs …`) linger in `get_memory_data`. They no longer document behaviour but clutter an already long JSON blob. Recommend excising them unless you plan to resurrect the feature.

src/backend/base/langflow/components/agents/agent.py (1)
29-29: Fix formatting issue and consider maintainability

Static analysis correctly identifies a formatting issue - there should be 2 blank lines after the function definition above.
Additionally, the hardcoded provider list could become out of sync if providers are added elsewhere. Consider whether this should reference a centralized constant.
Apply this diff to fix the formatting:

```diff
+ MODEL_PROVIDERS_LIST = ["Anthropic", "Google Generative AI", "Groq", "OpenAI"]
```
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)

1315-1330: `memory_inputs` is now dead code – remove to avoid confusion and linter noise

`memory_inputs` is still declared but, after commenting out `*memory_inputs` from the `inputs` list, it is never referenced again.
Leaving unused variables around is a maintenance smell and will trigger flake8/pylint warnings.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

Simply delete the line (or the whole comprehension if you plan to re-enable it later).
1335-1355: Replace `print(...)` with proper logging

Two plain prints (`print(self.chat_history)` in `message_response` and `print(f"Session ID: {self.graph.session_id}")` in `get_memory_data`) bypass the project's structured logging and will spam stdout in production.

```diff
- print(self.chat_history)
+ logger.debug("Chat history: %s", self.chat_history)
```

```diff
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug("Session ID: %s", self.graph.session_id)
```

Using the shared logger keeps output consistent and makes it filterable by log level.
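The `%s` style used in these suggestions is deliberate: `logging` defers formatting until a record is actually emitted, so a suppressed `debug` call never renders a large chat history. A runnable illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG records will be filtered out
logger = logging.getLogger("agent")


class ExpensiveHistory:
    def __str__(self) -> str:
        print("rendering chat history...")  # side effect shows when formatting happens
        return "<huge chat history>"


logger.debug("Chat history: %s", ExpensiveHistory())  # filtered: __str__ never runs
logger.info("Chat history: %s", ExpensiveHistory())   # emitted: rendered once, at log time
```

Note: the argument object is still constructed either way; only the string formatting is deferred.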
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
- src/backend/base/langflow/base/agents/agent.py (1 hunks)
- src/backend/base/langflow/components/agents/agent.py (6 hunks)
- src/backend/base/langflow/components/helpers/memory.py (2 hunks)
- src/backend/base/langflow/custom/custom_component/component.py (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Financial Agent.json (2 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Social Media Agent.json (1 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (3 hunks)
- src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
`src/backend/**/*component*.py`: In your component class, set the 'icon' attribute to a string matching the frontend icon mapping exactly (case-sensitive).

- src/backend/base/langflow/custom/custom_component/component.py

`src/backend/base/langflow/components/**/*`: Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/.

- src/backend/base/langflow/components/helpers/memory.py
- src/backend/base/langflow/components/agents/agent.py

`src/backend/**/components/**/*.py`: In your component class, set the 'icon' attribute to a string matching the frontend icon mapping exactly (case-sensitive).

- src/backend/base/langflow/components/helpers/memory.py
- src/backend/base/langflow/components/agents/agent.py
🪛 Ruff (0.11.9)
src/backend/base/langflow/components/helpers/memory.py
219-219: print found
Remove print
(T201)
221-221: print found
Remove print
(T201)
228-228: Add quotes to type expression in typing.cast()
Add quotes
(TC006)
🪛 Flake8 (7.2.0)
src/backend/base/langflow/components/agents/agent.py
[error] 29-29: expected 2 blank lines after class or function definition, found 1
(E305)
🔇 Additional comments (24)
src/backend/base/langflow/custom/custom_component/component.py (1)
218-218: LGTM: Session ID standardization

This change correctly standardizes session ID retrieval to use `self.graph.session_id` instead of `self.session_id`, aligning with the broader refactoring effort for consistent session-based memory handling across the system.

src/backend/base/langflow/base/agents/agent.py (1)
143-146: LGTM: Type-aware chat history processing

The conditional logic correctly handles different chat history types:

- `Data` objects are converted directly using `data_to_messages()`
- Lists of `Message` objects are converted to `Data` first, then to messages

This type-aware approach ensures robust handling of various chat history formats.
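A self-contained rendering of that branching follows; `Data` and `Message` are minimal stand-ins, and `data_to_messages` is simplified to show the shape of the logic rather than langflow's exact implementation:

```python
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for langflow.schema.message.Message."""

    text: str


@dataclass
class Data:
    """Stand-in for langflow's Data container."""

    records: list

    def data_to_messages(self) -> list:
        return [Message(r["text"]) for r in self.records]


def to_messages(chat_history) -> list:
    # Data converts directly; a list of Message goes through Data first.
    if isinstance(chat_history, Data):
        return chat_history.data_to_messages()
    if isinstance(chat_history, list) and all(isinstance(m, Message) for m in chat_history):
        return Data([{"text": m.text} for m in chat_history]).data_to_messages()
    raise TypeError(f"Unsupported chat history type: {type(chat_history)!r}")


print(to_messages(Data([{"text": "hi"}])))  # [Message(text='hi')]
print(to_messages([Message("hello")]))      # [Message(text='hello')]
```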
src/backend/base/langflow/components/helpers/memory.py (4)
79-79: LGTM: Improved UI visibility

Making the "Number of Messages" input visible (`show=True`) improves user experience by exposing this important control in the UI, aligning with the standardization effort.

227-227: Clarify commented status assignment

The commented `self.status = stored` assignment should either be removed if intentional or uncommented with explanation. If this is temporary for debugging, add a TODO comment explaining when it should be restored.
Could you clarify whether this status assignment should be removed permanently or if there's a specific reason it's commented out?

232-232: Clarify commented status assignment

Similar to line 227, this commented `self.status = stored_text` assignment needs clarification about its intended state.
Is this status assignment intentionally disabled, or should it be restored?

200-226: I'll pull the actual code around those lines to confirm both branches use the same slicing logic and produce consistent ordering.

```bash
#!/bin/bash
sed -n '180,260p' src/backend/base/langflow/components/helpers/memory.py
```

src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (1)
924-939: Validate the type returned by `retrieve_messages()` before passing to the agent

`MemoryComponent().retrieve_messages()` returns `list[Message]`, while the upstream `ToolCallingAgentComponent.set(..., chat_history=...)` historically expects `list[str]` (see its signature).
If the base class hasn't been updated accordingly this will raise at runtime.
Please confirm and, if needed, convert to text:

```python
history = await MemoryComponent(...).set(session_id=self.graph.session_id).retrieve_messages()
if history and isinstance(history[0], Message):
    history = [msg.text for msg in history]
self.chat_history = history
```

src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1)
1777-1810: `retrieve_messages()` may return raw `Message` objects – double-check downstream expectations

Previously the call was `retrieve_messages_as_text()`, feeding a list of strings to the agent.
Now:

```python
return await MemoryComponent(...).set(session_id=...).retrieve_messages()
```

returns `list[Message]` (or similar). Ensure `self.set(..., chat_history=self.chat_history, ...)` and the LLM/agent chain can consume this type without additional conversion; otherwise transform to the expected schema before passing.

src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (2)
520-550: Session-ID debug `print`

`print(f"Session ID: {self.graph.session_id}")` is another stray stdout call.
Either log with the existing logger at `debug` level or remove.

```diff
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug("Using session_id=%s", self.graph.session_id)
```

[ suggest_nitpick ]

540-560: `n_messages`, `order`, … are ignored when retrieving memory

`get_memory_data()` forwards only `session_id`:

```python
MemoryComponent(...).set(session_id=...).retrieve_messages()
```

Inputs `n_messages`, `order`, `sender`, etc. remain unused, making the new UI controls ineffective.

```diff
-return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+return await (
+    MemoryComponent(**self.get_base_args())
+    .set(
+        session_id=self.graph.session_id,
+        n_messages=self.n_messages,
+        order=self.order,
+        sender=self.sender if hasattr(self, "sender") else None,
+    )
+    .retrieve_messages()
+)
```

Please ensure the MemoryComponent signature supports these args.
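If those filters are forwarded again, the old code blob's trick of dropping unset values before calling `set(...)` is worth keeping, so `None` never overrides a backend default. A generic sketch:

```python
def build_memory_kwargs(**candidates: object) -> dict:
    # Forward only filters the user actually set; None means "not configured".
    return {k: v for k, v in candidates.items() if v is not None}


kwargs = build_memory_kwargs(
    session_id="s1",
    n_messages=50,
    order="Ascending",
    sender=None,       # unset in the UI, so dropped
    sender_name=None,  # unset in the UI, so dropped
)
print(kwargs)  # {'session_id': 's1', 'n_messages': 50, 'order': 'Ascending'}
```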
src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (4)
1298-1308: `memory_inputs` is now dead code – drop it

`memory_inputs` is still defined but no longer used after commenting out `*memory_inputs` in `inputs`.
Keeping unused variables hurts readability and will trigger linters.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

[ suggest_nitpick ]

1340-1356: `get_memory_data` can raise `AttributeError` when `graph` or `session_id` is missing

`self.graph.session_id` is accessed directly. When the component is executed outside of a flow context (tests, CLI, etc.) `graph` might be absent or `session_id` could be `None`, leading to a hard crash.

```diff
- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ session_id = getattr(getattr(self, "graph", None), "session_id", None)
+ if not session_id:
+     logger.warning("Session ID not found – falling back to default memory scope")
+ return await (
+     MemoryComponent(**self.get_base_args())
+     .set(session_id=session_id)
+     .retrieve_messages()
+ )
```

[ suggest_essential_refactor ]

1336-1343: Replace raw `print`s with the logger

```diff
- print(self.chat_history)
+ logger.debug(self.chat_history)
```

and

```diff
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug(f"Session ID: {self.graph.session_id}")
```

[ suggest_nitpick ]

1315-1321: Clean up large commented-out block

The legacy dynamic-memory logic is commented but left in the codebase. Keeping multi-line commented code makes maintenance harder; prefer removing it or moving to VCS history.

[ suggest_optional_refactor ]
src/backend/base/langflow/components/agents/agent.py (6)
19-19: Import addition looks good

The addition of `IntInput` to support the new `n_messages` field is appropriate and necessary.
45-45: Usage of MODEL_PROVIDERS_LIST looks correct

The replacement of the previous sorted set approach with the new constant is implemented correctly in both the options and options_metadata.
Also applies to: 49-49
59-66: Well-designed input field for message count control

The new `n_messages` IntInput provides explicit control over chat history retrieval with sensible defaults:

- Default of 100 messages should handle most use cases
- Properly marked as advanced but visible
- Clear, descriptive info text

This replaces the previous dynamic memory input approach effectively.
68-69: Appropriate removal of dynamic memory inputs

The commented-out memory inputs align with the refactor objectives to standardize memory handling using session IDs instead of dynamic configuration.
91-92: Good defensive programming for chat history validation

The validation ensures that `chat_history` is always a list, handling cases where `get_memory_data()` might return a single `Message` instance. This prevents potential type errors downstream.
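The guard itself is a one-liner; as a plain function it runs standalone:

```python
def ensure_list(history):
    # get_memory_data() may hand back a single message; callers expect a list.
    return history if isinstance(history, list) else [history]


print(ensure_list(["a", "b"]))  # ['a', 'b']
print(ensure_list("single"))    # ['single']
```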
127-131: Simplified memory retrieval looks correct

The refactored `get_memory_data()` method is much cleaner and aligns with the standardization objectives:

- Uses `self.graph.session_id` for session-based retrieval
- Leverages the new `n_messages` parameter for explicit control
- Sets consistent "Ascending" order
- Removes complex dynamic memory input filtering

This should be more reliable and maintainable than the previous approach.
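The chained `set(...).retrieve_messages()` call seen throughout the diff is a fluent-builder pattern: `set` stores parameters and returns `self`. Below is a toy version, not langflow's actual class, mimicking the ascending/descending slicing described elsewhere in this review:

```python
import asyncio


class MemoryComponentStub:
    """Toy fluent builder; langflow's real MemoryComponent differs."""

    def __init__(self):
        self._params = {}

    def set(self, **kwargs) -> "MemoryComponentStub":
        self._params.update(kwargs)
        return self  # returning self enables the chained call style

    async def retrieve_messages(self) -> list:
        n = self._params.get("n_messages", 100)
        order = self._params.get("order", "Ascending")
        messages = [f"msg{i}" for i in range(5)]
        # Ascending keeps the newest (tail) messages, mirroring the review's slicing.
        return messages[-n:] if order == "Ascending" else messages[:n]


async def main() -> None:
    history = await (
        MemoryComponentStub()
        .set(session_id="s1", n_messages=3, order="Ascending")
        .retrieve_messages()
    )
    print(history)  # ['msg2', 'msg3', 'msg4']


asyncio.run(main())
```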
src/backend/base/langflow/initial_setup/starter_projects/Financial Agent.json (4)
2507-2524: `memory_inputs` is now dead code – remove it to avoid confusion

`memory_inputs` is still built from `MemoryComponent().inputs`, yet it is never used after the recent refactor (the only reference is inside a commented-out block).
Keeping the list around inflates startup time (instantiation of a heavy component) and invites future mistakes when developers try to "revive" it.

```diff
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
```

[ suggest_essential_refactor ]
2565-2578: Printing sensitive data to stdout – replace with a logger `.debug`

Two `print()` statements were introduced:

- `print(self.chat_history)`
- `print(f"Session ID: {self.graph.session_id}")`

Plain prints
• leak PII/chat data to container logs,
• bypass the project-wide log level,
• are expensive when chat history grows.

```diff
- print(self.chat_history)
- ...
- print(f"Session ID: {self.graph.session_id}")
+ logger.debug("Chat history retrieved (len=%s)", len(self.chat_history))
+ ...
+ logger.debug("Session ID resolved: %s", self.graph.session_id)
```

[ raise_critical_issue ]
2578-2594: `get_memory_data` ignores user-configurable filters (`n_messages`, `order`, …)

After the refactor only `session_id` is forwarded to `MemoryComponent`.
Fields still present in the template (`n_messages`, `order`, `sender`, `sender_name`) are silently ignored, so the UI settings have no effect.
Consider forwarding the relevant arguments:

```diff
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ return await (
+     MemoryComponent(**self.get_base_args())
+     .set(
+         session_id=self.graph.session_id,
+         n_messages=self.n_messages,
+         order=self.order,
+         sender=self.sender or None,
+         sender_name=self.sender_name or None,
+     )
+     .retrieve_messages()
+ )
```

Failing to do so can cause very large histories to be loaded unintentionally and break paging logic.

[ suggest_essential_refactor ]
3290-3310: Exact code duplication between the two Agent components

The full 300-line Python payload under both "Agent" nodes (`Agent-J7aBZ` and `Agent-oCAnt`) is identical.
Keeping two divergent copies will inevitably lead to one being patched while the other drifts.
Extract the class into a reusable module and reference it from the JSON, or at minimum keep only one copy and let the other node import it.

[ offer_architecture_advice ]
| print("len stored Before", len(stored)) | ||
| stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages] | ||
| print("len stored After", len(stored)) |
Remove debug print statements

Debug print statements should be removed before merging to production. These can clutter logs and expose internal state information.

```diff
- print("len stored Before", len(stored))
  stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages]
- print("len stored After", len(stored))
```

📝 Committable suggestion:

```python
stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages]
```
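For reference, that slice keeps the tail of the list for ascending order and the head otherwise; a quick check:

```python
stored = ["m1", "m2", "m3", "m4", "m5"]
n_messages = 2

print(stored[-n_messages:])  # ASC: keep the most recent messages -> ['m4', 'm5']
print(stored[:n_messages])   # otherwise: keep the earliest ones  -> ['m1', 'm2']
```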
🧰 Tools
🪛 Ruff (0.11.9)
219-219: print found
Remove print
(T201)
221-221: print found
Remove print
(T201)
🤖 Prompt for AI Agents
In src/backend/base/langflow/components/helpers/memory.py around lines 219 to
221, remove the debug print statements that output the length of the stored list
before and after slicing. These print statements are for debugging purposes and
should be deleted to avoid cluttering production logs and exposing internal
state.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n memory_kwargs = {\n component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n }\n # filter out empty values\n memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, 
\"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: 
v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n" | ||
| "value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n # removed memory inputs from agent component\n # *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n print(self.chat_history)\n logger.info(f\"Chat history: {self.chat_history}\")\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n # memory_kwargs = {\n # component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n # }\n # # filter out empty values\n # memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n # return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages_as_text()\n print(f\"Session ID: {self.graph.session_id}\")\n return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> 
dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, \"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if 
isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n" |
n_messages input never used – history size is now uncontrolled
You introduced an n_messages IntInput (line 1694), but get_memory_data() (inside the value code blob) calls
await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
without forwarding n_messages.
Users will assume the slider limits the retrieved history, yet the component will ignore it and pull the full table (or whatever default the helper uses), hurting performance and blowing out context length with some LLMs.
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ return await (
+ MemoryComponent(**self.get_base_args())
+ .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+ .retrieve_messages()
+ )

Please propagate the parameter (and consider order, sender, etc. if you still want that flexibility).
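As a rough sketch of that suggestion, assuming MemoryComponent.set accepts these keyword arguments as shown in the diff; the order and sender option values below are illustrative assumptions, not confirmed by this PR:

```python
from langflow.components.helpers.memory import MemoryComponent

# Method sketch for AgentComponent.get_memory_data; "Ascending" and
# "Machine and User" are assumed option values, not verified here.
async def get_memory_data(self):
    return await (
        MemoryComponent(**self.get_base_args())
        .set(
            session_id=self.graph.session_id,
            n_messages=self.n_messages,
            order="Ascending",
            sender="Machine and User",
        )
        .retrieve_messages()
    )
```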
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Social Media
Agent.json at line 1457, the get_memory_data() method does not use the
n_messages input to limit the size of the retrieved chat history, causing
uncontrolled history size and potential performance issues. To fix this, modify
get_memory_data() to accept and forward the n_messages parameter (and optionally
order, sender, etc.) when calling MemoryComponent().set().retrieve_messages(),
ensuring the retrieved memory respects the user's input limit.
| "title_case": false, | ||
| "type": "code", | ||
| "value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n memory_kwargs = {\n component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n }\n # filter out empty values\n memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, 
\"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: 
v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n" | ||
| "value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n # removed memory inputs from agent component\n # *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n print(self.chat_history)\n logger.info(f\"Chat history: {self.chat_history}\")\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n # memory_kwargs = {\n # component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n # }\n # # filter out empty values\n # memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n # return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages_as_text()\n print(f\"Session ID: {self.graph.session_id}\")\n return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> 
dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, \"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if 
isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n" |
🛠️ Refactor suggestion
Remove dead code, use proper logging, and honour the n_messages parameter
Inside the embedded AgentComponent:
- memory_inputs is still declared but no longer used after the dynamic-memory refactor.
- Two bare print() calls were added for debugging. These bypass the project-wide logging facilities and will pollute stdout in production.
- self.n_messages was introduced in the UI schema yet never forwarded to MemoryComponent, so the new slider has no effect.
A small clean-up addresses all three:
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+ # Dynamic memory inputs were removed; keep the codebase tidy.
+ # memory_inputs = [...] (deleted – no longer required)
@@
- self.chat_history = await self.get_memory_data()
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ self.chat_history = await self.get_memory_data()
+ logger.debug(f"Chat history: {self.chat_history}")
@@
- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ logger.debug(f"Session ID: {self.graph.session_id}")
+ return await (
+ MemoryComponent(**self.get_base_args())
+ .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+ .retrieve_messages()
+ )

Benefits: clearer code, consistent logging, and the UI control for message count actually works.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n # removed memory inputs from agent component\n # *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n print(self.chat_history)\n logger.info(f\"Chat history: {self.chat_history}\")\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n # memory_kwargs = {\n # component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n # }\n # # filter out empty values\n # memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n # return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages_as_text()\n print(f\"Session ID: {self.graph.session_id}\")\n return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> 
dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, \"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if 
isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n" | |
| --- a/src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json | |
| +++ b/src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json | |
| @@ class AgentComponent(ToolCallingAgentComponent): | |
| - memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs] | |
| + # Dynamic memory inputs were removed; keep the codebase tidy. | |
| + # memory_inputs = [...] (deleted – no longer required) | |
| @@ async def message_response(self) -> Message: | |
| - self.chat_history = await self.get_memory_data() | |
| - print(self.chat_history) | |
| - logger.info(f"Chat history: {self.chat_history}") | |
| + self.chat_history = await self.get_memory_data() | |
| + logger.debug(f"Chat history: {self.chat_history}") | |
| @@ async def get_memory_data(self): | |
| - print(f"Session ID: {self.graph.session_id}") | |
| - return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages() | |
| + logger.debug(f"Session ID: {self.graph.session_id}") | |
| + return await ( | |
| + MemoryComponent(**self.get_base_args()) | |
| + .set(session_id=self.graph.session_id, n_messages=self.n_messages) | |
| + .retrieve_messages() | |
| + ) |
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json
at line 1042 within the AgentComponent class, remove the unused memory_inputs
declaration to eliminate dead code, replace all print() statements with
appropriate logger calls to maintain consistent logging practices, and modify
the call to MemoryComponent in get_memory_data to pass self.n_messages so that
the UI slider controlling message count is respected and functional.
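For context, a minimal sketch of what the n_messages declaration referenced above might look like, using langflow's IntInput field; the display name and default value here are illustrative:

```python
from langflow.io import IntInput

# Hypothetical declaration of the message-count control; until it is
# forwarded to MemoryComponent, the value is collected but has no effect.
IntInput(
    name="n_messages",
    display_name="Number of Chat History Messages",
    value=100,
    info="Number of chat history messages to retrieve.",
    advanced=True,
)
```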
@coderabbitai Please review the PR. Do not review the JSONs.
Tip: For best results, initiate chat on the files or code changes.

@edwinjosechittilappilly, I'll review the Python files in this PR while skipping the JSON files as requested. This aligns with the preference that JSON files are autogenerated and don't require review.

Code Review

🔍 Overall Assessment

This PR implements a significant refactoring of agent and memory handling logic. The changes standardize memory retrieval using session IDs and improve chat history processing. The refactoring appears well-coordinated across multiple components.

📝 Key Changes Reviewed

1. Agent Base Class (
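Abridged from the embedded component code in the diffs above, a sketch of the standardized flow the assessment summarizes; error handling, the date tool, and imports are omitted for brevity:

```python
async def message_response(self) -> Message:
    # Resolve the model, pull the session-scoped history once, then run
    # the agent with it; history is keyed by graph.session_id rather than
    # per-component memory inputs.
    llm_model, display_name = self.get_llm()
    self.model_name = get_model_name(llm_model, display_name=display_name)
    self.chat_history = await self.get_memory_data()
    self.set(
        llm=llm_model,
        tools=self.tools or [],
        chat_history=self.chat_history,
        input_value=self.input_value,
        system_prompt=self.system_prompt,
    )
    agent = self.create_agent_runnable()
    return await self.run_agent(agent)
```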
ogabrielluiz
left a comment
LGTM.
I'm not sure about the logger.info call, though. If an agent is being called in a production environment, we will see way too many of those logs.
Good Catch! I added those for debug! Let me change it to logger.debug!
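For reference, a minimal sketch of the difference, assuming langflow's logger filters at INFO by default (the default level is an assumption here):

```python
from langflow.logging import logger

chat_history = ["example message"]  # placeholder data for the sketch

# At a default INFO level, debug records are filtered out, so per-request
# history dumps stay out of production logs:
logger.debug(f"Chat history: {chat_history}")  # visible only when the level is DEBUG
logger.info("Agent run completed")             # visible at the default level
```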
mfortman11
left a comment
LGTM just a couple nits
| input_dict["system_prompt"] = self.system_prompt | ||
| if hasattr(self, "chat_history") and self.chat_history: | ||
| input_dict["chat_history"] = data_to_messages(self.chat_history) | ||
| logger.info(f"Chat history: {self.chat_history}") |
cleanup logs
| # removed memory inputs from agent component | ||
| # *memory_inputs, |
are these needed?
Just to keep track since it is a major change. We can remove the comments if required.
…logic (#8715) * update chat history * update to agents * Update Simple Agent.json * update to templates * ruff errors * Update agent.py * Update test_agent_component.py * [autofix.ci] apply automated fixes * update templates * test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: Mike Fortman <[email protected]>
…ity attribute (#8667) * Update styleUtils.ts * update to prompt component * update to template * update to mcp component * update to smart function * [autofix.ci] apply automated fixes * update to templates * fix sidebar * change name * update import * update import * update import * [autofix.ci] apply automated fixes * fix import * fix ollama * fix ruff * refactor(agent): standardize memory handling and update chat history logic (#8715) * update chat history * update to agents * Update Simple Agent.json * update to templates * ruff errors * Update agent.py * Update test_agent_component.py * [autofix.ci] apply automated fixes * update templates * test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: Mike Fortman <[email protected]> * fix prompt change * feat(message): support sequencing of multiple streamable models (#8434) * feat: update OpenAI model parameters handling for reasoning models * feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator * refactor: remove assert_streaming_sequence method and related checks from Graph class * feat: add consume_iterator method to Message class for handling iterators * test: add unit tests for OpenAIModelComponent functionality and integration * feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method * feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text * feat: add is_connected_to_chat_output method to Component class for improved message handling * feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration * refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling * fix: update import paths for input components in multiple starter project JSON files * fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes * refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing * fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic * refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling * refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency * feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management * feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration * feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats * test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling * test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models * fix: reorder JSON properties for consistency in starter projects * Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability. 
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json. * refactor: simplify input_value type in LCModelComponent * Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability. * This change enhances the documentation and understanding of the expected input types for the component. * fix: clarify comment for handling source in Component class * refactor: remove unnecessary mocking in OpenAI model integration tests * auto update * update * [autofix.ci] apply automated fixes * fix openai import * revert template changes * test fixes * update templates * [autofix.ci] apply automated fixes * fix tests * fix order * fix prompts import * fix frontend tests * fix frontend * [autofix.ci] apply automated fixes * add charmander * [autofix.ci] apply automated fixes * fix prompt frontend * fix frontend * test fix * [autofix.ci] apply automated fixes * change pokedex * remove pokedex extra * update template * name fix * update template * mcp test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: cristhianzl <[email protected]> Co-authored-by: Yuqi Tang <[email protected]> Co-authored-by: Mike Fortman <[email protected]> Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
…ity attribute (#8667) * Update styleUtils.ts * update to prompt component * update to template * update to mcp component * update to smart function * [autofix.ci] apply automated fixes * update to templates * fix sidebar * change name * update import * update import * update import * [autofix.ci] apply automated fixes * fix import * fix ollama * fix ruff * refactor(agent): standardize memory handling and update chat history logic (#8715) * update chat history * update to agents * Update Simple Agent.json * update to templates * ruff errors * Update agent.py * Update test_agent_component.py * [autofix.ci] apply automated fixes * update templates * test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: Mike Fortman <[email protected]> * fix prompt change * feat(message): support sequencing of multiple streamable models (#8434) * feat: update OpenAI model parameters handling for reasoning models * feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator * refactor: remove assert_streaming_sequence method and related checks from Graph class * feat: add consume_iterator method to Message class for handling iterators * test: add unit tests for OpenAIModelComponent functionality and integration * feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method * feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text * feat: add is_connected_to_chat_output method to Component class for improved message handling * feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration * refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling * fix: update import paths for input components in multiple starter project JSON files * fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes * refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing * fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic * refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling * refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency * feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management * feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration * feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats * test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling * test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models * fix: reorder JSON properties for consistency in starter projects * Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability. 
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json. * refactor: simplify input_value type in LCModelComponent * Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability. * This change enhances the documentation and understanding of the expected input types for the component. * fix: clarify comment for handling source in Component class * refactor: remove unnecessary mocking in OpenAI model integration tests * auto update * update * [autofix.ci] apply automated fixes * fix openai import * revert template changes * test fixes * update templates * [autofix.ci] apply automated fixes * fix tests * fix order * fix prompts import * fix frontend tests * fix frontend * [autofix.ci] apply automated fixes * add charmander * [autofix.ci] apply automated fixes * fix prompt frontend * fix frontend * test fix * [autofix.ci] apply automated fixes * change pokedex * remove pokedex extra * update template * name fix * update template * mcp test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: cristhianzl <[email protected]> Co-authored-by: Yuqi Tang <[email protected]> Co-authored-by: Mike Fortman <[email protected]> Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
…logic (langflow-ai#8715) * update chat history * update to agents * Update Simple Agent.json * update to templates * ruff errors * Update agent.py * Update test_agent_component.py * [autofix.ci] apply automated fixes * update templates * test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: Mike Fortman <[email protected]>
…ity attribute (langflow-ai#8667) * Update styleUtils.ts * update to prompt component * update to template * update to mcp component * update to smart function * [autofix.ci] apply automated fixes * update to templates * fix sidebar * change name * update import * update import * update import * [autofix.ci] apply automated fixes * fix import * fix ollama * fix ruff * refactor(agent): standardize memory handling and update chat history logic (langflow-ai#8715) * update chat history * update to agents * Update Simple Agent.json * update to templates * ruff errors * Update agent.py * Update test_agent_component.py * [autofix.ci] apply automated fixes * update templates * test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: Mike Fortman <[email protected]> * fix prompt change * feat(message): support sequencing of multiple streamable models (langflow-ai#8434) * feat: update OpenAI model parameters handling for reasoning models * feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator * refactor: remove assert_streaming_sequence method and related checks from Graph class * feat: add consume_iterator method to Message class for handling iterators * test: add unit tests for OpenAIModelComponent functionality and integration * feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method * feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text * feat: add is_connected_to_chat_output method to Component class for improved message handling * feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration * refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling * fix: update import paths for input components in multiple starter project JSON files * fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes * refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing * fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic * refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling * refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency * feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management * feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration * feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats * test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling * test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models * fix: reorder JSON properties for consistency in starter projects * Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and 
maintainability. * Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json. * refactor: simplify input_value type in LCModelComponent * Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability. * This change enhances the documentation and understanding of the expected input types for the component. * fix: clarify comment for handling source in Component class * refactor: remove unnecessary mocking in OpenAI model integration tests * auto update * update * [autofix.ci] apply automated fixes * fix openai import * revert template changes * test fixes * update templates * [autofix.ci] apply automated fixes * fix tests * fix order * fix prompts import * fix frontend tests * fix frontend * [autofix.ci] apply automated fixes * add charmander * [autofix.ci] apply automated fixes * fix prompt frontend * fix frontend * test fix * [autofix.ci] apply automated fixes * change pokedex * remove pokedex extra * update template * name fix * update template * mcp test fix --------- Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> Co-authored-by: cristhianzl <[email protected]> Co-authored-by: Yuqi Tang <[email protected]> Co-authored-by: Mike Fortman <[email protected]> Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
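The langflow-ai#8434 entries in the same log center on consuming streamed model output (`consume_iterator`, later `_handle_stream`) and folding the chunks back into a single message. A sketch of that consume-and-accumulate shape, using only the standard library — `handle_stream` and `fake_model_stream` are hypothetical names, not the actual LCModelComponent API:

```python
# Hypothetical sketch of stream handling; not the actual LCModelComponent code.
from collections.abc import Iterator


def fake_model_stream() -> Iterator[str]:
    """Stand-in for a model that yields text chunks as they are generated."""
    yield from ["Hel", "lo, ", "world"]


def handle_stream(chunks: Iterator[str]) -> str:
    """Forward each chunk as it arrives, then return the assembled text."""
    parts: list[str] = []
    for chunk in chunks:
        parts.append(chunk)                # accumulate for the final message
        print(chunk, end="", flush=True)   # forward incrementally to the consumer
    print()
    return "".join(parts)


final_text = handle_stream(fake_model_stream())
assert final_text == "Hello, world"
```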
Summary by CodeRabbit

* New Features
* Improvements
* Bug Fixes
* Documentation
* Chores