
Conversation


@edwinjosechittilappilly edwinjosechittilappilly commented Jun 24, 2025

Summary by CodeRabbit

  • New Features

    • Added an option to control the number of chat history messages retrieved for agents.
    • Made the "Number of Messages" input visible in memory settings.
  • Improvements

    • Simplified memory retrieval for agents by using session-based history instead of dynamic memory input fields.
    • Standardized model provider selection in agent setup.
    • Enhanced chat history handling and logging for better transparency.
  • Bug Fixes

    • Ensured consistent handling of single and multiple chat history messages.
  • Documentation

    • Updated descriptions and prompts for several starter agent projects.
  • Chores

    • Removed unused or redundant memory input fields from agent configurations.

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Jun 24, 2025

coderabbitai bot commented Jun 24, 2025

Important

Review skipped

Automatic incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This update refactors the agent and memory handling logic across the backend, agent components, and multiple starter project configurations. It removes dynamic memory input fields from agent components, standardizes memory retrieval using session IDs, updates chat history handling, and introduces explicit message count control. Debug logging for chat history is also added.

Changes

  • src/backend/base/langflow/base/agents/agent.py: Refined chat_history handling in run_agent to differentiate conversion logic based on type and content, with added logging.
  • src/backend/base/langflow/components/agents/agent.py: Added explicit model provider list, introduced integer input for message count, removed dynamic memory inputs, refactored memory data retrieval, and standardized chat history handling.
  • src/backend/base/langflow/components/helpers/memory.py: Made "Number of Messages" input visible, adjusted message slicing logic, set internal memory fetch limit to 10,000, added debug prints, and commented out status updates.
  • src/backend/base/langflow/custom/custom_component/component.py: Changed session ID retrieval in get_base_args to use self.graph.session_id instead of self.session_id.
  • src/backend/base/langflow/initial_setup/starter_projects/... (all starter project JSONs): Removed memory_inputs from agent inputs, refactored memory retrieval to use session ID from graph, added debug logging/printing for chat history, and updated internal logic accordingly.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant AgentComponent
    participant MemoryComponent

    User->>AgentComponent: Submit input
    AgentComponent->>MemoryComponent: retrieve_messages(session_id, n_messages)
    MemoryComponent-->>AgentComponent: Return chat history
    AgentComponent->>AgentComponent: Log/print chat history
    AgentComponent->>AgentComponent: Run agent logic with chat history
    AgentComponent-->>User: Return response

Suggested labels

refactor, size:XXL, lgtm

Suggested reviewers

  • edwinjosechittilappilly
  • mfortman11
  • ogabrielluiz

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Jun 24, 2025
@coderabbitai coderabbitai bot changed the title from @coderabbitai to refactor(agent): standardize memory handling and update chat history logic Jun 24, 2025
@github-actions github-actions bot added the refactor Maintenance tasks and housekeeping label Jun 24, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🔭 Outside diff range comments (14)
src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (1)

870-888: memory_inputs is now dead code – remove it to avoid confusion

memory_inputs is still being built but never used after you commented out *memory_inputs from the inputs list.
Keeping an unused large list (it clones every MemoryComponent input) increases memory footprint and misleads future readers into thinking the feature is still active.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+    # memory inputs were removed from the public API – delete the helper altogether
src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1)

1777-1810: Remove print debugging & rely on logger; avoid leaking PII

Two print(...) statements were added (print(self.chat_history) and print(f"Session ID: ...")).
In a server context this dumps full chat history—including possible PII—to stdout, bypassing the project’s logging level/rotation controls.

-            print(self.chat_history)
+            logger.debug("Chat history loaded (%d messages)", len(self.chat_history) if self.chat_history else 0)
...
-        print(f"Session ID: {self.graph.session_id}")
+        logger.debug("Retrieving chat history for session_id=%s", self.graph.session_id)

Replace raw prints with logger.debug/info as above.

src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (1)

485-520: Printing full chat history is a privacy & log-noise hazard

The print(self.chat_history) line writes potentially sensitive user content to stdout.
Stdout in production often ends up in aggregated logs where PII retention policies are hard to enforce.

-            self.chat_history = await self.get_memory_data()
-            print(self.chat_history)
-            logger.info(f"Chat history: {self.chat_history}")
+            self.chat_history = await self.get_memory_data()
+            logger.debug("Loaded %s chat messages", len(self.chat_history))

Switch to a concise logger.debug (or drop entirely) and avoid dumping raw messages.
Same pattern appears in every AgentComponent clone—please scrub them all.

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

1168-1188: get_memory_data ignores user-supplied memory options and uses print instead of logging

  1. The call to MemoryComponent().set(...) only forwards session_id.
    order, n_messages, memory, sender, etc. — now exposed as first-class inputs in the template — are silently discarded, so the user’s UI settings have no effect.
    • This is a functional regression: chat history length / ordering can no longer be controlled.

  2. A raw print is left in the async path; production code should rely on the project logger to avoid stdout noise and to respect logging configuration.

-        print(f"Session ID: {self.graph.session_id}")
-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        logger.debug("Retrieving chat history (session=%s, n_messages=%s, order=%s)",
+                     self.graph.session_id, self.n_messages, self.order)
+
+        memory_component = (
+            MemoryComponent(**self.get_base_args())
+            .set(
+                session_id=self.graph.session_id,
+                n_messages=self.n_messages,
+                order=self.order,
+                memory=self.memory,  # external handle if provided
+            )
+        )
+        return await memory_component.retrieve_messages()

Restoring the missing parameters reinstates advertised functionality and removes the stray print.

src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (3)

1228-1236: memory_inputs is calculated but never consumed – dead code

The list-comprehension assigns memory_inputs, yet all downstream uses have been commented out. Keeping an unused symbol invites confusion and makes static analysis complain.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+    # memory_inputs removed; drop stale variable

1339-1349: print statements leak PII and pollute stdout

print(self.chat_history) and print(f"Session ID: …") should be removed (or switched to structured logging with the proper log-level). Dumping entire chat histories can easily expose sensitive data and will spam server logs.

-        print(self.chat_history)
-        ...
-        print(f"Session ID: {self.graph.session_id}")

1349-1354: n_messages input never used – pass it to MemoryComponent

You introduced an explicit n_messages field (see template, line 1447 ff.), but get_memory_data ignores it. Forwarding the value keeps the new UX consistent and avoids fetching unbounded history.

-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+            .retrieve_messages()
+        )
src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (2)

1492-1499: Replace print with proper logging

message_response() now does:

print(self.chat_history)
logger.info(f"Chat history: {self.chat_history}")

Printing from within a library component pollutes stdout, breaks structured logging, and is hard to silence in production.
Use the existing logger at DEBUG level instead:

-            print(self.chat_history)
-            logger.info(f"Chat history: {self.chat_history}")
+            logger.debug("Chat history retrieved: %s", self.chat_history)

If INFO visibility is required, emit only the logger call.


1510-1521: Harden get_memory_data – avoid AttributeError and noisy stdout

Current code:

print(f"Session ID: {self.graph.session_id}")
return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()

Issues

  1. self.graph may be absent ⇒ AttributeError.
  2. Falls back to print, see previous comment.
  3. Ignores user-provided session_id attribute when the graph one is missing.
  4. No limit passed (n_messages), so large histories could be fetched unintentionally.

Recommended minimal fix:

-        print(f"Session ID: {self.graph.session_id}")
-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        session_id = getattr(self.graph, "session_id", None) or self.session_id
+        logger.debug("Fetching memory for session_id=%s", session_id)
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(session_id=session_id, n_messages=self.n_messages)
+            .retrieve_messages()
+        )

This preserves current behaviour while being safer and quieter.

src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (1)

2023-2050: n_messages is defined as an input but never honoured by get_memory_data

The new IntInput (n_messages) is intended to limit chat-history retrieval, yet the call below ignores it:

return await MemoryComponent(**self.get_base_args())\
    .set(session_id=self.graph.session_id).retrieve_messages()

Retrieving the full history nullifies the purpose of the flag and can hurt performance on long-running sessions.

Suggested patch:

-        print(f"Session ID: {self.graph.session_id}")
-        return await MemoryComponent(**self.get_base_args())\
-            .set(session_id=self.graph.session_id).retrieve_messages()
+        # Limit the amount of history fetched to avoid excessive context size
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+            .retrieve_messages()
+        )

(Also drops the print, see next comment.)

src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1)

1358-1400: Avoid print debugging – replace with structured logging and honour n_messages

The newly-added debug statements use bare print, and get_memory_data silently ignores the user-configurable n_messages limit.
Use the project logger for observability (already imported) and pass the limit through to MemoryComponent to prevent unbounded history retrieval.

-            print(self.chat_history)
-            logger.info(f"Chat history: {self.chat_history}")
+            logger.debug("Chat history: %s", self.chat_history)

-        print(f"Session ID: {self.graph.session_id}")
-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        logger.debug("Session ID: %s", self.graph.session_id)
+        return await MemoryComponent(**self.get_base_args()).set(
+            session_id=self.graph.session_id,
+            n_messages=self.n_messages,
+        ).retrieve_messages()

This eliminates stdout noise (important for serverless & production logs) and respects the explicit n_messages control added elsewhere in the refactor.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)

1525-1545: Replace print debugging with structured logging

print(self.chat_history) and print(f"Session ID: {self.graph.session_id}") leak to stdout in production, bypass log-level control and break Cloud-run/Lambda style collectors. You already import logger; use it:

-        print(self.chat_history)
-        logger.info(f"Chat history: {self.chat_history}")
+        logger.debug("Chat history: %s", self.chat_history)
...
-        print(f"Session ID: {self.graph.session_id}")
+        logger.debug("Session ID: %s", self.graph.session_id)

1535-1545: Pass n_messages into MemoryComponent so the new control actually works

n_messages was introduced as a user-surface knob but isn’t forwarded:

-return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+return await (
+    MemoryComponent(**self.get_base_args())
+    .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+    .retrieve_messages()
+)

Without this change the component silently ignores the user-selected limit.

src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1)

1365-1385: get_memory_data ignores user-controlled filters (n_messages, order, etc.)

The new implementation only forwards session_id, dropping the remaining inputs (order, sender filters, message limit). Users changing those fields in the UI will see no effect.

-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        session_id = getattr(self.graph, "session_id", None) or self.session_id
+
+        mem_component = (
+            MemoryComponent(**self.get_base_args())
+            .set(
+                session_id=session_id,
+                order=self.order,
+                n_messages=self.n_messages,
+                sender=self.sender,
+                sender_name=self.sender_name,
+            )
+        )
+        return await mem_component.retrieve_messages()

This keeps the simplified API (no dynamic input list) while still honouring the explicit fields surfaced on the Agent node.

♻️ Duplicate comments (2)
src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (2)

2788-2815: Same issues as commented above – the Local Expert Agent duplicates the exact implementation. Please apply the fixes to this block as well.


3553-3580: Same issues as commented above – the Travel Concierge Agent duplicates the exact implementation. Please apply the fixes to this block as well.

🧹 Nitpick comments (22)
src/backend/base/langflow/base/agents/agent.py (1)

142-142: Consider using debug level for chat history logging

While logging chat history is valuable for debugging, consider using logger.debug() instead of logger.info() to avoid cluttering production logs with potentially large chat history data.

-            logger.info(f"Chat history: {self.chat_history}")
+            logger.debug(f"Chat history: {self.chat_history}")
src/backend/base/langflow/components/helpers/memory.py (1)

228-228: Fix typing cast annotation

The static analyzer suggests adding quotes to the type expression in the cast for better type safety.

-        return cast(Data, stored)
+        return cast("Data", stored)
src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (3)

901-908: Replace print(self.chat_history) with structured logging

Raw print leaks PII/chat content to stdout (often collected by container logs) and cannot be filtered by log level.
Use the project logger at a suitable level instead:

-            print(self.chat_history)
-            logger.info(f"Chat history: {self.chat_history}")
+            logger.debug("Chat history: %s", self.chat_history)

930-934: Remove ad-hoc print of session ID

Same rationale as above—use the logger or drop entirely:

-        print(f"Session ID: {self.graph.session_id}")
+        logger.debug("Session ID: %s", self.graph.session_id)

890-920: Minor: comment describing removed validation is outdated

The inline note

# note the tools are not required to run the agent, hence the validation removed.

is ambiguous now that the validation logic has been removed. Consider deleting or rephrasing it to avoid confusion.

src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1)

1777-1810: memory_inputs becomes dead code – delete or reinstate

memory_inputs is still built:

memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

but the only consumer (*memory_inputs) has been commented out. Linters will flag this as unused variable and it confuses readers.

Options:

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

or, if the variable will return soon, prefix with underscore and add a short comment.

Removing keeps the class clean and prevents accidental drift.

src/backend/base/langflow/initial_setup/starter_projects/Social Media Agent.json (2)

1457-1457: Debug print statements will pollute stdout in production

message_response() and get_memory_data() contain bare print() calls (print(self.chat_history), print(f"Session ID: …")).
With the global logger already imported, switch to logger.debug (or drop them) to avoid noisy console output when Langflow is embedded or run under Gunicorn/Uvicorn.


1457-1457: Dead code / unused variable: memory_inputs

You still build

memory_inputs = [set_advanced_true(...) for ...]

but later commented out *memory_inputs in inputs.
That comprehension now executes on every import for no reason. Delete it or re-enable the feature to avoid wasted cycles and mental overhead.

src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (2)

505-510: memory_inputs is now dead code

After commenting out *memory_inputs in inputs, the memory_inputs list is never consumed.
Delete the variable to avoid confusion and save import time for MemoryComponent.


485-700: Heavy code duplication across three AgentComponents

The entire Python payload for Finance/Analysis/Research agents is identical.
Consider extracting this logic into a single reusable component to:

  • cut maintenance cost
  • eliminate copy-paste bugs (e.g., the stray prints above)
  • centralize future fixes (memory handling, logging, etc.)

Refactor suggestion: create a mix-in or base SequentialAgentComponent in a .py file and have the JSON nodes reference it.
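A minimal sketch of that extraction, using the reviewer's proposed SequentialAgentComponent name and the session-based retrieval pattern from this PR (illustrative only, not the shipped diff):

from langflow.components.helpers.memory import MemoryComponent
from langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent


class SequentialAgentComponent(ToolCallingAgentComponent):
    """Shared base for the Finance/Analysis/Research agent clones."""

    async def get_memory_data(self):
        # Session-scoped history, capped by the explicit n_messages input
        return await (
            MemoryComponent(**self.get_base_args())
            .set(session_id=self.graph.session_id, n_messages=self.n_messages)
            .retrieve_messages()
        )

Each JSON node would then reference this class instead of carrying its own copy of the payload, so fixes like removing the stray prints land in one place.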

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

1135-1145: memory_inputs list is now redundant and can be removed

memory_inputs is still built even though the list is no longer injected into inputs and is only referenced in dead-code comments below. Retaining unused symbols confuses maintenance and bloats the byte-code.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

Safe to drop the line altogether.

src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (2)

1242-1253: Stale comments keep the intent unclear

The inputs list still contains the commented placeholder # *memory_inputs. After the refactor, this line should be deleted altogether so that future maintainers don’t wonder whether it was forgotten.


1447-1464: n_messages default of 100 is undocumented for users

The UI exposes “Number of Messages” but the tooltip only says “Number of messages to retrieve.” Consider adding the default value to the info text or making value empty so that the backend decides.
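For example (wording assumed, not from the diff):

info="Number of messages to retrieve (default: 100)."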

src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (1)

1475-1486: Remove memory_inputs helper – it’s now dead code

memory_inputs is still computed but no longer injected into inputs.
Keeping an unused comprehension:

memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

needlessly instantiates MemoryComponent during class import and confuses future maintainers.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

followed by the commented-out expansion can be deleted entirely.
This is a safe, behaviour-neutral cleanup.

src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (2)

2023-2050: Debug print statements leak to STDOUT – use the project logger instead

print(self.chat_history) and print(f"Session ID: {…}") mix user data into stdout.
They bypass the structured logger already in place and may surface PII in production logs.

-print(self.chat_history)
-
-print(f"Session ID: {self.graph.session_id}")
+logger.debug(self.chat_history)
+
+# No need to log the session id separately; it is already present in the
+# subsequent history-dump log entry.

2023-2050: memory_inputs list is now dead code

memory_inputs is still computed but no longer used after the refactor (the expansion in inputs is commented out). Remove it to avoid confusion:

-    memory_inputs = [set_advanced_true(component_input)
-                     for component_input in MemoryComponent().inputs]
src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1)

1358-1375: Remove now-unused memory_inputs artefacts

memory_inputs is still created:

memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

but no longer referenced (the list is commented out in inputs). Leaving it behind is dead code and risks future confusion.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

Delete the line (and the adjacent commented # *memory_inputs,) to keep the component definition lean.

src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (2)

1489-1505: memory_inputs list is now orphan code – remove to avoid dead weight

You still create memory_inputs = [...] but never reference it after commenting out *memory_inputs in inputs.
Keeping mutated but unused global data structures:

  • adds cognitive overhead for future maintainers who will wonder “why is this here?”,
  • incurs an unnecessary MemoryComponent() construction at import time.
-class AgentComponent(ToolCallingAgentComponent):
-    ...
-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+class AgentComponent(ToolCallingAgentComponent):
+    ...
 # (delete the line above completely)

1510-1520: Stale commented code – delete to keep repo lean

Large commented blocks (# memory_kwargs …) linger in get_memory_data. They no longer document behaviour but clutter an already long JSON blob. Recommend excising them unless you plan to resurrect the feature.

src/backend/base/langflow/components/agents/agent.py (1)

29-29: Fix formatting issue and consider maintainability.

Static analysis correctly identifies a formatting issue: there should be two blank lines after the function definition above.

Additionally, the hardcoded provider list could become out of sync if providers are added elsewhere. Consider whether this should reference a centralized constant.

Apply this diff to fix the formatting:

+
 MODEL_PROVIDERS_LIST = ["Anthropic", "Google Generative AI", "Groq", "OpenAI"]
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)

1315-1330: memory_inputs is now dead code – remove to avoid confusion and linter noise

memory_inputs is still declared but, after commenting out *memory_inputs from the inputs list, it is never referenced again.
Leaving unused variables around is a maintenance smell and will trigger flake8/pylint warnings.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]

Simply delete the line (or the whole comprehension if you plan to re-enable it later).


1335-1355: Replace print(...) with proper logging

Two plain print calls (print(self.chat_history) in message_response and print(f"Session ID: {self.graph.session_id}") in get_memory_data) bypass the project’s structured logging and will spam stdout in production.

-            print(self.chat_history)
+            logger.debug("Chat history: %s", self.chat_history)
-        print(f"Session ID: {self.graph.session_id}")
+        logger.debug("Session ID: %s", self.graph.session_id)

Using the shared logger keeps output consistent and makes it filterable by log level.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 360c239 and 65a5773.

📒 Files selected for processing (18)
  • src/backend/base/langflow/base/agents/agent.py (1 hunks)
  • src/backend/base/langflow/components/agents/agent.py (6 hunks)
  • src/backend/base/langflow/components/helpers/memory.py (2 hunks)
  • src/backend/base/langflow/custom/custom_component/component.py (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Financial Agent.json (2 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Social Media Agent.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
`src/backend/**/*component*.py`: In your component class, set the 'icon' attribute to a string matching the frontend icon mapping exactly (case-sensitive).

  • src/backend/base/langflow/custom/custom_component/component.py

`src/backend/base/langflow/components/**/*`: Add new backend components to the appropriate subdirectory under src/backend/base/langflow/components/.

  • src/backend/base/langflow/components/helpers/memory.py
  • src/backend/base/langflow/components/agents/agent.py

`src/backend/**/components/**/*.py`: In your component class, set the 'icon' attribute to a string matching the frontend icon mapping exactly (case-sensitive).

  • src/backend/base/langflow/components/helpers/memory.py
  • src/backend/base/langflow/components/agents/agent.py
🪛 Ruff (0.11.9)
src/backend/base/langflow/components/helpers/memory.py

219-219: print found

Remove print

(T201)


221-221: print found

Remove print

(T201)


228-228: Add quotes to type expression in typing.cast()

Add quotes

(TC006)

🪛 Flake8 (7.2.0)
src/backend/base/langflow/components/agents/agent.py

[error] 29-29: expected 2 blank lines after class or function definition, found 1

(E305)

🔇 Additional comments (24)
src/backend/base/langflow/custom/custom_component/component.py (1)

218-218: LGTM: Session ID standardization

This change correctly standardizes session ID retrieval to use self.graph.session_id instead of self.session_id, aligning with the broader refactoring effort for consistent session-based memory handling across the system.

src/backend/base/langflow/base/agents/agent.py (1)

143-146: LGTM: Type-aware chat history processing

The conditional logic correctly handles different chat history types:

  • Data objects are converted directly using data_to_messages()
  • Lists of Message objects are converted to Data first, then to messages

This type-aware approach ensures robust handling of various chat history formats.
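A minimal sketch of that branching as an in-method fragment (Data, Message, and data_to_messages are names from this PR; the exact signatures and the attribute used to unwrap a Message into Data are assumptions):

if isinstance(self.chat_history, Data):
    # Data is converted directly
    chat_messages = data_to_messages(self.chat_history)
elif self.chat_history and isinstance(self.chat_history[0], Message):
    # list[Message]: wrap each entry as Data first, then convert
    chat_messages = data_to_messages([Data(data=message.data) for message in self.chat_history])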

src/backend/base/langflow/components/helpers/memory.py (4)

79-79: LGTM: Improved UI visibility

Making the "Number of Messages" input visible (show=True) improves user experience by exposing this important control in the UI, aligning with the standardization effort.


227-227: Clarify commented status assignment

The commented self.status = stored assignment should either be removed if intentional or uncommented with explanation. If this is temporary for debugging, add a TODO comment explaining when it should be restored.

Could you clarify whether this status assignment should be removed permanently or if there's a specific reason it's commented out?


232-232: Clarify commented status assignment

Similar to line 227, this commented self.status = stored_text assignment needs clarification about its intended state.

Is this status assignment intentionally disabled, or should it be restored?


200-226: I’ll pull the actual code around those lines to confirm both branches use the same slicing logic and produce consistent ordering.

#!/bin/bash
sed -n '180,260p' src/backend/base/langflow/components/helpers/memory.py
src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (1)

924-939: Validate the type returned by retrieve_messages() before passing to the agent

MemoryComponent().retrieve_messages() returns list[Message], while the upstream ToolCallingAgentComponent.set(..., chat_history=...) historically expects list[str] (see its signature).
If the base class hasn’t been updated accordingly this will raise at runtime.

Please confirm and, if needed, convert to text:

history = await MemoryComponent(...).set(session_id=self.graph.session_id).retrieve_messages()
if history and isinstance(history[0], Message):
    history = [msg.text for msg in history]
self.chat_history = history
src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (1)

1777-1810: retrieve_messages() may return raw Message objects – double-check downstream expectations

Previously the call was retrieve_messages_as_text(), feeding a list of strings to the agent.
Now:

return await MemoryComponent(...).set(session_id=...).retrieve_messages()

returns list[Message] (or similar). Ensure self.set(..., chat_history=self.chat_history, ...) and the LLM/agent chain can consume this type without additional conversion; otherwise transform to the expected schema before passing.

src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (2)

520-550: Session-ID debug print leaks implementation details

print(f"Session ID: {self.graph.session_id}") is another stray stdout call.
Either log with the existing logger at debug level or remove.

-        print(f"Session ID: {self.graph.session_id}")
+        logger.debug("Using session_id=%s", self.graph.session_id)



540-560: n_messages, order, … are ignored when retrieving memory

get_memory_data() forwards only session_id:

MemoryComponent(...).set(session_id=...).retrieve_messages()

Inputs n_messages, order, sender, etc. remain unused, making the new UI controls ineffective.

-return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+return await (
+    MemoryComponent(**self.get_base_args())
+        .set(
+            session_id=self.graph.session_id,
+            n_messages=self.n_messages,
+            order=self.order,
+            sender=self.sender if hasattr(self, "sender") else None,
+        )
+        .retrieve_messages()
+)

Please ensure the MemoryComponent signature supports these args.

src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (4)

1298-1308: memory_inputs is now dead code – drop it

memory_inputs is still defined but no longer used after commenting out *memory_inputs in inputs.
Keeping unused variables hurts readability and will trigger linters.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]



1340-1356: get_memory_data can raise AttributeError when graph or session_id is missing

self.graph.session_id is accessed directly. When the component is executed outside of a flow context (tests, CLI, etc.) graph might be absent or session_id could be None, leading to a hard crash.

-        print(f"Session ID: {self.graph.session_id}")
-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        session_id = getattr(getattr(self, "graph", None), "session_id", None)
+        if not session_id:
+            logger.warning("Session ID not found – falling back to default memory scope")
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(session_id=session_id)
+            .retrieve_messages()
+        )



1336-1343: Replace print calls with logger for consistency

Raw print statements slip through structured logging and pollute stdout in production.

-            print(self.chat_history)
+            logger.debug(self.chat_history)

and

-        print(f"Session ID: {self.graph.session_id}")
+        logger.debug(f"Session ID: {self.graph.session_id}")



1315-1321: Clean up large commented-out block

The legacy dynamic–memory logic is commented but left in the codebase. Keeping multi-line commented code makes maintenance harder; prefer removing it or moving to VCS history.


src/backend/base/langflow/components/agents/agent.py (6)

19-19: Import addition looks good.

The addition of IntInput to support the new n_messages field is appropriate and necessary.


45-45: Usage of MODEL_PROVIDERS_LIST looks correct.

The replacement of the previous sorted set approach with the new constant is implemented correctly in both the options and options_metadata.

Also applies to: 49-49


59-66: Well-designed input field for message count control.

The new n_messages IntInput provides explicit control over chat history retrieval with sensible defaults:

  • Default of 100 messages should handle most use cases
  • Properly marked as advanced but visible
  • Clear, descriptive info text

This replaces the previous dynamic memory input approach effectively.
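A hedged reconstruction of that field from the bullets above (exact kwargs are assumptions; IntInput comes from langflow.io):

IntInput(
    name="n_messages",
    display_name="Number of Messages",
    info="Number of chat history messages to retrieve.",
    value=100,       # assumed default, per the review notes
    advanced=True,   # advanced but visible, per the review notes
    show=True,
),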


68-69: Appropriate removal of dynamic memory inputs.

The commented-out memory inputs align with the refactor objectives to standardize memory handling using session IDs instead of dynamic configuration.


91-92: Good defensive programming for chat history validation.

The validation ensures that chat_history is always a list, handling cases where get_memory_data() might return a single Message instance. This prevents potential type errors downstream.
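That validation amounts to a one-line normalization, sketched here (form assumed):

if isinstance(self.chat_history, Message):
    self.chat_history = [self.chat_history]  # always hand the agent a list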


127-131: Simplified memory retrieval looks correct.

The refactored get_memory_data() method is much cleaner and aligns with the standardization objectives:

  • Uses self.graph.session_id for session-based retrieval
  • Leverages the new n_messages parameter for explicit control
  • Sets consistent "Ascending" order
  • Removes complex dynamic memory input filtering

This should be more reliable and maintainable than the previous approach.
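Taken together, the described method reduces to roughly the following sketch (names from the PR; illustrative rather than the exact diff):

async def get_memory_data(self):
    # Session-scoped retrieval with an explicit cap and stable ordering
    return await (
        MemoryComponent(**self.get_base_args())
        .set(
            session_id=self.graph.session_id,
            n_messages=self.n_messages,
            order="Ascending",
        )
        .retrieve_messages()
    )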

src/backend/base/langflow/initial_setup/starter_projects/Financial Agent.json (4)

2507-2524: memory_inputs is now dead code – remove it to avoid confusion

memory_inputs is still built from MemoryComponent().inputs, yet it is never used after the recent refactor (the only reference is inside a commented-out block).
Keeping the list around inflates startup time (instantiation of a heavy component) and invites future mistakes when developers try to “revive” it.

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]



2565-2578: Printing sensitive data to stdout – replace with a logger .debug

Two print() statements were introduced:

  1. print(self.chat_history)
  2. print(f"Session ID: {self.graph.session_id}")

Plain prints
• leak PII/chat data to container logs,
• bypass the project-wide log level,
• are expensive when chat history grows.

-            print(self.chat_history)
-            ...
-        print(f"Session ID: {self.graph.session_id}")
+            logger.debug("Chat history retrieved (len=%s)", len(self.chat_history))
+            ...
+        logger.debug("Session ID resolved: %s", self.graph.session_id)



2578-2594: get_memory_data ignores user-configurable filters (n_messages, order, …)

After the refactor only session_id is forwarded to MemoryComponent.
Fields still present in the template (n_messages, order, sender, sender_name) are silently ignored, so the UI settings have no effect.

Consider forwarding the relevant arguments:

-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(
+                session_id=self.graph.session_id,
+                n_messages=self.n_messages,
+                order=self.order,
+                sender=self.sender or None,
+                sender_name=self.sender_name or None,
+            )
+            .retrieve_messages()
+        )

Failing to do so can cause very large histories to be loaded unintentionally and break paging logic.


3290-3310: Exact code duplication between the two Agent components

The full 300-line Python payload under both “Agent” nodes (Agent-J7aBZ and Agent-oCAnt) is identical.
Keeping two divergent copies will inevitably lead to one being patched while the other drifts.

Extract the class into a reusable module and reference it from the JSON, or at minimum keep only one copy and let the other node import it.


Comment on lines 219 to 221
print("len stored Before", len(stored))
stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages]
print("len stored After", len(stored))

⚠️ Potential issue

Remove debug print statements

Debug print statements should be removed before merging to production. These can clutter logs and expose internal state information.

-                print("len stored Before", len(stored))
                 stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages]
-                print("len stored After", len(stored))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
print("len stored Before", len(stored))
stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages]
print("len stored After", len(stored))
stored = stored[-n_messages:] if order == "ASC" else stored[:n_messages]

🤖 Prompt for AI Agents
In src/backend/base/langflow/components/helpers/memory.py around lines 219 to
221, remove the debug print statements that output the length of the stored list
before and after slicing. These print statements are for debugging purposes and
should be deleted to avoid cluttering production logs and exposing internal
state.
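For reference, the slicing line the suggestion keeps behaves as follows, assuming "ASC" means the stored list is in chronological (oldest-first) order:

stored = ["m1", "m2", "m3", "m4", "m5"]  # oldest to newest when order == "ASC"
n_messages = 2
print(stored[-n_messages:])  # ASC:  ['m4', 'm5'] (the two most recent)
print(stored[:n_messages])   # DESC: ['m1', 'm2'] (first two of a newest-first list)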

"title_case": false,
"type": "code",
"value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n memory_kwargs = {\n component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n }\n # filter out empty values\n memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, 
\"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: 
v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n"
"value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n # removed memory inputs from agent component\n # *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n print(self.chat_history)\n logger.info(f\"Chat history: {self.chat_history}\")\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n # memory_kwargs = {\n # component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n # }\n # # filter out empty values\n # memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n # return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages_as_text()\n print(f\"Session ID: {self.graph.session_id}\")\n return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> 
dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, \"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if 
isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n"
Contributor

⚠️ Potential issue

n_messages input never used – history size is now uncontrolled

You introduced an n_messages IntInput (line 1694) but get_memory_data() (inside the value code blob) calls

await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()

without forwarding n_messages.
Users will assume the slider limits the retrieved history, yet the component will ignore it and pull the full table (or whatever default the helper uses), hurting performance and blowing out context length with some LLMs.

-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+            .retrieve_messages()
+        )

Please propagate the parameter (and consider order, sender, etc. if you still want that flexibility).
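
If you do want to keep that flexibility, a minimal sketch of the fuller forwarding (order and sender are assumed to still be valid MemoryComponent inputs; the exact input names and defaults may differ):

async def get_memory_data(self):
    # Forward the UI-controlled limit plus optional filters to the helper.
    # `order` and `sender` are assumed input names on MemoryComponent.
    return await (
        MemoryComponent(**self.get_base_args())
        .set(
            session_id=self.graph.session_id,
            n_messages=self.n_messages,
            order="Ascending",           # assumed input: oldest-to-newest
            sender="Machine and User",   # assumed input: keep both sides
        )
        .retrieve_messages()
    )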

🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/Social Media
Agent.json at line 1457, the get_memory_data() method does not use the
n_messages input to limit the size of the retrieved chat history, causing
uncontrolled history size and potential performance issues. To fix this, modify
get_memory_data() to accept and forward the n_messages parameter (and optionally
order, sender, etc.) when calling MemoryComponent().set().retrieve_messages(),
ensuring the retrieved memory respects the user's input limit.

"title_case": false,
"type": "code",
"value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n memory_kwargs = {\n component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n }\n # filter out empty values\n memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, 
\"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: 
v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n"
"value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n # removed memory inputs from agent component\n # *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n print(self.chat_history)\n logger.info(f\"Chat history: {self.chat_history}\")\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n # memory_kwargs = {\n # component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n # }\n # # filter out empty values\n # memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n # return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages_as_text()\n print(f\"Session ID: {self.graph.session_id}\")\n return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> 
dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, \"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if 
isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n"
Contributor

🛠️ Refactor suggestion

Remove dead code, use proper logging, and honour the n_messages parameter

Inside the embedded AgentComponent:

  1. memory_inputs is still declared but no longer used after the dynamic-memory refactor.
  2. Two bare print() calls were added for debugging. These bypass the project‐wide logging facilities and will pollute stdout in production.
  3. self.n_messages was introduced in the UI schema yet never forwarded to MemoryComponent, so the new slider has no effect.

A small clean-up addresses all three:

-    memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+    # Dynamic memory inputs were removed; keep the codebase tidy.
+    # memory_inputs = [...]  (deleted – no longer required)

@@
-            self.chat_history = await self.get_memory_data()
-            print(self.chat_history)
-            logger.info(f"Chat history: {self.chat_history}")
+            self.chat_history = await self.get_memory_data()
+            logger.debug(f"Chat history: {self.chat_history}")
@@
-        print(f"Session ID: {self.graph.session_id}")
-        return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+        logger.debug(f"Session ID: {self.graph.session_id}")
+        return await (
+            MemoryComponent(**self.get_base_args())
+            .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+            .retrieve_messages()
+        )

Benefits: clearer code, consistent logging, and the UI control for message count actually works.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"value": "from langchain_core.tools import StructuredTool\n\nfrom langflow.base.agents.agent import LCToolsAgentComponent\nfrom langflow.base.agents.events import ExceptionWithMessageError\nfrom langflow.base.models.model_input_constants import (\n ALL_PROVIDER_FIELDS,\n MODEL_DYNAMIC_UPDATE_FIELDS,\n MODEL_PROVIDERS,\n MODEL_PROVIDERS_DICT,\n MODELS_METADATA,\n)\nfrom langflow.base.models.model_utils import get_model_name\nfrom langflow.components.helpers.current_date import CurrentDateComponent\nfrom langflow.components.helpers.memory import MemoryComponent\nfrom langflow.components.langchain_utilities.tool_calling import ToolCallingAgentComponent\nfrom langflow.custom.custom_component.component import _get_component_toolkit\nfrom langflow.custom.utils import update_component_build_config\nfrom langflow.field_typing import Tool\nfrom langflow.io import BoolInput, DropdownInput, MultilineInput, Output\nfrom langflow.logging import logger\nfrom langflow.schema.dotdict import dotdict\nfrom langflow.schema.message import Message\n\n\ndef set_advanced_true(component_input):\n component_input.advanced = True\n return component_input\n\n\nclass AgentComponent(ToolCallingAgentComponent):\n display_name: str = \"Agent\"\n description: str = \"Define the agent's instructions, then enter a task to complete using tools.\"\n icon = \"bot\"\n beta = False\n name = \"Agent\"\n\n memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]\n\n inputs = [\n DropdownInput(\n name=\"agent_llm\",\n display_name=\"Model Provider\",\n info=\"The provider of the language model that the agent will use to generate responses.\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"OpenAI\",\n real_time_refresh=True,\n input_types=[],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODEL_PROVIDERS)] + [{\"icon\": \"brain\"}],\n ),\n *MODEL_PROVIDERS_DICT[\"OpenAI\"][\"inputs\"],\n MultilineInput(\n name=\"system_prompt\",\n display_name=\"Agent Instructions\",\n info=\"System Prompt: Initial instructions and context provided to guide the agent's behavior.\",\n value=\"You are a helpful assistant that can use tools to answer questions and perform tasks.\",\n advanced=False,\n ),\n *LCToolsAgentComponent._base_inputs,\n # removed memory inputs from agent component\n # *memory_inputs,\n BoolInput(\n name=\"add_current_date_tool\",\n display_name=\"Current Date\",\n advanced=True,\n info=\"If true, will add a tool to the agent that returns the current date.\",\n value=True,\n ),\n ]\n outputs = [Output(name=\"response\", display_name=\"Response\", method=\"message_response\")]\n\n async def message_response(self) -> Message:\n try:\n # Get LLM model and validate\n llm_model, display_name = self.get_llm()\n if llm_model is None:\n msg = \"No language model selected. 
Please choose a model to proceed.\"\n raise ValueError(msg)\n self.model_name = get_model_name(llm_model, display_name=display_name)\n\n # Get memory data\n self.chat_history = await self.get_memory_data()\n print(self.chat_history)\n logger.info(f\"Chat history: {self.chat_history}\")\n\n # Add current date tool if enabled\n if self.add_current_date_tool:\n if not isinstance(self.tools, list): # type: ignore[has-type]\n self.tools = []\n current_date_tool = (await CurrentDateComponent(**self.get_base_args()).to_toolkit()).pop(0)\n if not isinstance(current_date_tool, StructuredTool):\n msg = \"CurrentDateComponent must be converted to a StructuredTool\"\n raise TypeError(msg)\n self.tools.append(current_date_tool)\n # note the tools are not required to run the agent, hence the validation removed.\n\n # Set up and run agent\n self.set(\n llm=llm_model,\n tools=self.tools or [],\n chat_history=self.chat_history,\n input_value=self.input_value,\n system_prompt=self.system_prompt,\n )\n agent = self.create_agent_runnable()\n return await self.run_agent(agent)\n\n except (ValueError, TypeError, KeyError) as e:\n logger.error(f\"{type(e).__name__}: {e!s}\")\n raise\n except ExceptionWithMessageError as e:\n logger.error(f\"ExceptionWithMessageError occurred: {e}\")\n raise\n except Exception as e:\n logger.error(f\"Unexpected error: {e!s}\")\n raise\n\n async def get_memory_data(self):\n # memory_kwargs = {\n # component_input.name: getattr(self, f\"{component_input.name}\") for component_input in self.memory_inputs\n # }\n # # filter out empty values\n # memory_kwargs = {k: v for k, v in memory_kwargs.items() if v is not None}\n\n # return await MemoryComponent(**self.get_base_args()).set(**memory_kwargs).retrieve_messages_as_text()\n print(f\"Session ID: {self.graph.session_id}\")\n return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()\n\n def get_llm(self):\n if not isinstance(self.agent_llm, str):\n return self.agent_llm, None\n\n try:\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if not provider_info:\n msg = f\"Invalid model provider: {self.agent_llm}\"\n raise ValueError(msg)\n\n component_class = provider_info.get(\"component_class\")\n display_name = component_class.display_name\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\", \"\")\n\n return self._build_llm_model(component_class, inputs, prefix), display_name\n\n except Exception as e:\n logger.error(f\"Error building {self.agent_llm} language model: {e!s}\")\n msg = f\"Failed to initialize language model: {e!s}\"\n raise ValueError(msg) from e\n\n def _build_llm_model(self, component, inputs, prefix=\"\"):\n model_kwargs = {}\n for input_ in inputs:\n if hasattr(self, f\"{prefix}{input_.name}\"):\n model_kwargs[input_.name] = getattr(self, f\"{prefix}{input_.name}\")\n return component.set(**model_kwargs).build_model()\n\n def set_component_params(self, component):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n inputs = provider_info.get(\"inputs\")\n prefix = provider_info.get(\"prefix\")\n model_kwargs = {input_.name: getattr(self, f\"{prefix}{input_.name}\") for input_ in inputs}\n\n return component.set(**model_kwargs)\n return component\n\n def delete_fields(self, build_config: dotdict, fields: dict | list[str]) -> None:\n \"\"\"Delete specified fields from build_config.\"\"\"\n for field in fields:\n build_config.pop(field, None)\n\n def update_input_types(self, build_config: dotdict) -> 
dotdict:\n \"\"\"Update input types for all fields in build_config.\"\"\"\n for key, value in build_config.items():\n if isinstance(value, dict):\n if value.get(\"input_types\") is None:\n build_config[key][\"input_types\"] = []\n elif hasattr(value, \"input_types\") and value.input_types is None:\n value.input_types = []\n return build_config\n\n async def update_build_config(\n self, build_config: dotdict, field_value: str, field_name: str | None = None\n ) -> dotdict:\n # Iterate over all providers in the MODEL_PROVIDERS_DICT\n # Existing logic for updating build_config\n if field_name in (\"agent_llm\",):\n build_config[\"agent_llm\"][\"value\"] = field_value\n provider_info = MODEL_PROVIDERS_DICT.get(field_value)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call the component class's update_build_config method\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n\n provider_configs: dict[str, tuple[dict, list[dict]]] = {\n provider: (\n MODEL_PROVIDERS_DICT[provider][\"fields\"],\n [\n MODEL_PROVIDERS_DICT[other_provider][\"fields\"]\n for other_provider in MODEL_PROVIDERS_DICT\n if other_provider != provider\n ],\n )\n for provider in MODEL_PROVIDERS_DICT\n }\n if field_value in provider_configs:\n fields_to_add, fields_to_delete = provider_configs[field_value]\n\n # Delete fields from other providers\n for fields in fields_to_delete:\n self.delete_fields(build_config, fields)\n\n # Add provider-specific fields\n if field_value == \"OpenAI\" and not any(field in build_config for field in fields_to_add):\n build_config.update(fields_to_add)\n else:\n build_config.update(fields_to_add)\n # Reset input types for agent_llm\n build_config[\"agent_llm\"][\"input_types\"] = []\n elif field_value == \"Custom\":\n # Delete all provider fields\n self.delete_fields(build_config, ALL_PROVIDER_FIELDS)\n # Update with custom component\n custom_component = DropdownInput(\n name=\"agent_llm\",\n display_name=\"Language Model\",\n options=[*sorted(MODEL_PROVIDERS), \"Custom\"],\n value=\"Custom\",\n real_time_refresh=True,\n input_types=[\"LanguageModel\"],\n options_metadata=[MODELS_METADATA[key] for key in sorted(MODELS_METADATA.keys())]\n + [{\"icon\": \"brain\"}],\n )\n build_config.update({\"agent_llm\": custom_component.to_dict()})\n # Update input types for all fields\n build_config = self.update_input_types(build_config)\n\n # Validate required keys\n default_keys = [\n \"code\",\n \"_type\",\n \"agent_llm\",\n \"tools\",\n \"input_value\",\n \"add_current_date_tool\",\n \"system_prompt\",\n \"agent_description\",\n \"max_iterations\",\n \"handle_parsing_errors\",\n \"verbose\",\n ]\n missing_keys = [key for key in default_keys if key not in build_config]\n if missing_keys:\n msg = f\"Missing required keys in build_config: {missing_keys}\"\n raise ValueError(msg)\n if (\n isinstance(self.agent_llm, str)\n and self.agent_llm in MODEL_PROVIDERS_DICT\n and field_name in MODEL_DYNAMIC_UPDATE_FIELDS\n ):\n provider_info = MODEL_PROVIDERS_DICT.get(self.agent_llm)\n if provider_info:\n component_class = provider_info.get(\"component_class\")\n component_class = self.set_component_params(component_class)\n prefix = provider_info.get(\"prefix\")\n if component_class and hasattr(component_class, \"update_build_config\"):\n # Call each component class's update_build_config method\n # remove the prefix from the field_name\n if 
isinstance(field_name, str) and isinstance(prefix, str):\n field_name = field_name.replace(prefix, \"\")\n build_config = await update_component_build_config(\n component_class, build_config, field_value, \"model_name\"\n )\n return dotdict({k: v.to_dict() if hasattr(v, \"to_dict\") else v for k, v in build_config.items()})\n\n async def _get_tools(self) -> list[Tool]:\n component_toolkit = _get_component_toolkit()\n tools_names = self._build_tools_names()\n agent_description = self.get_tool_description()\n # TODO: Agent Description Depreciated Feature to be removed\n description = f\"{agent_description}{tools_names}\"\n tools = component_toolkit(component=self).get_tools(\n tool_name=\"Call_Agent\", tool_description=description, callbacks=self.get_langchain_callbacks()\n )\n if hasattr(self, \"tools_metadata\"):\n tools = component_toolkit(component=self, metadata=self.tools_metadata).update_tools_metadata(tools=tools)\n return tools\n"
--- a/src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json
+++ b/src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json
@@ class AgentComponent(ToolCallingAgentComponent):
- memory_inputs = [set_advanced_true(component_input) for component_input in MemoryComponent().inputs]
+ # Dynamic memory inputs were removed; keep the codebase tidy.
+ # memory_inputs = [...] (deleted – no longer required)
@@ async def message_response(self) -> Message:
- self.chat_history = await self.get_memory_data()
- print(self.chat_history)
- logger.info(f"Chat history: {self.chat_history}")
+ self.chat_history = await self.get_memory_data()
+ logger.debug(f"Chat history: {self.chat_history}")
@@ async def get_memory_data(self):
- print(f"Session ID: {self.graph.session_id}")
- return await MemoryComponent(**self.get_base_args()).set(session_id=self.graph.session_id).retrieve_messages()
+ logger.debug(f"Session ID: {self.graph.session_id}")
+ return await (
+ MemoryComponent(**self.get_base_args())
+ .set(session_id=self.graph.session_id, n_messages=self.n_messages)
+ .retrieve_messages()
+ )
🤖 Prompt for AI Agents
In src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json
at line 1042 within the AgentComponent class, remove the unused memory_inputs
declaration to eliminate dead code, replace all print() statements with
appropriate logger calls to maintain consistent logging practices, and modify
the call to MemoryComponent in get_memory_data to pass self.n_messages so that
the UI slider controlling message count is respected and functional.

@dosubot dosubot bot added size:XXL This PR changes 1000+ lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Jun 24, 2025
@github-actions github-actions bot added refactor Maintenance tasks and housekeeping and removed refactor Maintenance tasks and housekeeping labels Jun 24, 2025
@github-actions github-actions bot added refactor Maintenance tasks and housekeeping and removed refactor Maintenance tasks and housekeeping labels Jun 24, 2025
@edwinjosechittilappilly
Collaborator Author

@coderabbitai Please review the PR. Do not review the JSONs.

@github-actions github-actions bot added refactor Maintenance tasks and housekeeping and removed refactor Maintenance tasks and housekeeping labels Jun 24, 2025
@coderabbitai
Contributor

coderabbitai bot commented Jun 24, 2025

Tip

For best results, initiate chat on the files or code changes.

@edwinjosechittilappilly, I'll review the Python files in this PR while skipping the JSON files as requested. This aligns with the preference that JSON files are autogenerated and don't require review.

Code Review

🔍 Overall Assessment

This PR implements a significant refactoring of agent and memory handling logic. The changes standardize memory retrieval using session IDs and improve chat history processing. The refactoring appears well-coordinated across multiple components.

📝 Key Changes Reviewed

1. Agent Base Class (src/backend/base/langflow/base/agents/agent.py)

Good: Enhanced chat_history handling with type-specific conversion logic

  • Differentiates between Data instances and Message lists
  • Proper conversion flow with to_data() method
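
For illustration, a minimal sketch of that dispatch (assuming langflow's Data and Message schema types and the data_to_messages helper; the real run_agent branching may differ):

from langflow.helpers.data import data_to_messages  # assumed import path
from langflow.schema.data import Data
from langflow.schema.message import Message

def normalize_chat_history(chat_history):
    # Wrap a single Data record in a list before conversion.
    if isinstance(chat_history, Data):
        chat_history = [chat_history]
    # Convert a list of Message objects to Data via to_data() first.
    elif isinstance(chat_history, list) and all(isinstance(m, Message) for m in chat_history):
        chat_history = [m.to_data() for m in chat_history]
    # data_to_messages turns Data records into LangChain chat messages.
    return data_to_messages(chat_history)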

2. Agent Component (src/backend/base/langflow/components/agents/agent.py)

Good: Major improvements to memory handling

  • Explicit MODEL_PROVIDERS_LIST replaces dynamic sorting
  • New n_messages input for message count control
  • Simplified memory retrieval using session ID

⚠️ Consideration: The removal of dynamic memory inputs is a breaking change for existing flows that depend on configurable memory parameters.

3. Memory Component (src/backend/base/langflow/components/helpers/memory.py)

Good: Enhanced message retrieval logic

  • Made "Number of Messages" input visible (show=True)
  • Improved slicing logic for ascending/descending order
  • Increased internal memory fetch limit to 10,000
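
A minimal sketch of that slicing rule (the helper's actual implementation differs in details):

def limit_messages(messages: list, n_messages: int, order: str = "Ascending") -> list:
    # Oldest-first ("Ascending") keeps the newest tail of the history;
    # newest-first ("Descending") keeps the head. n_messages <= 0 keeps all.
    if n_messages <= 0:
        return messages
    return messages[-n_messages:] if order == "Ascending" else messages[:n_messages]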

🔍 Question: Debug print statements are still present - are these intentional for production?

4. Custom Component (src/backend/base/langflow/custom/custom_component/component.py)

Good: Session ID source standardization

  • Changed from self.session_id to self.graph.session_id
  • Ensures consistent session ID retrieval across components
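
A hedged sketch of that change inside get_base_args (other shared kwargs elided):

def get_base_args(self):
    # The graph-level session id keeps helper components (Memory,
    # CurrentDate, ...) on the same conversation thread as the agent.
    return {
        "session_id": self.graph.session_id,  # previously: self.session_id
    }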

🎯 Architecture Benefits

  1. Consistency: Standardized memory retrieval across all components
  2. Simplification: Removed complex dynamic memory input handling
  3. Control: Explicit message count control via n_messages
  4. Debugging: Added logging for chat history (helpful for troubleshooting)

🚨 Potential Concerns

  1. Breaking Changes: Existing flows with custom memory inputs may need updates
  2. Debug Code: Print statements in production code should be reviewed
  3. Performance: the internal 10,000-message fetch limit could be costly with large chat histories

📊 Summary

This is a well-structured refactoring that improves the memory and chat history handling system. The changes are consistent and appear to solve real architectural issues with the previous dynamic input approach.

Recommendation: ✅ Approve - This refactoring improves the codebase architecture and standardizes memory handling patterns.


🧠 Learnings used
Learnt from: edwinjosechittilappilly
PR: langflow-ai/langflow#8504
File: src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json:391-393
Timestamp: 2025-06-12T15:25:01.072Z
Learning: The repository owner prefers CodeRabbit not to review or comment on JSON files because they are autogenerated.

@mfortman11 mfortman11 self-requested a review June 24, 2025 21:41
Contributor

@ogabrielluiz ogabrielluiz left a comment

LGTM.

I'm not sure about the logger.info call, though. If an agent is being called in a production environment, we will see way too many of those logs.

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jun 24, 2025
@edwinjosechittilappilly
Collaborator Author

LGTM.

I'm not sure about the logger.info call, though. If an agent is being called in a production environment, we will see way too many of those logs.

Good catch! I added those for debugging! Let me change it to logger.debug!

Collaborator

@mfortman11 mfortman11 left a comment

LGTM just a couple nits

input_dict["system_prompt"] = self.system_prompt
if hasattr(self, "chat_history") and self.chat_history:
input_dict["chat_history"] = data_to_messages(self.chat_history)
logger.info(f"Chat history: {self.chat_history}")
Collaborator

cleanup logs
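
A sketch of the requested cleanup, demoting the log to debug level as agreed above (same lines, otherwise unchanged):

input_dict["system_prompt"] = self.system_prompt
if hasattr(self, "chat_history") and self.chat_history:
    input_dict["chat_history"] = data_to_messages(self.chat_history)
logger.debug(f"Chat history: {self.chat_history}")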

Comment on lines +70 to +71
# removed memory inputs from agent component
# *memory_inputs,
Collaborator

are these needed?

Collaborator Author

Just to keep track since it is a major change. We can remove the comments if required.

@edwinjosechittilappilly edwinjosechittilappilly added lgtm This PR has been approved by a maintainer and removed lgtm This PR has been approved by a maintainer labels Jun 25, 2025
@github-actions github-actions bot added refactor Maintenance tasks and housekeeping and removed refactor Maintenance tasks and housekeeping labels Jun 25, 2025
@github-actions github-actions bot added refactor Maintenance tasks and housekeeping and removed refactor Maintenance tasks and housekeeping labels Jun 25, 2025
@github-actions github-actions bot added refactor Maintenance tasks and housekeeping and removed refactor Maintenance tasks and housekeeping labels Jun 25, 2025
@edwinjosechittilappilly edwinjosechittilappilly added this pull request to the merge queue Jun 25, 2025
Merged via the queue into main with commit 38d5885 Jun 25, 2025
66 checks passed
@edwinjosechittilappilly edwinjosechittilappilly deleted the fix-chat-history branch June 25, 2025 16:57
Yukiyukiyeah pushed a commit that referenced this pull request Jun 25, 2025
refactor(agent): standardize memory handling and update chat history logic (#8715)

* update chat history

* update to agents

* Update Simple Agent.json

* update to templates

* ruff errors

* Update agent.py

* Update test_agent_component.py

* [autofix.ci] apply automated fixes

* update templates

* test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>
2getsandesh pushed a commit to 2getsandesh/langflow-IBM that referenced this pull request Jun 26, 2025
refactor(agent): standardize memory handling and update chat history logic (langflow-ai#8715)

* update chat history

* update to agents

* Update Simple Agent.json

* update to templates

* ruff errors

* Update agent.py

* Update test_agent_component.py

* [autofix.ci] apply automated fixes

* update templates

* test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>
github-merge-queue bot pushed a commit that referenced this pull request Jun 27, 2025
…ity attribute (#8667)

* Update styleUtils.ts

* update to prompt component

* update to template

* update to mcp component

* update to smart function

* [autofix.ci] apply automated fixes

* update to templates

* fix sidebar

* change name

* update import

* update import

* update import

* [autofix.ci] apply automated fixes

* fix import

* fix ollama

* fix ruff

* refactor(agent): standardize memory handling and update chat history logic (#8715)

* update chat history

* update to agents

* Update Simple Agent.json

* update to templates

* ruff errors

* Update agent.py

* Update test_agent_component.py

* [autofix.ci] apply automated fixes

* update templates

* test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Mike Fortman <[email protected]>

* fix prompt change

* feat(message): support sequencing of multiple streamable models (#8434)

* feat: update OpenAI model parameters handling for reasoning models

* feat: extend input_value type in LCModelComponent to support AsyncIterator and Iterator

* refactor: remove assert_streaming_sequence method and related checks from Graph class

* feat: add consume_iterator method to Message class for handling iterators

* test: add unit tests for OpenAIModelComponent functionality and integration

* feat: update OpenAIModelComponent to include temperature and seed parameters in build_model method

* feat: rename consume_iterator method to consume_iterator_in_text and update its implementation for handling text

* feat: add is_connected_to_chat_output method to Component class for improved message handling

* feat: refactor LCModelComponent methods to support asynchronous message handling and improve chat output integration

* refactor: remove consume_iterator_in_text method from Message class and clean up LCModelComponent input handling

* fix: update import paths for input components in multiple starter project JSON files

* fix: enhance error message formatting in ErrorMessage class to handle additional exception attributes

* refactor: remove validate_stream calls from generate_flow_events and Graph class to streamline flow processing

* fix: handle asyncio.CancelledError in aadd_messagetables to ensure proper session rollback and retry logic

* refactor: streamline message handling in LCModelComponent by replacing async invocation with synchronous calls and updating message text handling

* refactor: enhance message handling in LCModelComponent by introducing lf_message for improved return value management and updating properties for consistency

* feat: add _build_source method to Component class for enhanced source handling and flexibility in source object management

* feat: enhance LCModelComponent by adding _handle_stream method for improved streaming response handling and refactoring chat output integration

* feat: update MemoryComponent to enhance message retrieval and storage functionality, including new sender type handling and output options for text and dataframe formats

* test: refactor LanguageModelComponent tests to use ComponentTestBaseWithoutClient and add tests for Google model creation and error handling

* test: add fixtures for API keys and implement live API tests for OpenAI, Anthropic, and Google models

* fix: reorder JSON properties for consistency in starter projects

* Updated JSON files for various starter projects to ensure consistent ordering of properties, specifically moving "type" to follow "selected_output" for better readability and maintainability.
* Affected files: Basic Prompt Chaining.json, Blog Writer.json, Financial Report Parser.json, Hybrid Search RAG.json, SEO Keyword Generator.json.

* refactor: simplify input_value type in LCModelComponent

* Updated the input_value parameter in LCModelComponent to remove AsyncIterator and Iterator types, streamlining the input options to only str and Message for improved clarity and maintainability.
* This change enhances the documentation and understanding of the expected input types for the component.

* fix: clarify comment for handling source in Component class

* refactor: remove unnecessary mocking in OpenAI model integration tests

* auto update

* update

* [autofix.ci] apply automated fixes

* fix openai import

* revert template changes

* test fixes

* update templates

* [autofix.ci] apply automated fixes

* fix tests

* fix order

* fix prompts import

* fix frontend tests

* fix frontend

* [autofix.ci] apply automated fixes

* add charmander

* [autofix.ci] apply automated fixes

* fix prompt frontend

* fix frontend

* test fix

* [autofix.ci] apply automated fixes

* change pokedex

* remove pokedex extra

* update template

* name fix

* update template

* mcp test fix

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: cristhianzl <[email protected]>
Co-authored-by: Yuqi Tang <[email protected]>
Co-authored-by: Mike Fortman <[email protected]>
Co-authored-by: Gabriel Luiz Freitas Almeida <[email protected]>
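
One commit above calls out handling asyncio.CancelledError in aadd_messagetables with session rollback and retry. A minimal sketch of that pattern, assuming a SQLAlchemy-style async session (the session methods, helper name, and retry budget here are illustrative assumptions, not Langflow's actual code):

```python
import asyncio

MAX_RETRIES = 3  # assumed retry budget, not from the source


async def aadd_messagetables_with_retry(session, messagetables):
    """Hypothetical sketch: persist rows, rolling back and retrying on failure."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            session.add_all(messagetables)  # stage the rows on the session
            await session.commit()          # flush them to the database
            return messagetables
        except asyncio.CancelledError:
            # Cancellation arrived mid-commit: undo partial work, then
            # re-raise so the cancellation still propagates to the caller.
            await session.rollback()
            raise
        except Exception:
            # Treat anything else as transient: roll back and retry
            # until the budget is exhausted.
            await session.rollback()
            if attempt == MAX_RETRIES:
                raise
            await asyncio.sleep(0.1 * attempt)  # simple linear backoff
```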
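Another commit adds a _handle_stream method to LCModelComponent. In spirit, such a helper drains an async stream of chunks, forwards each token to the chat output as it arrives, and returns the accumulated text; a rough sketch under those assumptions (the chunk shape and the on_token callback are invented for illustration):

```python
from typing import AsyncIterator, Callable, Optional


async def handle_stream(
    stream: AsyncIterator,
    on_token: Optional[Callable[[str], None]] = None,
) -> str:
    """Hypothetical sketch of a streaming handler, not Langflow's real method."""
    parts: list[str] = []
    async for chunk in stream:
        # LangChain-style chunks usually carry text in .content; fall back
        # to str() for anything else. This shape is an assumption.
        token = getattr(chunk, "content", str(chunk))
        parts.append(token)
        if on_token is not None:
            on_token(token)  # push the token downstream as it arrives
    return "".join(parts)    # full response once the stream is exhausted
```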
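The MemoryComponent commit mentions sender-type handling and text/dataframe output options. A loose illustration of that retrieval shape (the message fields, function signature, and defaults are assumptions):

```python
import pandas as pd


def retrieve_messages(
    messages: list[dict],
    sender: str | None = None,
    n_messages: int = 100,
) -> tuple[str, pd.DataFrame]:
    """Hypothetical sketch: filter by sender, keep the most recent n messages,
    and expose the result both as joined text and as a DataFrame."""
    if sender is not None:
        messages = [m for m in messages if m.get("sender") == sender]
    recent = messages[-n_messages:]  # most recent n, oldest first
    as_text = "\n".join(f"{m['sender']}: {m['text']}" for m in recent)
    return as_text, pd.DataFrame(recent)
```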
lucaseduoli pushed a commit that referenced this pull request Jul 1, 2025
…ity attribute (#8667)

(commit message identical to the merge-queue entry above)

Khurdhula-Harshavardhan pushed a commit to JigsawStack/langflow that referenced this pull request Jul 1, 2025
…logic (langflow-ai#8715)

(commit message identical to the #8715 history above)

Khurdhula-Harshavardhan pushed a commit to JigsawStack/langflow that referenced this pull request Jul 1, 2025
…ity attribute (langflow-ai#8667)

(commit message identical to the merge-queue entry above)

dev-thiago-oliver pushed a commit to vvidai/langflow that referenced this pull request Jul 5, 2025
…logic (langflow-ai#8715)

(commit message identical to the #8715 history above)

dev-thiago-oliver pushed a commit to vvidai/langflow that referenced this pull request Jul 5, 2025
…ity attribute (langflow-ai#8667)

(commit message identical to the merge-queue entry above)

coderabbitai bot mentioned this pull request Dec 1, 2025

Labels

lgtm - This PR has been approved by a maintainer
refactor - Maintenance tasks and housekeeping
size:XXL - This PR changes 1000+ lines, ignoring generated files.
