
Conversation

Contributor

@kiran-kate kiran-kate commented Oct 10, 2025

This PR adds an agent that uses pre-tool and post-tool processing components from the Agent Lifecycle Toolkit (ALTK).

  • The UI widget of this new agent (called ALTKAgent) exposes options that let the user turn the special processing on or off; more advanced parameters can be set as well.
  • During execution, the agent calls ALTK components to enhance the tool-calling capabilities of LLMs by performing special checks and processing around tool calls.

The components currently enabled are:

  • Processing of tool outputs when they are large JSON objects (longer than "Response Processing Size Threshold" characters). LLMs must otherwise digest these large outputs to extract useful information from them. This ALTK component instead prompts the LLM to generate Python code that parses the JSON, then runs that code to extract the relevant information from the tool response (a minimal sketch of this gating logic follows). More information about this component can be found here.
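
The gist of that gating can be sketched in a few lines of plain Python. This is illustrative only: the function and parameter names below are made up for the example, and the hand-off callable stands in for the ALTK code-generation component that the actual implementation wires up (see the review excerpts further down).

import json
from typing import Any, Callable

def post_process_tool_output(
    tool_response: str,
    threshold: int,
    generate_extraction: Callable[[Any], str],
) -> str:
    """Pass small or non-JSON tool outputs through; route large JSON to code generation."""
    if len(tool_response) <= threshold:
        return tool_response  # below the size threshold: leave the output untouched
    try:
        payload = json.loads(tool_response)
    except (json.JSONDecodeError, TypeError):
        return tool_response  # not JSON: leave the output untouched
    # generate_extraction stands in for the ALTK step that prompts the LLM to
    # write Python parsing code and runs it over the JSON payload.
    return generate_extraction(payload)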

Summary by CodeRabbit

  • New Features
    • Added an ALTK-based agent component with configurable prompts, history, formatting, and output schema.
    • Introduced post-tool processing that interprets large JSON tool outputs to refine responses.
    • Added support for OpenAI and Anthropic providers, image content in inputs/history, session-aware messaging, and callback-driven tool reflection.
  • Tests
    • New unit tests validating configuration and end-to-end execution across multiple models/providers.
  • Chores
    • Added agent lifecycle toolkit dependency.

Contributor

coderabbitai bot commented Oct 10, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds a new ALTK-based agent component with post-tool processing, wires it into the agents package exports, introduces corresponding unit tests covering OpenAI and Anthropic models, and adds agent-lifecycle-toolkit as a dependency.

Changes

Cohort / File(s) Summary of edits
Dependency update
pyproject.toml
Added agent-lifecycle-toolkit to dependencies.
Agents package exports
src/lfx/src/lfx/components/agents/__init__.py
Added lazy import mapping and public export for ALTKAgentComponent.
New ALTK agent implementation
src/lfx/src/lfx/components/agents/altk_agent.py
Introduced set_advanced_true, PostToolCallbackHandler, and ALTKAgentComponent with configurable inputs, post-tool JSON handling, and agent orchestration.
Unit tests for ALTK agent
src/backend/tests/unit/components/agents/test_altk_agent.py
Added tests validating config building and execution across OpenAI/Anthropic models, including tool use via Calculator and error aggregation.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant ALTK as ALTKAgentComponent
  participant Agent as AgentExecutor/Runnable
  participant Tool as Tool
  participant CB as PostToolCallbackHandler
  participant LLM as OpenAI/Anthropic Client
  participant CodeGen as CodeGenerationComponent

  User->>ALTK: Provide inputs (prompt, model, options, history)
  ALTK->>Agent: Build and run agent
  Agent->>Tool: Invoke tool with args
  Tool-->>Agent: Tool output
  Agent->>CB: on_tool_end(tool_output)
  CB->>CB: Normalize output / try JSON decode
  alt Large JSON output
    CB->>LLM: Select client (OpenAI/Anthropic)
    CB->>CodeGen: Instantiate with LLM
    CodeGen->>LLM: Generate from JSON context
    LLM-->>CodeGen: Generated result
    CodeGen-->>CB: Result text
    CB-->>Agent: Post-processed text
  else Small or non-JSON
    CB-->>Agent: Original tool output
  end
  Agent-->>ALTK: Final agent response
  ALTK-->>User: Message result
  note over CB,LLM: Colored: New/changed post-tool processing path

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested labels

enhancement

Suggested reviewers

  • edwinjosechittilappilly
  • ogabrielluiz
  • jordanrfrazier

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 3 inconclusive)
  • Docstring Coverage (⚠️ Warning)
    Explanation: Docstring coverage is 5.88%, which is insufficient. The required threshold is 80.00%.
    Resolution: You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Test Coverage For New Implementations (❓ Inconclusive)
    Explanation: I inspected the PR's changes and the repository for corresponding tests. A new backend unit test module exists at src/backend/tests/unit/components/agents/test_altk_agent.py, following the naming convention, and it exercises the new ALTKAgentComponent: it validates config/build_config behavior for OpenAI/Anthropic and runs the agent end-to-end with a Calculator tool across multiple model families, asserting outputs; these are substantive tests rather than placeholders. There are no frontend changes requiring *.test.ts, and this PR is a new feature rather than a bug fix, so regression tests are not applicable. While unit coverage is present and some client-based runs approximate integration behavior, there is no explicit integration test covering the critical ALTK post-tool large-JSON code-generation path; thus, integration coverage for the new security-sensitive flow appears to be missing. Overall, unit tests exist and are meaningful, but integration tests for the new code-generation processing would strengthen coverage.
    Resolution: Add an integration-style test that exercises the large-JSON response path invoking CodeGenerationComponent (ideally with the sandbox enabled and the component mocked or contained), asserting that the processed output is applied or at least emitted as expected; alternatively, a focused unit test can mock CodeGenerationComponent and PostToolCallbackHandler to validate JSON parsing, size-threshold gating, and client selection. Keep the existing unit tests, add slow markers where noted, and ensure the new tests run under CI with appropriate API keys or mocks.
  • Test Quality And Coverage (❓ Inconclusive)
    Explanation: Based on the provided summaries and a review of the test intentions, the new tests exercise basic configuration of the ALTKAgentComponent and validate simple tool execution across OpenAI and Anthropic models using async pytest patterns. However, they largely assert the presence of "4" in responses, which is a smoke-level check and does not validate the core ALTK feature introduced (post-tool processing of large JSON with code generation and sandboxing), nor error paths or fallback behavior. While pytest is used correctly for backend async tests, comprehensive behavior coverage is missing, especially around the post-tool JSON threshold logic, parsing robustness, and callback integration, and there are open review comments indicating needed hardening for None handling and marking slow/flaky suites. No API endpoints are introduced here, so success/error API tests are not applicable.
    Resolution: Add focused tests that explicitly trigger the large-JSON post-tool path: simulate tool outputs just below and above the threshold, verify that the code generation component is invoked (mock it) and that sandbox flags are respected; include tests for malformed JSON and non-JSON inputs to ensure graceful fallback. Harden existing assertions to coerce response text to string and mark all-model sweeps as @pytest.mark.slow or reduce them to a curated subset. Include tests for chat_history handling branches (single Data vs. list of Message) and user_query extraction from content blocks to validate the callback wiring. (A minimal sketch of such a mock-based test appears after these pre-merge checks.) Once these are added and the noted hardening changes are applied, this check can pass.
  • Test File Naming And Structure (❓ Inconclusive)
    Explanation: I inspected the repository for test file patterns and specifically reviewed the newly added backend test file. It is correctly named src/backend/tests/unit/components/agents/test_altk_agent.py and uses pytest-style classes and async test_ functions, with appropriate pytest markers (e.g., api_key_required, no_blockbuster) and some setup via mocked LLMs; function and class names are reasonably descriptive. However, I could not verify the presence or structure of frontend Playwright tests (*.test.ts/tsx) or the placement/marking of integration tests from the provided context, and the new tests are heavily positive-path oriented, with limited explicit negative/error-condition coverage and no clear teardown beyond default pytest behavior. Given these gaps and the broader scope of the check across backend, frontend, and integration suites, there isn't enough information to conclusively assert compliance project-wide.
    Resolution: Please confirm whether frontend tests exist and follow the *.test.ts/tsx Playwright convention, and whether integration tests are placed in an appropriate directory and clearly marked; if absent, note that they are intentionally omitted. For the new backend tests, add explicit negative/error-path cases (e.g., missing/None response text, model error handling) and minimal teardown or fixtures where side effects could occur; also mark slow/long-running API tests accordingly and consider narrowing sweeping model loops to a curated subset in unit tests. Once these items are clarified or adjusted, we can re-run the check and update the status.
✅ Passed checks (3 passed)
  • Excessive Mock Usage Warning (✅ Passed): I inspected the added/modified test files, focusing on src/backend/tests/unit/components/agents/test_altk_agent.py. The tests primarily construct ALTKAgentComponent with a MockLanguageModel (a purpose-built test double in this repo) and a real Calculator tool; they do not rely on unittest.mock.patch, MagicMock, or heavy mocking of core logic. Mock usage is limited and appropriate to isolate external LLM providers, while the behavior of the agent, tools, and configuration is exercised with real components. There is therefore no evidence of excessive mock usage obscuring behavior, and integration-like tests are already present for multiple real model configurations (albeit guarded by API-key markers).
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title succinctly captures the addition of an ALTK-based agent component, accurately reflecting the primary feature introduced by the pull request without including extraneous detail. It clearly signals to reviewers that a new agent component leveraging ALTK is being added.
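
Following up on the resolutions above, a mock-based unit test for the threshold gating might look roughly like the sketch below. It is illustrative only: the patch targets, the PostToolCallbackHandler constructor arguments, and the synchronous on_tool_end call are assumptions inferred from the snippets quoted later in this thread.

import json
from unittest.mock import MagicMock, patch


def test_large_json_output_routes_through_code_generation():
    """Tool outputs above the size threshold should be handed to CodeGenerationComponent."""
    from lfx.components.agents.altk_agent import PostToolCallbackHandler

    large_output = json.dumps({"rows": list(range(1000))})  # well above a 100-character threshold
    # Constructor arguments mirror the call site quoted in this review: (user_query, agent, threshold).
    handler = PostToolCallbackHandler("what is in row 3?", MagicMock(), 100)

    with (
        patch("lfx.components.agents.altk_agent.CodeGenerationComponentConfig"),
        patch("lfx.components.agents.altk_agent.CodeGenerationComponent") as code_gen,
    ):
        code_gen.return_value.process.return_value.result = "row 3 contains 3"
        # If the handler keeps LangChain's base on_tool_end signature, pass run_id=... as well.
        processed = handler.on_tool_end(large_output)

    code_gen.assert_called_once()
    assert processed == "row 3 contains 3"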

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (2)
src/backend/tests/unit/components/agents/test_altk_agent.py (2)

55-91: Add test docstrings and strengthen assertions.

Per backend test guidelines, each test should have a docstring. The tests are fine to keep as they are, but consider adding one-line docstrings for clarity. The rest of the checks look good.

-    async def test_build_config_update(self, component_class, default_kwargs):
+    async def test_build_config_update(self, component_class, default_kwargs):
+        """Build config should populate provider and model fields for OpenAI and Anthropic."""

As per coding guidelines


154-189: Anthropic sweep: mark slow and keep robust text handling.

Add slow marker and keep the safe text extraction.

-    @pytest.mark.api_key_required
-    @pytest.mark.no_blockbuster
+    @pytest.mark.api_key_required
+    @pytest.mark.no_blockbuster
+    @pytest.mark.slow
     async def test_agent_component_with_all_anthropic_models(self):

As per coding guidelines

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f24a064 and 009ce78.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • pyproject.toml (1 hunks)
  • src/backend/tests/unit/components/agents/test_altk_agent.py (1 hunks)
  • src/lfx/src/lfx/components/agents/__init__.py (1 hunks)
  • src/lfx/src/lfx/components/agents/altk_agent.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (7)
src/backend/tests/unit/components/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

src/backend/tests/unit/components/**/*.py: Mirror the component directory structure for unit tests in src/backend/tests/unit/components/
Use ComponentTestBaseWithClient or ComponentTestBaseWithoutClient as base classes for component unit tests
Provide file_names_mapping for backward compatibility in component tests
Create comprehensive unit tests for all new components

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
{src/backend/**/*.py,tests/**/*.py,Makefile}

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
src/backend/tests/unit/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)

Test component integration within flows using create_flow, build_flow, and get_build_events utilities

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
src/backend/tests/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/testing.mdc)

src/backend/tests/**/*.py: Unit tests for backend code must be located in the 'src/backend/tests/' directory, with component tests organized by component subdirectory under 'src/backend/tests/unit/components/'.
Test files should use the same filename as the component under test, with an appropriate test prefix or suffix (e.g., 'my_component.py' → 'test_my_component.py').
Use the 'client' fixture (an async httpx.AsyncClient) for API tests in backend Python tests, as defined in 'src/backend/tests/conftest.py'.
When writing component tests, inherit from the appropriate base class in 'src/backend/tests/base.py' (ComponentTestBase, ComponentTestBaseWithClient, or ComponentTestBaseWithoutClient) and provide the required fixtures: 'component_class', 'default_kwargs', and 'file_names_mapping'.
Each test in backend Python test files should have a clear docstring explaining its purpose, and complex setups or mocks should be well-commented.
Test both sync and async code paths in backend Python tests, using '@pytest.mark.asyncio' for async tests.
Mock external dependencies appropriately in backend Python tests to isolate unit tests from external services.
Test error handling and edge cases in backend Python tests, including using 'pytest.raises' and asserting error messages.
Validate input/output behavior and test component initialization and configuration in backend Python tests.
Use the 'no_blockbuster' pytest marker to skip the blockbuster plugin in tests when necessary.
Be aware of ContextVar propagation in async tests; test both direct event loop execution and 'asyncio.to_thread' scenarios to ensure proper context isolation.
Test error handling by mocking internal functions using monkeypatch in backend Python tests.
Test resource cleanup in backend Python tests by using fixtures that ensure proper initialization and cleanup of resources.
Test timeout and performance constraints in backend Python tests using 'asyncio.wait_for' and timing assertions.
Test Langflow's Messag...

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
src/backend/**/components/**/*.py

📄 CodeRabbit inference engine (.cursor/rules/icons.mdc)

In your Python component class, set the icon attribute to a string matching the frontend icon mapping exactly (case-sensitive).

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
**/{test_*.py,*.test.ts,*.test.tsx}

📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)

**/{test_*.py,*.test.ts,*.test.tsx}: Review tests for excessive numbers of mocks that obscure the behavior under test
Warn when mocks replace real behavior/interactions that should be exercised by tests
Suggest using real objects or lightweight test doubles when mocks become excessive
Ensure mocks are reserved for external dependencies, not core domain logic
Recommend integration tests when unit tests rely heavily on mocks
Check that test files follow project naming conventions (backend: test_*.py; frontend: *.test.ts/tsx)
Verify tests actually exercise the new functionality (avoid placeholder tests)
Test files should use descriptive test names that explain the behavior under test
Organize tests logically with proper setup and teardown
Include edge cases and error conditions for comprehensive coverage
Cover both positive and negative scenarios where appropriate
Tests should cover the main functionality being implemented
Avoid smoke-only tests; assert concrete behaviors and outcomes
Follow project testing tools: pytest for backend, Playwright for frontend
For API endpoints, include tests for both success and error responses

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
**/test_*.py

📄 CodeRabbit inference engine (coderabbit-custom-pre-merge-checks-unique-id-file-non-traceable-F7F2B60C-1728-4C9A-8889-4F2235E186CA.txt)

**/test_*.py: Backend tests must use pytest structure with files named test_*.py
For async Python functions, use proper async testing patterns with pytest

Files:

  • src/backend/tests/unit/components/agents/test_altk_agent.py
🧬 Code graph analysis (3)
src/backend/tests/unit/components/agents/test_altk_agent.py (3)
src/lfx/src/lfx/components/agents/altk_agent.py (1)
  • ALTKAgentComponent (135-385)
src/backend/tests/base.py (2)
  • ComponentTestBaseWithClient (162-163)
  • ComponentTestBaseWithoutClient (166-167)
src/lfx/src/lfx/custom/custom_component/component.py (2)
  • _should_process_output (1137-1145)
  • to_frontend_node (954-1006)
src/lfx/src/lfx/components/agents/__init__.py (1)
src/lfx/src/lfx/components/agents/altk_agent.py (1)
  • ALTKAgentComponent (135-385)
src/lfx/src/lfx/components/agents/altk_agent.py (4)
src/backend/base/langflow/memory.py (2)
  • messages (310-314)
  • delete_message (224-234)
src/lfx/src/lfx/base/agents/events.py (2)
  • ExceptionWithMessageError (16-27)
  • process_agent_events (329-362)
src/lfx/src/lfx/base/agents/utils.py (1)
  • data_to_messages (43-52)
src/lfx/src/lfx/schema/data.py (1)
  • Data (26-288)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
  • GitHub Check: Lint Backend / Run Mypy (3.13)
  • GitHub Check: Lint Backend / Run Mypy (3.12)
  • GitHub Check: Lint Backend / Run Mypy (3.10)
  • GitHub Check: Lint Backend / Run Mypy (3.11)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
  • GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
  • GitHub Check: Test Starter Templates
  • GitHub Check: Update Starter Projects
🔇 Additional comments (4)
src/lfx/src/lfx/components/agents/__init__.py (1)

12-15: Wiring looks correct.

Dynamic import map and __all__ updated to include ALTKAgentComponent. This enables lazy import and proper public exposure.

src/lfx/src/lfx/components/agents/altk_agent.py (1)

146-149: Verify icon mapping string.

Frontend mapping is case‑sensitive. You set icon = "bot" but later use "Bot" in message properties. Ensure icon matches the frontend mapping exactly.

As per coding guidelines

src/backend/tests/unit/components/agents/test_altk_agent.py (1)

28-35: file_names_mapping is empty; confirm if mapping is required for backcompat.

Guidelines suggest providing file_names_mapping for backward compatibility across versions. If this component needs it, add mappings; otherwise, add a short comment explaining why it’s empty.

As per coding guidelines
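
For reference, a populated mapping might look like the sketch below; the version number and key names follow the pattern used by other component tests in this repository but are illustrative here, and an empty list may well be correct for a brand-new component.

@pytest.fixture
def file_names_mapping(self):
    """Map prior Langflow versions to this component's module/file name (illustrative values only)."""
    return [
        {"version": "1.1.0", "module": "agents", "file_name": "altk_agent"},
    ]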

pyproject.toml (1)

137-137: Pin agent-lifecycle-toolkit to a safe range and verify altk import

-    "agent-lifecycle-toolkit",
+    "agent-lifecycle-toolkit>=0.2.1.10062025,<1.0.0",

After updating, run:

python -c "import altk; print(altk)"

to confirm the package exposes a top-level altk module.

Comment on lines 120 to 132
    config = CodeGenerationComponentConfig(llm_client=llm_client_obj, use_docker_sandbox=False)

    middleware = CodeGenerationComponent(config=config)
    nl_query = self.user_query
    input_data = CodeGenerationRunInput(messages=[], nl_query=nl_query, tool_response=tool_response_json)
    output = None
    try:
        output = middleware.process(input_data, AgentPhase.RUNTIME)
    except Exception as e:  # noqa: BLE001
        logger.error(f"Exception in executing CodeGenerationComponent: {e}")
    logger.info(f"Output of CodeGenerationComponent: {output.result}")
    return output.result
return tool_response
Contributor


⚠️ Potential issue | 🔴 Critical

Critical: executing LLM‑generated code without sandboxing (RCE risk).

use_docker_sandbox=False runs arbitrary generated Python code on the host. This is a severe security risk.

Apply at least:

-                config = CodeGenerationComponentConfig(llm_client=llm_client_obj, use_docker_sandbox=False)
+                config = CodeGenerationComponentConfig(llm_client=llm_client_obj, use_docker_sandbox=True)

Additionally:

  • Run in an isolated container/user with tight AppArmor/SELinux profile.
  • Limit network/filesystem access and set CPU/memory/time quotas.
  • Log and redact inputs to avoid secrets leakage. Based on learnings
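
On the time-quota point above, one generic way to bound execution of untrusted work in Python is to run it in a separate process and terminate it on timeout. This is a sketch of the idea only, not part of ALTK; the toolkit's Docker sandbox option may already enforce such limits.

import multiprocessing as mp
from typing import Callable

def run_with_time_quota(target: Callable, args: tuple, timeout_s: float = 10.0) -> None:
    """Run target(*args) in a child process and kill it if it exceeds the time quota."""
    proc = mp.Process(target=target, args=args)
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():
        proc.terminate()  # enforce the quota; any partial results are discarded
        proc.join()
        raise TimeoutError("generated code exceeded its time quota")
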
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    config = CodeGenerationComponentConfig(llm_client=llm_client_obj, use_docker_sandbox=False)
-    middleware = CodeGenerationComponent(config=config)
-    nl_query = self.user_query
-    input_data = CodeGenerationRunInput(messages=[], nl_query=nl_query, tool_response=tool_response_json)
-    output = None
-    try:
-        output = middleware.process(input_data, AgentPhase.RUNTIME)
-    except Exception as e:  # noqa: BLE001
-        logger.error(f"Exception in executing CodeGenerationComponent: {e}")
-    logger.info(f"Output of CodeGenerationComponent: {output.result}")
-    return output.result
-return tool_response
+    config = CodeGenerationComponentConfig(llm_client=llm_client_obj, use_docker_sandbox=True)
+    middleware = CodeGenerationComponent(config=config)
+    nl_query = self.user_query
+    input_data = CodeGenerationRunInput(messages=[], nl_query=nl_query, tool_response=tool_response_json)
+    output = None
+    try:
+        output = middleware.process(input_data, AgentPhase.RUNTIME)
+    except Exception as e:  # noqa: BLE001
+        logger.error(f"Exception in executing CodeGenerationComponent: {e}")
+    logger.info(f"Output of CodeGenerationComponent: {output.result}")
+    return output.result
+return tool_response

Comment on lines 318 to 321
if isinstance(self.chat_history, Data):
    input_dict["chat_history"] = data_to_messages(self.chat_history)
if all(isinstance(m, Message) for m in self.chat_history):
    input_dict["chat_history"] = data_to_messages([m.to_data() for m in self.chat_history])
Contributor


⚠️ Potential issue | 🔴 Critical

Fix Data chat_history handling and avoid overwrite.

Passing a single Data to data_to_messages breaks (expects a list). Also the second condition should be elif.

-            if isinstance(self.chat_history, Data):
-                input_dict["chat_history"] = data_to_messages(self.chat_history)
-            if all(isinstance(m, Message) for m in self.chat_history):
+            if isinstance(self.chat_history, Data):
+                input_dict["chat_history"] = data_to_messages([self.chat_history])
+            elif all(isinstance(m, Message) for m in self.chat_history):
                 input_dict["chat_history"] = data_to_messages([m.to_data() for m in self.chat_history])
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-if isinstance(self.chat_history, Data):
-    input_dict["chat_history"] = data_to_messages(self.chat_history)
-if all(isinstance(m, Message) for m in self.chat_history):
-    input_dict["chat_history"] = data_to_messages([m.to_data() for m in self.chat_history])
+if isinstance(self.chat_history, Data):
+    input_dict["chat_history"] = data_to_messages([self.chat_history])
+elif all(isinstance(m, Message) for m in self.chat_history):
+    input_dict["chat_history"] = data_to_messages([m.to_data() for m in self.chat_history])
🤖 Prompt for AI Agents
In src/lfx/src/lfx/components/agents/altk_agent.py around lines 318 to 321, the
chat_history handling currently calls data_to_messages with a single Data (which
expects a list) and the second check can overwrite the first; update the first
branch to pass a list (e.g., data_to_messages([self.chat_history])) and change
the second if to elif so only one branch runs, ensuring chat_history is
converted correctly without being overwritten.

Comment on lines 350 to 361
callbacks_to_be_used = [AgentAsyncHandler(self.log), *self.get_langchain_callbacks()]
if self.enable_post_tool_reflection:
    if hasattr(input_dict["input"], "content"):
        callbacks_to_be_used.append(
            PostToolCallbackHandler(
                input_dict["input"].content, agent, self.response_processing_size_threshold
            )
        )
    elif isinstance(input_dict["input"], str):
        callbacks_to_be_used.append(
            PostToolCallbackHandler(input_dict["input"], agent, self.response_processing_size_threshold)
        )
Contributor


⚠️ Potential issue | 🟠 Major

user_query type mismatch when passing Message.content.

input.content is a list of content blocks; PostToolCallbackHandler expects a string. Extract the text content.

-                if hasattr(input_dict["input"], "content"):
-                    callbacks_to_be_used.append(
-                        PostToolCallbackHandler(
-                            input_dict["input"].content, agent, self.response_processing_size_threshold
-                        )
-                    )
+                if hasattr(input_dict["input"], "content"):
+                    content_list = input_dict["input"].content
+                    user_query_str = next(
+                        (item.get("text") for item in content_list if isinstance(item, dict) and item.get("type") == "text" and "text" in item),
+                        str(content_list),
+                    )
+                    callbacks_to_be_used.append(
+                        PostToolCallbackHandler(user_query_str, agent, self.response_processing_size_threshold)
+                    )

Note: Returning a value from on_tool_end won’t alter tool output. Consider wrapping tools to post‑process their outputs instead. As per coding guidelines

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-callbacks_to_be_used = [AgentAsyncHandler(self.log), *self.get_langchain_callbacks()]
-if self.enable_post_tool_reflection:
-    if hasattr(input_dict["input"], "content"):
-        callbacks_to_be_used.append(
-            PostToolCallbackHandler(
-                input_dict["input"].content, agent, self.response_processing_size_threshold
-            )
-        )
-    elif isinstance(input_dict["input"], str):
-        callbacks_to_be_used.append(
-            PostToolCallbackHandler(input_dict["input"], agent, self.response_processing_size_threshold)
-        )
+callbacks_to_be_used = [AgentAsyncHandler(self.log), *self.get_langchain_callbacks()]
+if self.enable_post_tool_reflection:
+    if hasattr(input_dict["input"], "content"):
+        content_list = input_dict["input"].content
+        user_query_str = next(
+            (
+                item.get("text")
+                for item in content_list
+                if isinstance(item, dict) and item.get("type") == "text" and "text" in item
+            ),
+            str(content_list),
+        )
+        callbacks_to_be_used.append(
+            PostToolCallbackHandler(
+                user_query_str, agent, self.response_processing_size_threshold
+            )
+        )
+    elif isinstance(input_dict["input"], str):
+        callbacks_to_be_used.append(
+            PostToolCallbackHandler(
+                input_dict["input"], agent, self.response_processing_size_threshold
+            )
+        )
🤖 Prompt for AI Agents
In src/lfx/src/lfx/components/agents/altk_agent.py around lines 350-361, the
code passes Message.content (a list of content blocks) directly to
PostToolCallbackHandler which expects a string; change the logic to extract and
join the textual parts into a single string before creating
PostToolCallbackHandler (e.g., map/filter content blocks to their text and join
with spaces or newlines), keep the existing branch for string inputs, and ensure
the argument passed to PostToolCallbackHandler is always a string; if you need
to alter tool outputs instead, follow the guideline to wrap tools for
post-processing rather than relying on on_tool_end return values.

@kiran-kate kiran-kate changed the title from "Adding an Agent component that uses components from ALTK." to "feat: Adding an Agent component that uses components from ALTK." Oct 10, 2025
@github-actions github-actions bot added the enhancement New feature or request label Oct 10, 2025
@kerinin kerinin enabled auto-merge October 28, 2025 13:59
@kerinin kerinin added this pull request to the merge queue Oct 28, 2025
Merged via the queue into langflow-ai:main with commit a83ab72 Oct 28, 2025
76 checks passed
@mendonk mendonk mentioned this pull request Nov 5, 2025
