Changes from 1 commit

Commits (57)
2c3fe7f
docs: update component links to individual pages (#10706)
mendonk Nov 25, 2025
168b84c
fix: avoid updating Message if ChatOutput is connected to ChatInput (…
keval718 Nov 25, 2025
0833dfb
Feat: Runflow optimization and improved dropdown behavior (#10720)
HzaRashid Nov 25, 2025
5040efc
fix: Add dynamic tool mode descriptions for agent integration (#10744)
Cristhianzl Nov 27, 2025
c46d31c
fix: Add profile picture management and API endpoints (#10763)
Cristhianzl Dec 1, 2025
2d735a7
deps: upgrade altk (#10804)
jordanrfrazier Dec 1, 2025
17aff85
fix: use running event loop to fix asyncio error when calling mcp too…
jordanrfrazier Dec 1, 2025
956baf6
fix: Improve file processing robustness and error feedback (#10781)
Cristhianzl Dec 1, 2025
4529e92
fix: resolve merge conflict (#10831)
keval718 Dec 1, 2025
e89bee0
fix: fixed warning on console for nested button (#10724) (#10832)
olayinkaadelakun Dec 1, 2025
f7a82f3
fix: fixed warning on console (#10745) (#10830)
olayinkaadelakun Dec 1, 2025
73120ad
fix: mask value to hide null field being returned (#10778) (#10829)
olayinkaadelakun Dec 1, 2025
e321e3e
Fix: Allow refresh list button to stay stagnant while zoom (Safari) (…
olayinkaadelakun Dec 1, 2025
423419e
feat: Add superuser support for running any user flow (#10808)
Cristhianzl Dec 1, 2025
a562670
Revert "feat: Add superuser support for running any user flow (#10808)"
Cristhianzl Dec 2, 2025
4df89f1
fix: Ollama models list in Agent component (#10814)
HimavarshaVS Dec 2, 2025
c66c679
Fix: Ensure Default Tab is Credential (#10779) (#10826)
olayinkaadelakun Dec 2, 2025
2a3d424
chore: update cuga version (#10737) (#10738)
jordanrfrazier Dec 2, 2025
731cc8b
chore: Remove DataFrameToToolsetComponent and related tests (#10845)
edwinjosechittilappilly Dec 3, 2025
3a2395b
fix: Handle GCP JSON parsing credentials (#10859)
erichare Dec 3, 2025
9be8720
fix: anthropic constants (#10862)
edwinjosechittilappilly Dec 3, 2025
5b09d60
fix: Add feature flag check to simplified_run_flow_session (#10863)
edwinjosechittilappilly Dec 3, 2025
97164d8
fix: Improve the debugging messages on startup (#10864)
erichare Dec 3, 2025
c22ebff
fix: Don't fail if doc column is missing (#10746) (#10872)
erichare Dec 3, 2025
7f5940e
add x-api-key auth option
Cristhianzl Dec 4, 2025
d04142b
fix(auth): Disallow refresh token access to API endpoints
mpawlow Dec 2, 2025
a1ce944
fix: Properly support the Batch Run component for watsonX models (#10…
erichare Dec 4, 2025
a9ef7fb
fix: Image upload for Gemini/Anthropic (#10880)
erichare Dec 4, 2025
05d5a1e
fix: Improve the default startup logging for readability (#10894)
erichare Dec 4, 2025
efcae53
Fix: lfx serve asyncio event loop error (#10888)
HzaRashid Dec 4, 2025
1174a6a
fix: Update LangflowCounts component to format star and Discord count…
viktoravelino Dec 4, 2025
99e73b6
Fix: update lfx serve tests to mock the .serve() to prevent hanging …
HzaRashid Dec 5, 2025
3beffea
Fix: lfx run agent _noopresult not iterable error (#10893)
HzaRashid Dec 5, 2025
f312f22
Fix: lfx run agent _noopresult not iterable error (#10911)
HzaRashid Dec 5, 2025
5ccd44e
fix: Add graceful subprocess cleanup during shutdown (#10906)
Cristhianzl Dec 5, 2025
a62b851
fix(workflows): include src/lfx/uv.lock in git add command to ensure …
Cristhianzl Dec 7, 2025
8245904
chore(nightly_build.yml): remove unnecessary directory change for lfx…
Cristhianzl Dec 7, 2025
44a9f70
chore(release_nightly): update build command to include --no-sources …
Cristhianzl Dec 7, 2025
5b7e332
chore(chat.py): remove unused future annotations import to clean up code
Cristhianzl Dec 7, 2025
3c1d0d2
fix(chat.py): add future annotations import for better type hinting s…
Cristhianzl Dec 7, 2025
6566275
chore: print version
Adam-Aghili Dec 8, 2025
22c9237
chore: use release_tag as version
Adam-Aghili Dec 8, 2025
5cffeae
fix: --prerelease=allow
Adam-Aghili Dec 8, 2025
6a0fd4d
fix: correctly raise file not found errors in File GET endpoints (#1…
jordanrfrazier Dec 8, 2025
2f157c9
fix: image pathing to operate with s3 storage (#10919) (#10929)
jordanrfrazier Dec 8, 2025
5550861
Feat: migrate MCP transport from SSE to streamable http (#10934)
HzaRashid Dec 8, 2025
5e54758
refactor(deps.py): reorganize imports for clarity and compliance with…
Cristhianzl Dec 8, 2025
af529b3
fix: update sidebar icon styles to maintain backward compatibility (#…
viktoravelino Dec 10, 2025
b6ed2bc
fix: Add empty input check in ALTKAgent for Anthropic (#10926)
jordanrfrazier Dec 10, 2025
f184989
fix: add condition to not make folder download fail when flow has Not…
lucaseduoli Dec 10, 2025
7b66dfb
fix: Enhance error handling for langchain-core version compatibility …
ogabrielluiz Dec 10, 2025
76af6b9
fix: Restrict message and session access to flow owners (#10973)
Cristhianzl Dec 11, 2025
c160933
Fix: lfx run with agent component throws '_NoopResult' object is not …
HzaRashid Dec 11, 2025
3200a2e
fix: Support tool mode for components that have no inputs (#10982)
erichare Dec 11, 2025
ac38023
fix: (Cherry Pick) default Ollama base url (#10981)
erichare Dec 11, 2025
7ba8c73
fix: Add authentication to various endpoints (#10977) (#10985)
erichare Dec 12, 2025
b17adfa
Fix: cuga integration (#10976)
sami-marreed Dec 12, 2025
fix: Add dynamic tool mode descriptions for agent integration (#10744)
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Cristhianzl and autofix-ci[bot] authored Nov 27, 2025
commit 5040efc9da1ece1909e28ca3d624a70af9f1113c

Large diffs are not rendered by default.

@@ -477,7 +477,7 @@
"legacy": false,
"lf_version": "1.4.2",
"metadata": {
"code_hash": "dab7dd5bd32b",
"code_hash": "cae45e2d53f6",
"dependencies": {
"dependencies": [
{
@@ -551,7 +551,7 @@
"show": true,
"title_case": false,
"type": "code",
"value": "from collections.abc import Generator\nfrom typing import Any\n\nimport orjson\nfrom fastapi.encoders import jsonable_encoder\n\nfrom lfx.base.io.chat import ChatComponent\nfrom lfx.helpers.data import safe_convert\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, HandleInput, MessageTextInput\nfrom lfx.schema.data import Data\nfrom lfx.schema.dataframe import DataFrame\nfrom lfx.schema.message import Message\nfrom lfx.schema.properties import Source\nfrom lfx.template.field.base import Output\nfrom lfx.utils.constants import (\n MESSAGE_SENDER_AI,\n MESSAGE_SENDER_NAME_AI,\n MESSAGE_SENDER_USER,\n)\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n documentation: str = \"https://docs.langflow.org/chat-input-and-output\"\n icon = \"MessagesSquare\"\n name = \"ChatOutput\"\n minimized = True\n\n inputs = [\n HandleInput(\n name=\"input_value\",\n display_name=\"Inputs\",\n info=\"Message to be passed as output.\",\n input_types=[\"Data\", \"DataFrame\", \"Message\"],\n required=True,\n ),\n BoolInput(\n name=\"should_store_message\",\n display_name=\"Store Messages\",\n info=\"Store the message in the history.\",\n value=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[MESSAGE_SENDER_AI, MESSAGE_SENDER_USER],\n value=MESSAGE_SENDER_AI,\n advanced=True,\n info=\"Type of sender.\",\n ),\n MessageTextInput(\n name=\"sender_name\",\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=MESSAGE_SENDER_NAME_AI,\n advanced=True,\n ),\n MessageTextInput(\n name=\"session_id\",\n display_name=\"Session ID\",\n info=\"The session ID of the chat. If empty, the current session ID parameter will be used.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"context_id\",\n display_name=\"Context ID\",\n info=\"The context ID of the chat. Adds an extra layer to the local memory.\",\n value=\"\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"data_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n BoolInput(\n name=\"clean_data\",\n display_name=\"Basic Clean Data\",\n value=True,\n advanced=True,\n info=\"Whether to clean data before converting to string.\",\n ),\n ]\n outputs = [\n Output(\n display_name=\"Output Message\",\n name=\"message\",\n method=\"message_response\",\n ),\n ]\n\n def _build_source(self, id_: str | None, display_name: str | None, source: str | None) -> Source:\n source_dict = {}\n if id_:\n source_dict[\"id\"] = id_\n if display_name:\n source_dict[\"display_name\"] = display_name\n if source:\n # Handle case where source is a ChatOpenAI object\n if hasattr(source, \"model_name\"):\n source_dict[\"source\"] = source.model_name\n elif hasattr(source, \"model\"):\n source_dict[\"source\"] = str(source.model)\n else:\n source_dict[\"source\"] = str(source)\n return Source(**source_dict)\n\n async def message_response(self) -> Message:\n # First convert the input to string if needed\n text = self.convert_to_string()\n\n # Get source properties\n source, _, display_name, source_id = self.get_properties_from_source_component()\n\n # Create or use existing Message object\n if isinstance(self.input_value, Message) and not self.is_connected_to_chat_input():\n message = self.input_value\n # Update message properties\n message.text = text\n else:\n message = Message(text=text)\n\n # Set message properties\n message.sender = self.sender\n message.sender_name = self.sender_name\n message.session_id = self.session_id\n message.context_id = self.context_id\n message.flow_id = self.graph.flow_id if hasattr(self, \"graph\") else None\n message.properties.source = self._build_source(source_id, display_name, source)\n\n # Store message if needed\n if self.session_id and self.should_store_message:\n stored_message = await self.send_message(message)\n self.message.value = stored_message\n message = stored_message\n\n self.status = message\n return message\n\n def _serialize_data(self, data: Data) -> str:\n \"\"\"Serialize Data object to JSON string.\"\"\"\n # Convert data.data to JSON-serializable format\n serializable_data = jsonable_encoder(data.data)\n # Serialize with orjson, enabling pretty printing with indentation\n json_bytes = orjson.dumps(serializable_data, option=orjson.OPT_INDENT_2)\n # Convert bytes to string and wrap in Markdown code blocks\n return \"```json\\n\" + json_bytes.decode(\"utf-8\") + \"\\n```\"\n\n def _validate_input(self) -> None:\n \"\"\"Validate the input data and raise ValueError if invalid.\"\"\"\n if self.input_value is None:\n msg = \"Input data cannot be None\"\n raise ValueError(msg)\n if isinstance(self.input_value, list) and not all(\n isinstance(item, Message | Data | DataFrame | str) for item in self.input_value\n ):\n invalid_types = [\n type(item).__name__\n for item in self.input_value\n if not isinstance(item, Message | Data | DataFrame | str)\n ]\n msg = f\"Expected Data or DataFrame or Message or str, got {invalid_types}\"\n raise TypeError(msg)\n if not isinstance(\n self.input_value,\n Message | Data | DataFrame | str | list | Generator | type(None),\n ):\n type_name = type(self.input_value).__name__\n msg = f\"Expected Data or DataFrame or Message or str, Generator or None, got {type_name}\"\n raise TypeError(msg)\n\n def convert_to_string(self) -> str | Generator[Any, None, None]:\n \"\"\"Convert input data to string with proper error handling.\"\"\"\n self._validate_input()\n if isinstance(self.input_value, list):\n clean_data: bool = getattr(self, 
\"clean_data\", False)\n return \"\\n\".join([safe_convert(item, clean_data=clean_data) for item in self.input_value])\n if isinstance(self.input_value, Generator):\n return self.input_value\n return safe_convert(self.input_value)\n"
"value": "from collections.abc import Generator\nfrom typing import Any\n\nimport orjson\nfrom fastapi.encoders import jsonable_encoder\n\nfrom lfx.base.io.chat import ChatComponent\nfrom lfx.helpers.data import safe_convert\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, HandleInput, MessageTextInput\nfrom lfx.schema.data import Data\nfrom lfx.schema.dataframe import DataFrame\nfrom lfx.schema.message import Message\nfrom lfx.schema.properties import Source\nfrom lfx.template.field.base import Output\nfrom lfx.utils.constants import (\n MESSAGE_SENDER_AI,\n MESSAGE_SENDER_NAME_AI,\n MESSAGE_SENDER_USER,\n)\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n documentation: str = \"https://docs.langflow.org/chat-input-and-output\"\n icon = \"MessagesSquare\"\n name = \"ChatOutput\"\n minimized = True\n\n inputs = [\n HandleInput(\n name=\"input_value\",\n display_name=\"Inputs\",\n info=\"Message to be passed as output.\",\n input_types=[\"Data\", \"DataFrame\", \"Message\"],\n required=True,\n ),\n BoolInput(\n name=\"should_store_message\",\n display_name=\"Store Messages\",\n info=\"Store the message in the history.\",\n value=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[MESSAGE_SENDER_AI, MESSAGE_SENDER_USER],\n value=MESSAGE_SENDER_AI,\n advanced=True,\n info=\"Type of sender.\",\n ),\n MessageTextInput(\n name=\"sender_name\",\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=MESSAGE_SENDER_NAME_AI,\n advanced=True,\n ),\n MessageTextInput(\n name=\"session_id\",\n display_name=\"Session ID\",\n info=\"The session ID of the chat. If empty, the current session ID parameter will be used.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"context_id\",\n display_name=\"Context ID\",\n info=\"The context ID of the chat. Adds an extra layer to the local memory.\",\n value=\"\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"data_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n BoolInput(\n name=\"clean_data\",\n display_name=\"Basic Clean Data\",\n value=True,\n advanced=True,\n info=\"Whether to clean data before converting to string.\",\n ),\n ]\n outputs = [\n Output(\n display_name=\"Output Message\",\n name=\"message\",\n method=\"message_response\",\n ),\n ]\n\n def _build_source(self, id_: str | None, display_name: str | None, source: str | None) -> Source:\n source_dict = {}\n if id_:\n source_dict[\"id\"] = id_\n if display_name:\n source_dict[\"display_name\"] = display_name\n if source:\n # Handle case where source is a ChatOpenAI object\n if hasattr(source, \"model_name\"):\n source_dict[\"source\"] = source.model_name\n elif hasattr(source, \"model\"):\n source_dict[\"source\"] = str(source.model)\n else:\n source_dict[\"source\"] = str(source)\n return Source(**source_dict)\n\n async def message_response(self) -> Message:\n # First convert the input to string if needed\n text = self.convert_to_string()\n\n # Get source properties\n source, _, display_name, source_id = self.get_properties_from_source_component()\n\n # Create or use existing Message object\n if isinstance(self.input_value, Message) and not self.is_connected_to_chat_input():\n message = self.input_value\n # Update message properties\n message.text = text\n else:\n message = Message(text=text)\n\n # Set message properties\n message.sender = self.sender\n message.sender_name = self.sender_name\n message.session_id = self.session_id or self.graph.session_id or \"\"\n message.context_id = self.context_id\n message.flow_id = self.graph.flow_id if hasattr(self, \"graph\") else None\n message.properties.source = self._build_source(source_id, display_name, source)\n\n # Store message if needed\n if message.session_id and self.should_store_message:\n stored_message = await self.send_message(message)\n self.message.value = stored_message\n message = stored_message\n\n self.status = message\n return message\n\n def _serialize_data(self, data: Data) -> str:\n \"\"\"Serialize Data object to JSON string.\"\"\"\n # Convert data.data to JSON-serializable format\n serializable_data = jsonable_encoder(data.data)\n # Serialize with orjson, enabling pretty printing with indentation\n json_bytes = orjson.dumps(serializable_data, option=orjson.OPT_INDENT_2)\n # Convert bytes to string and wrap in Markdown code blocks\n return \"```json\\n\" + json_bytes.decode(\"utf-8\") + \"\\n```\"\n\n def _validate_input(self) -> None:\n \"\"\"Validate the input data and raise ValueError if invalid.\"\"\"\n if self.input_value is None:\n msg = \"Input data cannot be None\"\n raise ValueError(msg)\n if isinstance(self.input_value, list) and not all(\n isinstance(item, Message | Data | DataFrame | str) for item in self.input_value\n ):\n invalid_types = [\n type(item).__name__\n for item in self.input_value\n if not isinstance(item, Message | Data | DataFrame | str)\n ]\n msg = f\"Expected Data or DataFrame or Message or str, got {invalid_types}\"\n raise TypeError(msg)\n if not isinstance(\n self.input_value,\n Message | Data | DataFrame | str | list | Generator | type(None),\n ):\n type_name = type(self.input_value).__name__\n msg = f\"Expected Data or DataFrame or Message or str, Generator or None, got {type_name}\"\n raise TypeError(msg)\n\n def convert_to_string(self) -> str | Generator[Any, None, None]:\n \"\"\"Convert input data to string with proper error handling.\"\"\"\n self._validate_input()\n if isinstance(self.input_value, list):\n 
clean_data: bool = getattr(self, \"clean_data\", False)\n return \"\\n\".join([safe_convert(item, clean_data=clean_data) for item in self.input_value])\n if isinstance(self.input_value, Generator):\n return self.input_value\n return safe_convert(self.input_value)\n"
},
"context_id": {
"_input_type": "MessageTextInput",
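
For context on the second hunk: the only behavioral change in the embedded ChatOutput code is session ID resolution. message.session_id now falls back from the component's own session_id input to the graph-level session (self.session_id or self.graph.session_id or ""), and the storage guard checks the resolved message.session_id instead of the raw input. Below is a minimal sketch of that resolution order in plain Python; Graph and resolve_session_id here are illustrative stand-ins, not Langflow API.

# Minimal sketch (illustrative, not Langflow API) of the session-ID
# fallback this diff adds to ChatOutput.message_response.
from dataclasses import dataclass


@dataclass
class Graph:
    """Stand-in for the component's self.graph."""

    session_id: str | None = None


def resolve_session_id(component_session_id: str | None, graph: Graph | None) -> str:
    # Mirrors: message.session_id = self.session_id or self.graph.session_id or ""
    graph_session = graph.session_id if graph is not None else None
    return component_session_id or graph_session or ""


# Old behavior: an empty session_id input meant the store check failed and the
# message was never persisted. New behavior: it inherits the graph session.
assert resolve_session_id("", Graph(session_id="graph-session")) == "graph-session"
assert resolve_session_id("explicit", Graph(session_id="graph-session")) == "explicit"
assert resolve_session_id(None, None) == ""

Because the guard now tests message.session_id, a message whose session ID was supplied only by the graph fallback is actually stored; under the old "if self.session_id" check it was silently skipped.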