Closed

65 commits
2f6f6cd
refactor: Use customization to get api base urls (#10871)
mfortman11 Dec 4, 2025
e948d5d
fix: Clean up the default startup logging (#10842)
erichare Dec 4, 2025
b4a54d8
refactor: add code sample customizations (#10884)
mfortman11 Dec 4, 2025
8aec1f3
Fix: lfx serve asyncio event loop error (#10887)
HzaRashid Dec 4, 2025
6cedc28
fix: Update LangflowCounts component to format star and Discord count…
viktoravelino Dec 4, 2025
d5a3124
Add test
jordanrfrazier Dec 5, 2025
e77d924
Revert "Add test"
jordanrfrazier Dec 5, 2025
7a053bd
Fix: update lfx serve tests to mock the .serve() to prevent hanging (…
HzaRashid Dec 5, 2025
7fc4208
fix: correctly raise file not found errors in File GET endpoints (#10…
jordanrfrazier Dec 8, 2025
c0d33c3
fix: image pathing to operate with s3 storage (#10919)
jordanrfrazier Dec 8, 2025
9a7ce34
fix: Add empty input check in ALTKAgent for Anthropic (#10913)
jsntsay Dec 8, 2025
9ed8321
fix: Composio's Freshdesk description error (#10760)
Uday-sidagana Dec 8, 2025
101cbd0
feat: updated Composio Github icon (#10764)
Uday-sidagana Dec 8, 2025
b0bae86
Feat: migrate MCP transport from SSE to streamable http (#10727)
HzaRashid Dec 8, 2025
a209acd
feat: Add a Unified Model Providers configuration (#10565)
HimavarshaVS Dec 9, 2025
12d17dc
fix(ci): Allow prerelease packages and fix runtime type imports (#10945)
Cristhianzl Dec 10, 2025
34216b2
fix: update sidebar icon styles to maintain backward compatibility (#…
viktoravelino Dec 10, 2025
840ec0f
feat: add tenacity retry in openserach (#10917)
edwinjosechittilappilly Dec 10, 2025
f3c08db
fix: Properly set a default Ollama base url (#10940)
erichare Dec 10, 2025
63141c1
feat: Ability to add custom colors for sticky notes. (#10961)
deon-sanchez Dec 10, 2025
2f9136c
fix: Support tool mode in components without inputs (#10959)
erichare Dec 11, 2025
397deff
fix: Proper MCP / Oauth support (#10965)
erichare Dec 11, 2025
70432fe
fix: prevent UI from getting stuck when switching to cURL mode after …
andifilhohub Dec 11, 2025
9463c4a
fix: Ensure dict return for MCP component (#10960)
erichare Dec 11, 2025
bce16ca
feat: Add Correlation ID for CHAIN & LLM Traces in ArizePhoenixTracer…
ialisaleh Dec 11, 2025
9d57aa8
bug: Fix watsonX embedding model selection (#10980)
deon-sanchez Dec 11, 2025
3fed9fe
fix: Add authentication to various endpoints (#10977)
erichare Dec 12, 2025
e9a537b
Fix: cuga integration (#10976)
sami-marreed Dec 12, 2025
24dbc60
Fix: ensure streamable-http session manager is entered and exited fro…
HzaRashid Dec 12, 2025
7ebccb8
Fix: Flowing edge for Boilerplate nodes with default width and height…
olayinkaadelakun Dec 12, 2025
3a41259
feat: Handle error events in OpenAI response streaming (#10844)
edwinjosechittilappilly Dec 12, 2025
f5e68c2
fix: Add graceful subprocess cleanup during shutdown (#10909)
Cristhianzl Dec 12, 2025
a54e508
fix: ruff errors in test openai respons api tests (#11005)
edwinjosechittilappilly Dec 12, 2025
d4ada6b
fix: mcp-proxy process leak (#10988)
phact Dec 12, 2025
757ef6c
feat: Add Hook for Auto Refresh of Model Provider Input (#10996)
deon-sanchez Dec 12, 2025
07a01ad
fix: Disable Local storage option in Write File component for cloud e…
HimavarshaVS Dec 12, 2025
7627660
Fix: disable mcp sse endpoints astra (#11006)
HzaRashid Dec 13, 2025
909bdb1
refactor: Improve image path extraction and validation (#11001)
Cristhianzl Dec 14, 2025
2a5ed55
Fix: Remove hidden from LCAgentComponent (#10984)
olayinkaadelakun Dec 15, 2025
53015c1
fix: cuga update (#11019)
sami-marreed Dec 15, 2025
056a76a
Fix: improve exception handling and status code for disabled endpoint…
HzaRashid Dec 15, 2025
c1c930b
fix: langwatch traces all api endpoints (#11013)
HzaRashid Dec 15, 2025
c970f99
docs: OpenAPI spec content updated without version change (#11032)
github-actions[bot] Dec 16, 2025
ae70c61
fix: Make sure loop inputs are properly handled in research (#11029)
erichare Dec 16, 2025
5ba7fe9
feat: add sliding container infrastructure to playgroundComponent
Nov 30, 2025
2191047
style: add border-radius to flow canvas and update Share button styling
Nov 30, 2025
7766290
fix: add sessionStorage fallback and update types (any -> unknown)
Nov 30, 2025
d4a3b30
feat: add chat-header feature to sliding container
Nov 30, 2025
b192fee
refactor: simplify FlowPageSlidingContainerContent by extracting sess…
Nov 30, 2025
b0b1bed
fix: restore session rename persistence logic from slide-chat-header …
Nov 30, 2025
0c96bbf
chore: revert formatting-only files to main (moved to infrastructure …
Nov 30, 2025
e108cc5
chore: sync formatting files with infrastructure branch
Nov 30, 2025
f956db3
rebase conflicts
Nov 30, 2025
524c54a
removed unecessary comment
Dec 1, 2025
279645b
fix: update messages store when renaming session to prevent content s…
Dec 1, 2025
d6a576f
refactor(ui): remove Sessions icon and adjust padding when playground…
Dec 3, 2025
db147e2
test: add ChatSidebar component tests
Dec 3, 2025
1c56d42
Add animated sidebar transition in playground chat
Dec 9, 2025
7db3329
Revert "Add animated sidebar transition in playground chat"
Dec 9, 2025
72216b2
Restore animated-close and align chat header with mini
Dec 15, 2025
9dd56dd
changes to fix animation using AnimatedConditional
Dec 16, 2025
05fb733
replace hooks
Dec 16, 2025
0704419
remove redundant hook and use use-add-session etc intead
Dec 16, 2025
6989beb
fix(auth): Disallow refresh token access to API endpoints (#10840)
mpawlow Dec 3, 2025
1a66d8a
changes to fix animation using AnimatedConditional
Dec 16, 2025
changes to fix animation using AnimatedConditional
Olfa Maslah authored and Olfa Maslah committed Dec 16, 2025
commit 1a66d8ace2c2c88f774d3b500d8e3b17259b4e58
.secrets.baseline (2 changes: 1 addition & 1 deletion)
@@ -1538,5 +1538,5 @@
}
]
},
"generated_at": "2025-12-02T04:40:43Z"
"generated_at": "2025-12-16T12:44:21Z"
}

Large diffs are not rendered by default. (22 files)

@@ -558,7 +558,7 @@
"legacy": false,
"lf_version": "1.4.2",
"metadata": {
"code_hash": "8c87e536cca4",
"code_hash": "76077de62755",
"dependencies": {
"dependencies": [
{
@@ -567,7 +567,7 @@
},
{
"name": "fastapi",
"version": "0.123.0"
"version": "0.120.0"
},
{
"name": "lfx",
@@ -632,7 +632,7 @@
"show": true,
"title_case": false,
"type": "code",
"value": "from collections.abc import Generator\nfrom typing import Any\n\nimport orjson\nfrom fastapi.encoders import jsonable_encoder\n\nfrom lfx.base.io.chat import ChatComponent\nfrom lfx.helpers.data import safe_convert\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, HandleInput, MessageTextInput\nfrom lfx.schema.data import Data\nfrom lfx.schema.dataframe import DataFrame\nfrom lfx.schema.message import Message\nfrom lfx.schema.properties import Source\nfrom lfx.template.field.base import Output\nfrom lfx.utils.constants import (\n MESSAGE_SENDER_AI,\n MESSAGE_SENDER_NAME_AI,\n MESSAGE_SENDER_USER,\n)\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n documentation: str = \"https://docs.langflow.org/chat-input-and-output\"\n icon = \"MessagesSquare\"\n name = \"ChatOutput\"\n minimized = True\n\n inputs = [\n HandleInput(\n name=\"input_value\",\n display_name=\"Inputs\",\n info=\"Message to be passed as output.\",\n input_types=[\"Data\", \"DataFrame\", \"Message\"],\n required=True,\n ),\n BoolInput(\n name=\"should_store_message\",\n display_name=\"Store Messages\",\n info=\"Store the message in the history.\",\n value=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[MESSAGE_SENDER_AI, MESSAGE_SENDER_USER],\n value=MESSAGE_SENDER_AI,\n advanced=True,\n info=\"Type of sender.\",\n ),\n MessageTextInput(\n name=\"sender_name\",\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=MESSAGE_SENDER_NAME_AI,\n advanced=True,\n ),\n MessageTextInput(\n name=\"session_id\",\n display_name=\"Session ID\",\n info=\"The session ID of the chat. If empty, the current session ID parameter will be used.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"context_id\",\n display_name=\"Context ID\",\n info=\"The context ID of the chat. Adds an extra layer to the local memory.\",\n value=\"\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"data_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n BoolInput(\n name=\"clean_data\",\n display_name=\"Basic Clean Data\",\n value=True,\n advanced=True,\n info=\"Whether to clean data before converting to string.\",\n ),\n ]\n outputs = [\n Output(\n display_name=\"Output Message\",\n name=\"message\",\n method=\"message_response\",\n ),\n ]\n\n def _build_source(self, id_: str | None, display_name: str | None, source: str | None) -> Source:\n source_dict = {}\n if id_:\n source_dict[\"id\"] = id_\n if display_name:\n source_dict[\"display_name\"] = display_name\n if source:\n # Handle case where source is a ChatOpenAI object\n if hasattr(source, \"model_name\"):\n source_dict[\"source\"] = source.model_name\n elif hasattr(source, \"model\"):\n source_dict[\"source\"] = str(source.model)\n else:\n source_dict[\"source\"] = str(source)\n return Source(**source_dict)\n\n async def message_response(self) -> Message:\n # First convert the input to string if needed\n text = self.convert_to_string()\n\n # Get source properties\n source, _, display_name, source_id = self.get_properties_from_source_component()\n\n # Create or use existing Message object\n if isinstance(self.input_value, Message) and not self.is_connected_to_chat_input():\n message = self.input_value\n # Update message properties\n message.text = text\n # Preserve existing session_id from the incoming message if it exists\n existing_session_id = message.session_id\n else:\n message = Message(text=text)\n existing_session_id = None\n\n # Set message properties\n message.sender = self.sender\n message.sender_name = self.sender_name\n # Preserve session_id from incoming message, or use component/graph session_id\n message.session_id = (\n self.session_id or existing_session_id or (self.graph.session_id if hasattr(self, \"graph\") else None) or \"\"\n )\n message.context_id = self.context_id\n message.flow_id = self.graph.flow_id if hasattr(self, \"graph\") else None\n message.properties.source = self._build_source(source_id, display_name, source)\n\n # Store message if needed\n if message.session_id and self.should_store_message:\n stored_message = await self.send_message(message)\n self.message.value = stored_message\n message = stored_message\n\n self.status = message\n return message\n\n def _serialize_data(self, data: Data) -> str:\n \"\"\"Serialize Data object to JSON string.\"\"\"\n # Convert data.data to JSON-serializable format\n serializable_data = jsonable_encoder(data.data)\n # Serialize with orjson, enabling pretty printing with indentation\n json_bytes = orjson.dumps(serializable_data, option=orjson.OPT_INDENT_2)\n # Convert bytes to string and wrap in Markdown code blocks\n return \"```json\\n\" + json_bytes.decode(\"utf-8\") + \"\\n```\"\n\n def _validate_input(self) -> None:\n \"\"\"Validate the input data and raise ValueError if invalid.\"\"\"\n if self.input_value is None:\n msg = \"Input data cannot be None\"\n raise ValueError(msg)\n if isinstance(self.input_value, list) and not all(\n isinstance(item, Message | Data | DataFrame | str) for item in self.input_value\n ):\n invalid_types = [\n type(item).__name__\n for item in self.input_value\n if not isinstance(item, Message | Data | DataFrame | str)\n ]\n msg = f\"Expected Data or DataFrame or Message or str, got {invalid_types}\"\n raise TypeError(msg)\n if not isinstance(\n self.input_value,\n Message | Data | DataFrame | str | list | Generator | type(None),\n ):\n type_name = type(self.input_value).__name__\n msg = f\"Expected Data or 
DataFrame or Message or str, Generator or None, got {type_name}\"\n raise TypeError(msg)\n\n def convert_to_string(self) -> str | Generator[Any, None, None]:\n \"\"\"Convert input data to string with proper error handling.\"\"\"\n self._validate_input()\n if isinstance(self.input_value, list):\n clean_data: bool = getattr(self, \"clean_data\", False)\n return \"\\n\".join([safe_convert(item, clean_data=clean_data) for item in self.input_value])\n if isinstance(self.input_value, Generator):\n return self.input_value\n return safe_convert(self.input_value)\n"
"value": "from collections.abc import Generator\nfrom typing import Any\n\nimport orjson\nimport uuid\nfrom fastapi.encoders import jsonable_encoder\n\nfrom lfx.base.io.chat import ChatComponent\nfrom lfx.helpers.data import safe_convert\nfrom lfx.inputs.inputs import BoolInput, DropdownInput, HandleInput, MessageTextInput\nfrom lfx.schema.data import Data\nfrom lfx.schema.dataframe import DataFrame\nfrom lfx.schema.message import Message\nfrom lfx.schema.properties import Source\nfrom lfx.template.field.base import Output\nfrom lfx.utils.constants import (\n MESSAGE_SENDER_AI,\n MESSAGE_SENDER_NAME_AI,\n MESSAGE_SENDER_USER,\n)\n\n\nclass ChatOutput(ChatComponent):\n display_name = \"Chat Output\"\n description = \"Display a chat message in the Playground.\"\n documentation: str = \"https://docs.langflow.org/components-io#chat-output\"\n icon = \"MessagesSquare\"\n name = \"ChatOutput\"\n minimized = True\n\n inputs = [\n HandleInput(\n name=\"input_value\",\n display_name=\"Inputs\",\n info=\"Message to be passed as output.\",\n input_types=[\"Data\", \"DataFrame\", \"Message\"],\n required=True,\n ),\n BoolInput(\n name=\"should_store_message\",\n display_name=\"Store Messages\",\n info=\"Store the message in the history.\",\n value=True,\n advanced=True,\n ),\n DropdownInput(\n name=\"sender\",\n display_name=\"Sender Type\",\n options=[MESSAGE_SENDER_AI, MESSAGE_SENDER_USER],\n value=MESSAGE_SENDER_AI,\n advanced=True,\n info=\"Type of sender.\",\n ),\n MessageTextInput(\n name=\"sender_name\",\n display_name=\"Sender Name\",\n info=\"Name of the sender.\",\n value=MESSAGE_SENDER_NAME_AI,\n advanced=True,\n ),\n MessageTextInput(\n name=\"session_id\",\n display_name=\"Session ID\",\n info=\"The session ID of the chat. If empty, the current session ID parameter will be used.\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"context_id\",\n display_name=\"Context ID\",\n info=\"The context ID of the chat. Adds an extra layer to the local memory.\",\n value=\"\",\n advanced=True,\n ),\n MessageTextInput(\n name=\"data_template\",\n display_name=\"Data Template\",\n value=\"{text}\",\n advanced=True,\n info=\"Template to convert Data to Text. 
If left empty, it will be dynamically set to the Data's text key.\",\n ),\n BoolInput(\n name=\"clean_data\",\n display_name=\"Basic Clean Data\",\n value=True,\n advanced=True,\n info=\"Whether to clean data before converting to string.\",\n ),\n ]\n outputs = [\n Output(\n display_name=\"Output Message\",\n name=\"message\",\n method=\"message_response\",\n ),\n ]\n\n def _build_source(self, id_: str | None, display_name: str | None, source: str | None) -> Source:\n source_dict = {}\n if id_:\n source_dict[\"id\"] = id_\n if display_name:\n source_dict[\"display_name\"] = display_name\n if source:\n # Handle case where source is a ChatOpenAI object\n if hasattr(source, \"model_name\"):\n source_dict[\"source\"] = source.model_name\n elif hasattr(source, \"model\"):\n source_dict[\"source\"] = str(source.model)\n else:\n source_dict[\"source\"] = str(source)\n return Source(**source_dict)\n\n async def message_response(self) -> Message:\n # First convert the input to string if needed\n text = self.convert_to_string()\n\n # Get source properties\n source, _, display_name, source_id = self.get_properties_from_source_component()\n\n # Create or use existing Message object\n if isinstance(self.input_value, Message):\n message = self.input_value\n # Update message properties\n message.text = text\n message.id = uuid.uuid4()\n else:\n message = Message(text=text)\n\n # Set message properties\n message.sender = self.sender\n message.sender_name = self.sender_name\n message.session_id = self.session_id\n message.context_id = self.context_id\n message.flow_id = self.graph.flow_id if hasattr(self, \"graph\") else None\n message.properties.source = self._build_source(source_id, display_name, source)\n\n # Store message if needed\n if self.session_id and self.should_store_message:\n stored_message = await self.send_message(message)\n self.message.value = stored_message\n message = stored_message\n\n self.status = message\n return message\n\n def _serialize_data(self, data: Data) -> str:\n \"\"\"Serialize Data object to JSON string.\"\"\"\n # Convert data.data to JSON-serializable format\n serializable_data = jsonable_encoder(data.data)\n # Serialize with orjson, enabling pretty printing with indentation\n json_bytes = orjson.dumps(serializable_data, option=orjson.OPT_INDENT_2)\n # Convert bytes to string and wrap in Markdown code blocks\n return \"```json\\n\" + json_bytes.decode(\"utf-8\") + \"\\n```\"\n\n def _validate_input(self) -> None:\n \"\"\"Validate the input data and raise ValueError if invalid.\"\"\"\n if self.input_value is None:\n msg = \"Input data cannot be None\"\n raise ValueError(msg)\n if isinstance(self.input_value, list) and not all(\n isinstance(item, Message | Data | DataFrame | str) for item in self.input_value\n ):\n invalid_types = [\n type(item).__name__\n for item in self.input_value\n if not isinstance(item, Message | Data | DataFrame | str)\n ]\n msg = f\"Expected Data or DataFrame or Message or str, got {invalid_types}\"\n raise TypeError(msg)\n if not isinstance(\n self.input_value,\n Message | Data | DataFrame | str | list | Generator | type(None),\n ):\n type_name = type(self.input_value).__name__\n msg = f\"Expected Data or DataFrame or Message or str, Generator or None, got {type_name}\"\n raise TypeError(msg)\n\n def convert_to_string(self) -> str | Generator[Any, None, None]:\n \"\"\"Convert input data to string with proper error handling.\"\"\"\n self._validate_input()\n if isinstance(self.input_value, list):\n clean_data: bool = getattr(self, \"clean_data\", 
False)\n return \"\\n\".join([safe_convert(item, clean_data=clean_data) for item in self.input_value])\n if isinstance(self.input_value, Generator):\n return self.input_value\n return safe_convert(self.input_value)\n"
},
"context_id": {
"_input_type": "MessageTextInput",
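For readers comparing the two `value` blobs in the diff above: the new Chat Output code adds `import uuid`, points `documentation` at https://docs.langflow.org/components-io#chat-output, and simplifies `message_response`. When the incoming value is already a `Message`, the component now reuses it but assigns a fresh `message.id = uuid.uuid4()`, drops the old `is_connected_to_chat_input()` check, and takes `session_id` directly from the component instead of the previous fallback chain (component, then incoming message, then graph). A minimal standalone sketch of that reuse logic, with a plain dataclass standing in for `lfx.schema.message.Message` and a hypothetical `as_output` helper (neither is part of the actual component):

```python
import uuid
from dataclasses import dataclass


@dataclass
class Message:
    """Stand-in for lfx.schema.message.Message (assumption: only the fields used here)."""

    text: str
    id: uuid.UUID | None = None
    session_id: str = ""


def as_output(incoming: Message, text: str, session_id: str) -> Message:
    """Hypothetical helper mirroring the new reuse branch of message_response."""
    incoming.text = text
    # New in this commit: the reused Message gets a fresh id.
    incoming.id = uuid.uuid4()
    # Also new: session_id comes straight from the component's own input;
    # the old fallback to the incoming message's session_id is gone.
    incoming.session_id = session_id
    return incoming


msg = Message(text="hi", id=uuid.uuid4(), session_id="upstream-session")
out = as_output(msg, "hello", "component-session")
assert out.id is not None
assert out.session_id == "component-session"
```

Whether minting a new id on reuse is meant to store a separate record rather than overwrite the upstream Chat Input's message is not stated in the diff; the sketch only mirrors the observable assignments.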