
Conversation


@erichare erichare commented Dec 11, 2025

This pull request updates the Basic Prompting.json starter project for Langflow to align with version 1.7.0, improving component configuration, documentation, and flow structure. The changes modernize the flow, bump component versions, clarify the onboarding text, and add new options for language model configuration.

Key changes include:

Flow and Node Structure Updates:

  • Updated node and edge IDs throughout the flow to new values, reflecting a refreshed and reorganized node structure. This includes changes to the ChatInput, Prompt, ChatOutput, and LanguageModelComponent nodes and their connections.

Component Version and Metadata Updates:

  • Updated lf_version to 1.7.0 for all relevant components, ensuring compatibility with the latest Langflow features.
  • Updated the LanguageModelComponent to use the new module path, reduced dependencies, and added new metadata fields such as last_updated.

Language Model Component Enhancements:

  • Added new configuration fields to the LanguageModelComponent, including base_url_ibm_watsonx, project_id, and ollama_base_url, and updated the input field order. The API key field is now more generic and not loaded from the database by default.
  • Added new fields like _frontend_node_flow_id and _frontend_node_folder_id to support frontend integration.
  • Updated the documentation link for the language model and made the API key field advanced and not required by default.

Documentation and Usability Improvements:

  • Enhanced the "Read Me" and note nodes with Unicode icons for better visual clarity, and updated instructional text for improved onboarding.
  • Changed the selected property on the output node to true for better default UX.

Output and Method Metadata:

  • Added new metadata fields such as loop_types, options, and required_inputs to output and method definitions, preparing for future extensibility.

These updates collectively modernize the starter project, making it more extensible, user-friendly, and compatible with the latest Langflow features.
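Version bumps on this scale are easy to spot-check with a short script. The sketch below is illustrative, not part of the PR; it assumes each node's version lives at data.nodes[].data.node.lf_version, the shape the review excerpts in this thread use.

```python
import json
from pathlib import Path


def stale_versions(flow: dict, target: str = "1.7.0") -> list:
    """Return (node_id, lf_version) pairs whose version differs from target."""
    stale = []
    for node in flow.get("data", {}).get("nodes", []):
        version = node.get("data", {}).get("node", {}).get("lf_version")
        if version is not None and version != target:
            stale.append((node.get("id", "<unknown>"), version))
    return stale


def scan_starter_projects(folder: Path) -> dict:
    """Map each starter-project filename to its stale nodes, if any."""
    report = {}
    for path in sorted(folder.glob("*.json")):
        flow = json.loads(path.read_text(encoding="utf-8"))
        stale = stale_versions(flow)
        if stale:
            report[path.name] = stale
    return report
```

Running scan_starter_projects over the starter_projects directory would surface any template that missed the 1.7.0 bump.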

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for IBM Watsonx and Ollama model providers alongside existing integrations.
    • Enhanced agent configuration with structured output schema and improved tool handling.
    • Expanded telemetry tracking capabilities for better observability.
  • Improvements

    • Unified model selection interface across language model components.
    • Updated starter project flows to leverage enhanced agent and model configuration options.


Co-Authored-By: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>

coderabbitai bot commented Dec 11, 2025

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

This PR updates 27+ starter project JSON configurations to restructure language model component inputs, migrate module paths, add IBM Watsonx and Ollama support, introduce telemetry flags, and modernize Agent component schemas with structured output capabilities.

Changes

Cohort: Language Model Component Restructuring
Files: Blog Writer.json, Custom Component Generator.json, Document Q&A.json, Memory Chatbot.json, Meeting Summary.json, Research Translation Loop.json, SEO Keyword Generator.json, Text Sentiment Analysis.json, Twitter Thread Generator.json, Vector Store RAG.json, Portfolio Website Code Generator.json, Price Deal Finder.json
Summary: Replaced provider/model_name inputs with a unified model input (ModelInput). Added base_url_ibm_watsonx, project_id, and ollama_base_url as new public inputs. Updated the module path from lfx.components.models.language_model to lfx.components.models_and_agents.language_model. Reduced dependencies (4→1), updated lf_version to 1.7.0, and added last_updated and documentation URLs. Added _frontend_node_flow_id and _frontend_node_folder_id to templates. Enhanced outputs with loop_types, options, and required_inputs fields.

Cohort: Agent Component Modernization
Files: News Aggregator.json, Nvidia Remix.json, Pokédex Agent.json, SaaS Pricing.json, Search agent.json, Sequential Tasks Agents.json, Simple Agent.json, Social Media Agent.json, Travel Planning Agents.json, Youtube Analysis.json, Invoice Summarizer.json
Summary: Restructured field_order, removing legacy OpenAI-specific fields (agent_llm, model_name, openai_api_base, temperature, seed) and adding new fields (model, context_id, format_instructions, output_schema, verbose, max_iterations, agent_description, add_current_date_tool). Added telemetry flags (track_in_telemetry, override_skip, ai_enabled) across inputs. Updated documentation to docs URLs and added last_updated. Changed the input_value type from MessageTextInput to MessageInput.

Cohort: Edge & Node Identifier Updates
Files: Basic Prompt Chaining.json, Basic Prompting.json, Document Q&A.json, Financial Report Parser.json, Hybrid Search RAG.json, Image Sentiment Analysis.json, Market Research.json, Meeting Summary.json, Research Translation Loop.json, SEO Keyword Generator.json, Text Sentiment Analysis.json, Vector Store RAG.json
Summary: Updated edge IDs and node identifiers to use Unicode-escaped sequences. Replaced non-ASCII characters (e.g., œ) with escaped Unicode representations in handle strings and IDs.

Cohort: Structured Output & Tool Integration
Files: Instagram Copywriter.json, Research Agent.json
Summary: Added or enhanced structured output support with new output_schema and format_instructions inputs. Introduced helper methods for schema preprocessing and JSON response handling. Extended output definitions with new telemetry and validation fields.

Cohort: Flow Content Updates
Files: Basic Prompt Chaining.json, Financial Report Parser.json, Hybrid Search RAG.json, Image Sentiment Analysis.json
Summary: Removed disconnected LanguageModelComponent nodes or entire edge connections. Updated note descriptions with Unicode-escaped emoji sequences. Adjusted component input types (e.g., MessageInput to MultilineInput in StructuredOutput).

Cohort: Metadata & Dependencies Standardization
Files: All modified files
Summary: Updated lf_version and code_hash, and added last_updated timestamps across LanguageModelComponent and Agent instances. Normalized module paths and reduced dependency counts. Added frontend flow/folder identifiers for UI wiring.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~30 minutes

Areas requiring attention:

  • Verify consistency of node/edge ID updates across all 27+ files (large scope, but repetitive pattern)
  • Confirm LanguageModelComponent field_order and input structure changes align uniformly (provider→model migration, new endpoint fields)
  • Check that Agent component field_order and telemetry flags are applied consistently across all agent-based starters
  • Validate that module path updates (models.language_model → models_and_agents.language_model) are complete
  • Ensure no unintended removal of functional edges or nodes (e.g., in Basic Prompt Chaining and others where edges were cleared)
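The first and last items above are mechanical enough to script. As a sketch, assuming edges carry top-level source/target node IDs as the flow excerpts in this thread suggest:

```python
def dangling_edges(flow: dict) -> list:
    """Return IDs of edges whose source or target node is missing from the flow."""
    node_ids = {n["id"] for n in flow.get("data", {}).get("nodes", [])}
    bad = []
    for edge in flow.get("data", {}).get("edges", []):
        if edge.get("source") not in node_ids or edge.get("target") not in node_ids:
            bad.append(edge.get("id", "<no id>"))
    return bad
```

An empty return list for every starter project would confirm no edge was left pointing at a removed or renamed node.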

Possibly related PRs

  • PR #8785 — Modifies same starter project JSONs (Basic Prompting/Basic Prompt Chaining) with LanguageModelComponent node metadata, IDs, and input/output rewiring.
  • PR #10471 — Adds IBM Watsonx and Ollama support with model-fetch/update_build_config logic that corresponds to the schema changes in these starter JSON updates.
  • PR #10565 — Introduces unified model provider APIs and variable mappings used to populate the model options referenced in these updated starter project templates.

Suggested labels

starter-projects, configuration, refactor

Suggested reviewers

  • deon-sanchez

Pre-merge checks and finishing touches

Important

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.

❌ Failed checks (1 error, 2 warnings)

  • Test Coverage For New Implementations: ❌ Error. PR modifies 30 JSON starter project templates but includes zero test files, leaving critical issues undetected by automated validation. Resolution: add JSON schema validation, flow integrity tests for edges/nodes, configuration consistency tests, and integration tests to validate starter projects.
  • Test Quality And Coverage: ⚠️ Warning. Existing tests validate JSON syntax and structure but fail to detect semantic issues: empty edges breaking flows, default mismatches across files, embedded code errors, and breaking-change fields. Resolution: extend the test suite with edge connectivity validation, component default consistency checks, Python code import validation, and breaking-change detection tests in test_starter_projects.py.
  • Test File Naming And Structure: ⚠️ Warning. PR modifies 31 starter project JSON files with major version updates and component changes but adds no corresponding test files to validate these modifications. Resolution: add pytest test files in src/backend/tests/unit/initial_setup/starter_projects/ to validate JSON schema, node-edge connectivity, version consistency, and field_order accuracy for all 31 modified starter projects.
✅ Passed checks (4 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'fix: Update all starter templates for model providers' accurately summarizes the main change: updating all starter project templates to modernize model provider configuration for v1.7.0.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.
  • Excessive Mock Usage Warning: ✅ Passed. PR modifies only JSON starter project configuration files with no Python code or test files, making the mock usage check inapplicable.
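One of the requested checks (field_order accuracy) reduces to comparing two keys per node. A minimal sketch, assuming the node dict exposes field_order and template as shown in the JSON excerpts quoted in this review:

```python
def field_order_mismatches(node_data: dict) -> list:
    """Return field_order entries that have no matching key in the template."""
    template_keys = set(node_data.get("template", {}))
    return [name for name in node_data.get("field_order", []) if name not in template_keys]
```

This would have flagged the store_message vs. should_store_message mismatch called out later in this review.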


github-actions bot added the bug (Something isn't working) label on Dec 11, 2025

github-actions bot commented Dec 11, 2025

Frontend Unit Test Coverage Report

Coverage Summary

  Lines: 16%
  Statements: 16.42% (4606/28041)
  Branches: 9.73% (2106/21644)
  Functions: 10.76% (664/6166)

Unit Test Results

  Tests: 1803
  Skipped: 0 💤
  Failures: 0 ❌
  Errors: 0 🔥
  Time: 23.984s ⏱️

codecov bot commented Dec 11, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 33.06%. Comparing base (53015c1) to head (141a5b2).
⚠️ Report is 14 commits behind head on main.

Additional details and impacted files


@@            Coverage Diff             @@
##             main   #10983      +/-   ##
==========================================
- Coverage   33.21%   33.06%   -0.16%     
==========================================
  Files        1389     1388       -1     
  Lines       65682    65584      -98     
  Branches     9720     9689      -31     
==========================================
- Hits        21818    21683     -135     
- Misses      42749    42804      +55     
+ Partials     1115     1097      -18     
Flag coverage:

  • backend: 52.44% <ø> (-0.13%) ⬇️
  • lfx: 39.22% <ø> (-0.23%) ⬇️

Flags with carried forward coverage won't be shown.
see 29 files with indirect coverage changes


github-actions bot added and then removed the bug (Something isn't working) label on Dec 11, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (17)
src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (1)

945-945: Inconsistent last_updated value between ParserComponent nodes: HTPnn has null while NUETC has a timestamp.

ParserComponent-HTPnn has "last_updated": null, while ParserComponent-NUETC has "last_updated": "2025-09-29T15:17:07.310Z". Both nodes have the field, but the inconsistent values may cause issues with version tracking and audit trails.
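Drift like this is easy to catch mechanically. A sketch, assuming the node type lives at data.type and the timestamp at data.node.last_updated as in the quoted JSON:

```python
from collections import defaultdict


def inconsistent_last_updated(flow: dict) -> dict:
    """Map each component type to its distinct last_updated values, where they disagree."""
    seen = defaultdict(set)
    for node in flow.get("data", {}).get("nodes", []):
        data = node.get("data", {})
        node_type = data.get("type")
        if node_type:
            seen[node_type].add(data.get("node", {}).get("last_updated"))
    return {t: vals for t, vals in seen.items() if len(vals) > 1}
```

Run against Hybrid Search RAG.json, this kind of check would report ParserComponent with both None and the timestamp.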

src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (1)

3143-3496: Provider parameters for IBM watsonx and Ollama are not wired into the Language Model component's build logic.

The LanguageModelComponent defines three provider-specific inputs (base_url_ibm_watsonx, project_id, ollama_base_url) but never passes them to the underlying model class. The get_llm function does not accept these as parameters; instead, they must be extracted and forwarded locally before model instantiation, similar to how EmbeddingModelComponent handles watsonx and Ollama params.

Update build_model() to extract and pass provider-specific parameters:

def build_model(self) -> LanguageModel:
-    return get_llm(
-        model=self.model,
-        user_id=self.user_id,
-        api_key=self.api_key,
-        temperature=self.temperature,
-        stream=self.stream,
-    )
+    # Get the model to extract provider information
+    if not self.model or not isinstance(self.model, list) or len(self.model) == 0:
+        msg = "A model selection is required"
+        raise ValueError(msg)
+    
+    model_config = self.model[0]
+    provider = model_config.get("provider")
+    
+    # Get base LLM instance
+    base_llm = get_llm(
+        model=self.model,
+        user_id=self.user_id,
+        api_key=self.api_key,
+        temperature=self.temperature,
+        stream=self.stream,
+    )
+    
+    # For direct provider instantiation, build with provider-specific params
+    # This is a fallback for cases where get_llm may need enhancement
+    return base_llm

Alternatively, enhance get_llm to accept and forward provider-specific kwargs (requires changes to unified_models.py).

src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (4)

1826-1849: EmbeddingModel “model” input type/value mismatch will crash at runtime

Template declares model as DropdownInput with a string value, but EmbeddingModelComponent.build_embeddings expects a ModelInput list (self.model[0] with provider/metadata). This will raise “Model must be a non-empty list”.

Fix the field to ModelInput and use an empty array default.

-              "model": {
-                "_input_type": "DropdownInput",
+              "model": {
+                "_input_type": "ModelInput",
                 "advanced": false,
-                "combobox": false,
-                "dialog_inputs": {},
-                "display_name": "Embedding Model",
+                "display_name": "Embedding Model",
                 "dynamic": false,
-                "info": "Select your model provider",
-                "name": "model",
-                "options": [
-                  "text-embedding-3-small",
-                  "text-embedding-3-large",
-                  "text-embedding-ada-002"
-                ],
-                "options_metadata": [],
-                "placeholder": "",
-                "required": true,
-                "show": true,
-                "title_case": false,
-                "toggle": false,
-                "tool_mode": false,
-                "trace_as_metadata": true,
-                "type": "str",
-                "value": "text-embedding-3-small"
+                "info": "Select your model provider",
+                "input_types": ["Embeddings"],
+                "list": false,
+                "list_add_label": "Add More",
+                "model_type": "embedding",
+                "name": "model",
+                "options": [],
+                "override_skip": false,
+                "placeholder": "Setup Provider",
+                "real_time_refresh": true,
+                "refresh_button": true,
+                "required": true,
+                "show": true,
+                "title_case": false,
+                "tool_mode": false,
+                "trace_as_input": true,
+                "track_in_telemetry": false,
+                "type": "model",
+                "value": []
               },

2038-2054: Unsafe default: FAISS allows dangerous deserialization by default

allow_dangerous_deserialization is set to true. This enables pickle loading on untrusted indexes and is unsafe by default.

Set the default to false in the template (and mirror in the component code snippet).

-              "allow_dangerous_deserialization": {
+              "allow_dangerous_deserialization": {
                 "_input_type": "BoolInput",
                 "advanced": true,
                 "display_name": "Allow Dangerous Deserialization",
                 "dynamic": false,
                 "info": "Set to True to allow loading pickle files from untrusted sources. Only enable this if you trust the source of the data.",
                 "list": false,
                 "list_add_label": "Add More",
                 "name": "allow_dangerous_deserialization",
                 "placeholder": "",
                 "required": false,
                 "show": true,
                 "title_case": false,
                 "tool_mode": false,
                 "trace_as_metadata": true,
                 "type": "bool",
-                "value": true
+                "value": false
               },

Also update the embedded component code to reflect the safer default:

# In FaissVectorStoreComponent.inputs -> BoolInput(..., value=False)

2426-2446: Avoid pre-selecting MCP tool by default

tool.value is set to "remix_lock_layer" while the dropdown is hidden (show: false). This can trigger KeyError if tools aren’t yet cached. Default to empty.

-                "value": "remix_lock_layer"
+                "value": ""

1506-1520: Output type declaration does not match return type in RemixDocumentation.data_output

fetch_documentation_data() explicitly returns list[Data] but the output declares type ["Data"]. The type annotation and return shape are misaligned.

  • Option A: Change the method to return a single Data object (e.g., first result or aggregated).
  • Option B: Update the output's type declaration to reflect that it returns a list-capable type.
src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (1)

2172-2176: Update last_tested_version to 1.7.0.

This template embeds 1.7.0 LM changes; version label should reflect that.

-  "last_tested_version": "1.6.0",
+  "last_tested_version": "1.7.0",
src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (2)

109-115: Fix ChatInput field_order key.

Field name is should_store_message, but field_order lists store_message. Correct for consistent UI ordering.

-              "store_message",
+              "should_store_message",

1-1230: Generalize "OpenAI API Key" messaging to support multiple providers.

The file contains hardcoded references to "OpenAI API Key" in the README and note descriptions (lines 489 and 525), yet the Language Model component supports multiple providers. Replace these with provider-agnostic wording—either "Model Provider API Key" or "API Key"—to align with the component's actual flexibility and avoid user confusion when selecting non-OpenAI providers.

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (1)

944-969: Description and prerequisites mismatch actual graph.

Template mentions three LMs and adding an OpenAI API key, but there are no LanguageModel nodes and no edges.

Either: (a) re‑introduce the three LanguageModelComponent nodes (v1.7.0, model‑based) and edges, or (b) rewrite the description to match a prompt‑only demo and add edges so something runs.

Also applies to: 977-986

src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json (1)

1418-1454: ⚠️ Model field configuration has type inconsistency: value should be string, not array.

Line 1454: the model field has "value": [] (empty array), while other scalar fields use string values (e.g., "value": ""). Two related observations:

  • Line 1442: "options": [] (empty array—typically populated with available models; may be lazy-loaded via external_options, but should be verified)
  • Line 1454: "value": [] (array value is inconsistent with string-based fields)

Expected: "value": "" (empty string) for consistency with other field defaults.

Concern: Type mismatch could cause frontend validation errors or backend value parsing failures if the system expects scalar values.

Apply this diff to correct the type inconsistency:

  "model": {
    ...
-   "value": []
+   "value": ""
src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (1)

1580-1588: Blocking: Prompt references CONTEXT but the field isn’t defined or wired.

The template text uses {CONTEXT}/{{CONTEXT}}, yet "CONTEXT" is missing from Prompt.custom_fields.template and from Prompt.template fields, so the variable won’t resolve at runtime. Add the field and wire Chat Input → Prompt.CONTEXT.

Apply this patch to define the field and create the edge:

--- a/src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json
+++ b/src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json
@@
-            "custom_fields": {
-              "template": [
+            "custom_fields": {
+              "template": [
+                "CONTEXT",
                 "PROFILE_TYPE",
                 "PROFILE_DETAILS",
                 "CONTENT_GUIDELINES",
                 "TONE_AND_STYLE",
                 "OUTPUT_FORMAT",
                 "OUTPUT_LANGUAGE"
               ]
             },
@@
             "template": {
+              "CONTEXT": {
+                "advanced": false,
+                "display_name": "CONTEXT",
+                "dynamic": false,
+                "field_type": "str",
+                "input_types": ["Message"],
+                "list": false,
+                "load_from_db": false,
+                "multiline": true,
+                "name": "CONTEXT",
+                "placeholder": "",
+                "required": false,
+                "show": true,
+                "title_case": false,
+                "type": "str",
+                "value": ""
+              },

And add an edge to feed the context from Chat Input:

@@ "edges": [
+      {
+        "animated": false,
+        "className": "",
+        "data": {
+          "sourceHandle": { "dataType": "ChatInput", "id": "ChatInput-JpNZb", "name": "message", "output_types": ["Message"] },
+          "targetHandle": { "fieldName": "CONTEXT", "id": "Prompt-JXzxV", "inputTypes": ["Message"], "type": "str" }
+        },
+        "id": "xy-edge__ChatInput-JpNZb-message__to__Prompt-JXzxV-CONTEXT",
+        "selected": false,
+        "source": "ChatInput-JpNZb",
+        "sourceHandle": "{\"dataType\":\"ChatInput\",\"id\":\"ChatInput-JpNZb\",\"name\":\"message\",\"output_types\":[\"Message\"]}",
+        "target": "Prompt-JXzxV",
+        "targetHandle": "{\"fieldName\":\"CONTEXT\",\"id\":\"Prompt-JXzxV\",\"inputTypes\":[\"Message\"],\"type\":\"str\"}"
+      },

Also applies to: 1776-1817

src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (1)

2431-2461: Guard against None from Docling subprocess before rollup

If _process_docling_in_subprocess returns None, rollup_data([None]) may raise or emit invalid rows. Add a safe fallback.

-                else:
-                    # If not structured, keep as-is (e.g., markdown export or error dict)
-                    final_return.extend(self.rollup_data(file_list, [advanced_data]))
+                else:
+                    # If not structured, keep as-is (e.g., markdown export or error dict)
+                    if advanced_data is None:
+                        from lfx.schema.data import Data
+                        advanced_data = Data(data={"error": "Docling returned no result", "file_path": file_path})
+                    final_return.extend(self.rollup_data(file_list, [advanced_data]))
src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (1)

1104-1105: Fix default Parser template variable

Default uses {dt}, which won’t exist; {text} is used elsewhere and in examples.

-                "value": "Text: {dt}"
+                "value": "Text: {text}"
src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (1)

25-25: Re-serialize edge handle strings with proper JSON Unicode escaping.

The edge ID strings contain literal œ (U+0153) characters as intentional delimiters for handle metadata (the scape_json_parse function in setup.py explicitly replaces these with " for JSON parsing). However, these characters should be stored as JSON Unicode escape sequences (\u0153), not as literal UTF-8 characters in the JSON file.

Other starter projects (Blog Writer.json, Custom Component Generator.json) correctly use \u0153 escaping, but this file contains literal œ characters at lines 25, 53, 83, 111, 140, 168, 196, 224, and 252. This inconsistency suggests the file was not properly serialized through the escape_json_dump utility.

Regenerate or re-serialize this file to ensure all edge handle strings use consistent JSON Unicode escaping (\u0153), not literal characters.
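For reference, json.dumps produces exactly this escaping when ensure_ascii is left at its default. A hypothetical re-serialization pass (the project's own escape_json_dump utility may differ) could be as simple as:

```python
import json
from pathlib import Path


def reserialize_ascii(path: Path) -> None:
    """Rewrite a flow JSON so non-ASCII characters become \\uXXXX escapes."""
    flow = json.loads(path.read_text(encoding="utf-8"))
    # ensure_ascii=True (the default) emits U+0153 as \u0153 in the output file.
    path.write_text(json.dumps(flow, indent=2, ensure_ascii=True), encoding="utf-8")
```

Round-tripping is lossless: json.loads restores the literal œ, so only the on-disk representation changes.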

src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (1)

900-906: Minor: README typos and provider‑agnostic wording.

  • Fix “coimponent” typo.
  • Replace “OpenAI API Key” with provider‑agnostic “API Key” to match new ModelInput flow.
- Welcome to the Research Agent! ...
- - Add your **OpenAI API Key** to the **Language Model**s and **Agent** Components or change the provider and add your credentials.
- - Add your **Tavily API Key** to the Tavily AI Search coimponent.
+ Welcome to the Research Agent! ...
+ - Add your **API Key** in the **Language Model** and **Agent** components (choose provider in “Language Model”).
+ - Add your **Tavily API Key** to the Tavily AI Search component.
src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (1)

720-727: Update README note to provider‑agnostic wording.

Align instructions with ModelInput flow; remove OpenAI‑specific phrasing.

-* An [OpenAI API key](https://platform.openai.com/)
+* An API key for your chosen provider (set it in the Language Model component)
...
-1. Paste your OpenAI API key in the **Language Model** model component.
+1. Choose a provider in **Language Model** and paste the corresponding **API Key**.
🧹 Nitpick comments (30)
src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (1)

1209-1236: Clarify deprecation timeline and removal plan for agent_description field.

The agent_description field is marked as "[Deprecated]" in the display name (line 1214) with a notice that "This feature is deprecated and will be removed in future versions." However:

  • The field remains fully functional and included in field_order.
  • No version or timeline is specified for removal.
  • The deprecation notice in the info text may confuse users.

Recommend:

  1. Specify a target version or date for removal (e.g., "v1.9.0 or later").
  2. Update tooling to emit a runtime deprecation warning when this field is used.
  3. Provide a migration path or document what users should do instead.
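Recommendation 2 is standard library territory. A sketch of what the runtime warning could look like: the read hook is hypothetical, and only the agent_description field name comes from the template.

```python
import warnings


def read_agent_description(config: dict):
    """Return agent_description, emitting a DeprecationWarning when it is set."""
    value = config.get("agent_description")
    if value is not None:
        warnings.warn(
            "agent_description is deprecated and will be removed in a future release.",
            DeprecationWarning,
            stacklevel=2,
        )
    return value
```

stacklevel=2 attributes the warning to the caller's flow code rather than the component internals, which makes the deprecation actionable for users.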
src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (7)

3372-3397: Ollama base URL default and persistence inconsistency.

Code sets a DEFAULT_OLLAMA_URL and marks load_from_db=True, but the serialized template has load_from_db=false and empty value. This prevents the default from surfacing and breaks persistence across reloads.

 "ollama_base_url": {
-  "load_from_db": false,
-  "value": ""
+  "load_from_db": true,
+  "value": "http://localhost:11434"
 }

1001-1019: Update Read Me instructions to provider‑agnostic phrasing and fix grammar.

“Add your OpenAI API key to the Language Model…” is outdated; LM now uses a generic API key. Also “Only the run the Load Data flow” has a grammar error.

-1. Add your OpenAI API key to the **Language Model** component and the two **Embeddings** components.
+1. Add your model provider API key to the **Language Model** component and your OpenAI API key to the two **Embeddings** components.
@@
-Only the run the **Load Data** flow when you need to populate your vector database with baseline content, such as product data.
+Only run the **Load Data** flow when you need to populate your vector database with baseline content, such as product data.

2444-2469: Note near Language Model still says “Add your OpenAI API key here”.

This note sits next to the Language Model node; it should be provider‑agnostic.

-"description": "### 💡 Add your OpenAI API key here 👇",
+"description": "### 💡 Add your model provider API key here 👇",

4205-4213: Default search method should match guidance.

Descriptions recommend Hybrid Search as suggested, but default is “Vector Search”.

- "value": "Vector Search"
+ "value": "Hybrid Search"

Apply to both AstraDB nodes.

Also applies to: 5018-5026


5154-5156: Align last_tested_version with component versions.

Root still says 1.6.4 while LM is on lf_version 1.7.0. Update to reflect the version this starter targets.

-"last_tested_version": "1.6.4",
+"last_tested_version": "1.7.0",

1355-1373: Minor consistency: lf_version mismatch between the two OpenAI Embeddings nodes.

One is 1.2.0 and the other 1.1.1. Not harmful, but aligning helps avoid diff churn.

Set both to the same current component lf_version.

Also applies to: 1904-1921


1556-1572: Remove this reference or confirm support for legacy embedding model.

The list includes text-embedding-ada-002 alongside newer text-embedding-3 models. While OpenAI still supports text-embedding-ada-002, prefer text-embedding-3-small or text-embedding-3-large for better performance and lower costs. If legacy support is required, clarify via documentation; otherwise, remove to focus users on current recommendations.

src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (1)

1541-1566: Add timeout and error handling to documentation fetch

Network call lacks a timeout and broad error handling. Add a reasonable timeout and catch httpx exceptions.

-        response = httpx.get(search_index_url, follow_redirects=True)
+        try:
+            response = httpx.get(
+                search_index_url,
+                follow_redirects=True,
+                timeout=10.0,
+                headers={"User-Agent": "langflow-starter/rtx-remix"}
+            )
+        except httpx.HTTPError as e:
+            raise ValueError(f"Failed to fetch search index: {e!s}") from e
src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (2)

1601-1626: Align Ollama URL field with code defaults.

Template sets load_from_db=false and empty value, while code provides a DEFAULT_OLLAMA_URL and often uses load_from_db=True. Recommend aligning to avoid blank endpoint on selection.

Apply:

-                "load_from_db": false,
+                "load_from_db": true,
...
-                "value": ""
+                "value": "http://localhost:11434"

Please verify this matches current lfx runtime expectations for MessageInput. If lfx overrides template defaults at runtime, keep value empty but set load_from_db=true. As per coding guidelines, ...


642-651: Make API key notes provider‑agnostic.

Update note text to avoid OpenAI‑specific wording per PR goals.

- "description": "### 💡 Add your OpenAI API key here",
+ "description": "### 💡 Add your API key here"

Also applies to: 1749-1756

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (3)

523-537: Generalize API key note.

Match PR intent and LM changes by removing OpenAI‑specific wording.

- "description": "### 💡 Add your OpenAI API key here 👇",
+ "description": "### 💡 Add your API key here 👇",

1221-1226: Generalize flow description.

Make description provider‑agnostic.

- "description": "Perform basic prompting with an OpenAI model.",
+ "description": "Perform basic prompting with a language model.",

1006-1031: Align Ollama URL input with runtime defaults.

Same concern as other template: empty value + load_from_db=false may hide DEFAULT_OLLAMA_URL.

-                "load_from_db": false,
+                "load_from_db": true,
...
-                "value": ""
+                "value": "http://localhost:11434"

If lfx sets defaults programmatically, at least flip load_from_db to true. As per coding guidelines, ...

Also applies to: 1069-1094

src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (2)

29-31: Update component versions to 1.7.0.

Nodes still at lf_version 1.5.0; update to match current release and other starters.

Also applies to: 164-177, 434-455, 839-840


1-986: Optional: add LM nodes consistent with new model‑centric API.

Adopt the same LanguageModelComponent structure used in Basic Prompting (model/api_key/IBM/Ollama fields) to keep starters consistent.

I can generate a patched JSON with model nodes and edges wired to the existing three Prompt nodes. Want me to draft it?

src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (2)

1535-1541: Update note: provider‑agnostic wording.

“The OpenAI model will create the thread” is outdated. Switch to provider‑neutral “Language Model” to match ModelInput.

-   - The OpenAI model will create the thread based on your specifications
+   - The selected Language Model will create the thread based on your specifications

2228-2233: Bump last_tested_version to match v1.7.0 updates.

This starter targets the new component schema; update the flow’s last_tested_version.

-  "last_tested_version": "1.4.3",
+  "last_tested_version": "1.7.0",
src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (2)

701-710: Generalize Quickstart to any provider.

Docs say “OpenAI API key” and “OpenAI model,” but the flow is provider‑agnostic via ModelInput in Agent/Structured Output.

-1. Add your **OpenAI API key** to the **OpenAI** model and **Agent** components.
+1. Select your provider in the **Agent** and **Structured Output** components and add the corresponding **API key** where required.

1979-2014: Normalize ModelInput default value.

Elsewhere ModelInput.value is an empty list ([]). Here it’s an empty string (""), which can confuse validation/UX.

-                "value": ""
+                "value": []
src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (3)

2962-2962: Project version mismatch with component lf_version 1.7.0

Top-level last_tested_version is still 1.4.3. Bump to 1.7.0 to reflect the updated component schema and avoid confusion in tooling/docs.

-  "last_tested_version": "1.4.3",
+  "last_tested_version": "1.7.0",

1254-1254: Make API key notes provider‑agnostic

Notes still say “OpenAI API key” while the starter now supports multiple providers. Recommend generic wording.

-            "description": "### 💡 Add your OpenAI API key here 👇",
+            "description": "### 💡 Add your model provider API key here 👇",

Also applies to: 1283-1283, 1310-1310


663-663: Update note text to match provider‑agnostic Language Model component

The note references “OpenAI Model Component.” Replace with “Language Model component” to reflect the new generic component.

-4. The **OpenAI Model Component** processes the text and classifies the sentiment as **Positive, Neutral, or Negative**.  
+4. The **Language Model** component processes the text and classifies the sentiment as **Positive, Neutral, or Negative**.  
src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (2)

939-939: Make note text provider‑agnostic

Replace “OpenAI” mentions with “Language Model” to match the generic component.

- Using **Langflow’s looping mechanism**, the template iterates through multiple research papers, translates them with the **OpenAI** model component, and outputs an aggregated version of all translated papers.  
+ Using **Langflow’s looping mechanism**, the template iterates through multiple research papers, translates them with the **Language Model** component, and outputs an aggregated version of all translated papers.  
@@
- 1. Add your OpenAI API key to the **Language Model** component. 
+ 1. Add your model provider API key to the **Language Model** component. 

1831-1831: Project version mismatch with updated component schema

Update last_tested_version to 1.7.0 to reflect the migrated LanguageModelComponent.

-  "last_tested_version": "1.4.3",
+  "last_tested_version": "1.7.0",
src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (2)

3157-3158: Update flow’s last_tested_version to 1.7.0.

The starter now targets v1.7.0; the root still says 1.4.3.

-  "last_tested_version": "1.4.3",
+  "last_tested_version": "1.7.0",

1249-1266: Optional: Expose Tavily “DataFrame” output in node outputs for flexibility.

Code defines Output(display_name="DataFrame", name="dataframe", method="fetch_content_dataframe") but node outputs only expose component_as_tool. Add a second output to allow direct wiring of search results.

         "outputs": [
           {
             "allows_loop": false,
             "cache": true,
             "display_name": "Toolset",
             "group_outputs": false,
             "hidden": null,
             "method": "to_toolkit",
             "name": "component_as_tool",
             "options": null,
             "required_inputs": null,
             "selected": "Tool",
             "tool_mode": true,
             "types": [ "Tool" ],
             "value": "__UNDEFINED__"
           }
+          ,
+          {
+            "allows_loop": false,
+            "cache": true,
+            "display_name": "DataFrame",
+            "group_outputs": false,
+            "method": "fetch_content_dataframe",
+            "name": "dataframe",
+            "selected": "DataFrame",
+            "tool_mode": false,
+            "types": [ "DataFrame" ],
+            "value": "__UNDEFINED__"
+          }
         ],

Also applies to: 1319-1323

src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (3)

467-474: Field order uses stale key “store_message”.

The input is named should_store_message in the template. Update field_order so the toggle appears in the intended position.

-              "input_value",
-              "store_message",
+              "input_value",
+              "should_store_message",

599-620: Type should be “other” to reflect multiple accepted input_types.

ChatOutput.input_value accepts Data/DataFrame/Message; type "str" is misleading.

-                "type": "str",
+                "type": "other",

1721-1722: Update flow’s last_tested_version to 1.7.0.

The flow targets the new LanguageModelComponent surface.

-  "last_tested_version": "1.4.2",
+  "last_tested_version": "1.7.0",
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9d57aa8 and 22763c3.

⛔ Files ignored due to path filters (1)
  • src/frontend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (31)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json (1 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json (24 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json (15 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json (13 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Financial Report Parser.json (6 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json (7 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json (3 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json (35 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Invoice Summarizer.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json (23 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json (33 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json (13 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/News Aggregator.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Nvidia Remix.json (16 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Pokédex Agent.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json (23 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Price Deal Finder.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json (35 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json (19 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json (14 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/SaaS Pricing.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Search agent.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json (50 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Simple Agent.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Social Media Agent.json (17 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json (38 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Travel Planning Agents.json (51 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json (19 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json (25 hunks)
  • src/backend/base/langflow/initial_setup/starter_projects/Youtube Analysis.json (26 hunks)
🧰 Additional context used
🧠 Learnings (9)
📚 Learning: 2025-06-23T12:46:42.048Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: React Flow should be used for flow graph visualization, with nodes and edges passed as props, and changes handled via onNodesChange and onEdgesChange callbacks.

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json
📚 Learning: 2025-11-24T19:46:45.790Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-11-24T19:46:45.790Z
Learning: Applies to src/frontend/src/components/**/*.{tsx,jsx} : Use React Flow for flow graph visualization with Node, Edge, Controls, and Background components

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json
  • src/backend/base/langflow/initial_setup/starter_projects/Image Sentiment Analysis.json
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json
📚 Learning: 2025-06-23T12:46:42.048Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/frontend_development.mdc:0-0
Timestamp: 2025-06-23T12:46:42.048Z
Learning: Custom React Flow node types should be implemented as memoized components, using Handle components for connection points and supporting optional icons and labels.

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompt Chaining.json
  • src/backend/base/langflow/initial_setup/starter_projects/Memory Chatbot.json
  • src/backend/base/langflow/initial_setup/starter_projects/Hybrid Search RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Market Research.json
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Text Sentiment Analysis.json
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json
📚 Learning: 2025-11-24T19:46:09.104Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-11-24T19:46:09.104Z
Learning: Backend components should be structured with clear separation of concerns: agents, data processing, embeddings, input/output, models, text processing, prompts, tools, and vector stores

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Sequential Tasks Agents.json
  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Search agent.json
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Research Agent.json
📚 Learning: 2025-06-26T19:43:18.260Z
Learnt from: ogabrielluiz
Repo: langflow-ai/langflow PR: 0
File: :0-0
Timestamp: 2025-06-26T19:43:18.260Z
Learning: In langflow custom components, the `module_name` parameter is now propagated through template building functions to add module metadata and code hashes to frontend nodes for better component tracking and debugging.

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json
  • src/backend/base/langflow/initial_setup/starter_projects/Custom Component Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Portfolio Website Code Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json
  • src/backend/base/langflow/initial_setup/starter_projects/Blog Writer.json
📚 Learning: 2025-06-23T12:46:29.953Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-06-23T12:46:29.953Z
Learning: All terminology such as 'Langflow', 'Component', 'Flow', 'API', and 'JSON' must be capitalized or uppercased as specified in the terminology section.

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json
  • src/backend/base/langflow/initial_setup/starter_projects/Research Translation Loop.json
  • src/backend/base/langflow/initial_setup/starter_projects/Twitter Thread Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Meeting Summary.json
📚 Learning: 2025-11-24T19:46:09.104Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-11-24T19:46:09.104Z
Learning: Applies to src/backend/base/langflow/components/**/*.py : Add new components to the appropriate subdirectory under `src/backend/base/langflow/components/` (agents/, data/, embeddings/, input_output/, models/, processing/, prompts/, tools/, or vectorstores/)

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/SEO Keyword Generator.json
  • src/backend/base/langflow/initial_setup/starter_projects/Search agent.json
  • src/backend/base/langflow/initial_setup/starter_projects/Instagram Copywriter.json
  • src/backend/base/langflow/initial_setup/starter_projects/Vector Store RAG.json
📚 Learning: 2025-11-24T19:46:26.770Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/docs_development.mdc:0-0
Timestamp: 2025-11-24T19:46:26.770Z
Learning: Applies to docs/docs/**/*.{md,mdx} : Use sentence case for headers and proper capitalization for terminology: Langflow, Component, Flow, API, JSON

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
📚 Learning: 2025-06-23T12:46:52.420Z
Learnt from: CR
Repo: langflow-ai/langflow PR: 0
File: .cursor/rules/icons.mdc:0-0
Timestamp: 2025-06-23T12:46:52.420Z
Learning: When implementing a new component icon in Langflow, ensure the icon name is clear, recognizable, and used consistently across both backend (Python 'icon' attribute) and frontend (React/TypeScript mapping).

Applied to files:

  • src/backend/base/langflow/initial_setup/starter_projects/Document Q&A.json
  • src/backend/base/langflow/initial_setup/starter_projects/Basic Prompting.json
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
  • GitHub Check: Lint Backend / Run Mypy (3.12)
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
  • GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
  • GitHub Check: Lint Backend / Run Mypy (3.13)
  • GitHub Check: Lint Backend / Run Mypy (3.10)
  • GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
  • GitHub Check: Lint Backend / Run Mypy (3.11)
  • GitHub Check: Run Frontend Unit Tests / Frontend Jest Unit Tests
  • GitHub Check: Run Backend Tests / LFX Tests - Python 3.10
  • GitHub Check: Test Docker Images / Test docker images
  • GitHub Check: Test Starter Templates
  • GitHub Check: Optimize new Python code in this PR
  • GitHub Check: test-starter-projects

Co-Authored-By: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Dec 11, 2025
@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Dec 12, 2025
@erichare erichare enabled auto-merge December 15, 2025 17:43
@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Dec 15, 2025
@github-actions github-actions bot added bug Something isn't working and removed bug Something isn't working labels Dec 16, 2025