refactor: update lfx directory structure for improved developer experience #9833
base: main
Conversation
Important: Review skipped. Auto incremental reviews are disabled on this repository; please check the settings in the CodeRabbit UI.

Walkthrough: The repository reorganizes LFX path references from src/lfx to src/core across CI workflows, Makefile targets, and the pyproject workspace/testpaths. Numerous backend test files adjust imports (ordering, whitespace, and adding/removing specific helpers/components). No public API changes; logic stays the same aside from path updates.

Changes:
* For readability, representative paths are shown; import-only edits span many files under

Estimated code review effort: 3 (Moderate), ~25 minutes.

Possibly related PRs
Suggested labels
Suggested reviewers

Pre-merge checks (2 passed, 1 warning)
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Force-pushed from 51ce933 to 2b9c7c6 (Compare).
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/release-lfx.yml (1)
61-69: Missing alpine Dockerfile — add src/core/docker/Dockerfile.alpine. src/core/docker/Dockerfile is present; src/core/docker/Dockerfile.alpine is missing. Add the missing file or update the workflow references (affects lines 61-69, 110-120, 145-166, 169-172, 175-179, 182-186, 191-201, 253-254).
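The gap the bot reports can be checked locally with a short loop (paths taken from the comment above; this is an illustrative check, not part of the PR):

```shell
# Verify both Dockerfiles referenced by the release workflow exist.
for f in src/core/docker/Dockerfile src/core/docker/Dockerfile.alpine; do
  [ -f "$f" ] && echo "present: $f" || echo "missing: $f"
done
```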
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .github/workflows/nightly_build.yml (1 hunks)
- .github/workflows/release-lfx.yml (6 hunks)
- .github/workflows/release.yml (4 hunks)
- .github/workflows/release_nightly.yml (4 hunks)
- Makefile (2 hunks)
- pyproject.toml (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
{src/backend/**/*.py,tests/**/*.py,Makefile}
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
{src/backend/**/*.py,tests/**/*.py,Makefile}: Run make format_backend to format Python code before linting or committing changes
Run make lint to perform linting checks on backend Python code
Files:
Makefile
{tests/**/*.py,Makefile}
📄 CodeRabbit inference engine (.cursor/rules/backend_development.mdc)
Run make unit_tests to execute backend unit tests
Files:
Makefile
🧠 Learnings (2)
📚 Learning: 2025-07-21T14:16:14.125Z
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/testing.mdc:0-0
Timestamp: 2025-07-21T14:16:14.125Z
Learning: Applies to src/backend/tests/**/*.py : Test timeout and performance constraints in backend Python tests using 'asyncio.wait_for' and timing assertions.
Applied to files:
pyproject.toml
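As an illustration of the timing-constraint learning above, a minimal self-contained sketch (the function names here are invented for the example; real backend tests would wrap actual service calls):

```python
import asyncio

async def fetch_flow_result():
    # Stand-in for a backend call under test; the name is illustrative.
    await asyncio.sleep(0.01)
    return "done"

async def check_timing_constraint():
    # Bound the call with asyncio.wait_for so a hung coroutine fails the
    # test with TimeoutError instead of stalling the whole suite.
    result = await asyncio.wait_for(fetch_flow_result(), timeout=1.0)
    assert result == "done"

asyncio.run(check_timing_constraint())
```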
📚 Learning: 2025-07-18T18:25:54.486Z
Learnt from: CR
PR: langflow-ai/langflow#0
File: .cursor/rules/backend_development.mdc:0-0
Timestamp: 2025-07-18T18:25:54.486Z
Learning: Applies to {src/backend/**/*.py,tests/**/*.py,Makefile} : Run make lint to perform linting checks on backend Python code
Applied to files:
.github/workflows/release-lfx.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 3
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 5
- GitHub Check: Lint Backend / Run Mypy (3.13)
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 1
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 4
- GitHub Check: Run Backend Tests / Unit Tests - Python 3.10 - Group 2
- GitHub Check: Run Backend Tests / Integration Tests - Python 3.10
- GitHub Check: Test Starter Templates
- GitHub Check: Run Ruff Check and Format
- GitHub Check: Update Starter Projects
🔇 Additional comments (6)
pyproject.toml (2)
265-266: Pytest discovery updated to include src/core/tests. LGTM; aligns with the Makefile's lfx_tests target, which runs from src/core. No further action.
194-195: Verification script:

```shell
#!/bin/bash
set -euo pipefail

echo "Tool availability:"
command -v rg || echo "rg: not found"
command -v git || echo "git: not found"
echo

# Prefer ripgrep if available; include hidden/ignored files to avoid misses.
if command -v rg >/dev/null 2>&1; then
  echo "Searching for literal 'src/lfx' (rg --hidden --no-ignore)..."
  rg -n --hidden --no-ignore -S -- "src/lfx" || true
  echo "Searching for 'src/lfx/' variants..."
  rg -n --hidden --no-ignore -S -- "src/lfx/" || true
  rg -n --hidden --no-ignore -S -- 'src/lfx"' || true
  echo "Searching for bare token 'lfx' (word boundary)..."
  rg -n --hidden --no-ignore -S '\blfx\b' || true
else
  echo "rg not available; falling back to git grep or grep."
  if command -v git >/dev/null 2>&1; then
    git grep -n --untracked -e "src/lfx" || true
    git grep -n --untracked -e "lfx" || true
  else
    grep -RIn --binary-files=without-match "src/lfx" . || true
    grep -RIn --binary-files=without-match -E '\blfx\b' . || true
  fi
fi

echo
echo "Showing pyproject.toml context (approx lines 170-210):"
if [ -f pyproject.toml ]; then
  sed -n '170,210p' pyproject.toml || true
else
  echo "pyproject.toml not found"
fi
```

.github/workflows/release_nightly.yml (1)

102-103: Core path migration for LFX steps is consistent. cd src/core, dist path, and artifact name updates are coherent across build, test, and publish. Consider verifying artifact name consistency ("dist-nightly-lfx") with the downstream download step (it matches here). Also applies to: 107-121, 124-127, 130-134, 141-142, 319-321

Makefile (2)

153-156: lfx_tests target now executes from src/core — good. This aligns with pytest testpaths and the repo reorg.

420-457: Confirmed: delegated targets exist in the src/core Makefile. All targets referenced by the top-level Makefile are present with matching names: build, publish, publish_test (used by lfx_publish_testpypi), test, format, lint, clean, docker_build, docker_dev, docker_test.

.github/workflows/release.yml (1)

356-364: LFX build/test/publish path changes to src/core look correct. cd src/core, dist path, and artifact naming ("dist-lfx") are consistent within this workflow. Also applies to: 366-370, 372-375, 379-383, 391-392, 401-402
.github/workflows/nightly_build.yml
Outdated

```yaml
cd src/core && uv lock && cd ../..
```
🛠️ Refactor suggestion
Add src/core/uv.lock to the commit.
You run uv lock inside src/core but don’t stage src/core/uv.lock. This can desync the committed lockfiles.
Apply:

```diff
- git add pyproject.toml src/backend/base/pyproject.toml src/core/pyproject.toml uv.lock src/backend/base/uv.lock
+ git add pyproject.toml src/backend/base/pyproject.toml src/core/pyproject.toml uv.lock src/backend/base/uv.lock src/core/uv.lock
```

Also applies to: 94-95
🤖 Prompt for AI Agents
In .github/workflows/nightly_build.yml around lines 92-93 (and similarly at
94-95), the workflow runs "uv lock" in src/core but never stages or commits
src/core/uv.lock; update the workflow to git-add and commit the generated
src/core/uv.lock after running uv lock (e.g., run git add src/core/uv.lock and
git commit -m "chore: update src/core/uv.lock" or conditionally commit if
changes exist), ensuring the lockfile is included in the repo so committed
lockfiles stay in sync.
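The conditional-commit idea from the prompt above can be sketched as a small shell helper. This is a hedged sketch: commit_if_changed is a hypothetical name, and the real workflow step would call it with src/core/uv.lock after running uv lock:

```shell
# Hypothetical helper: stage and commit a file only when it actually changed
# (covers both modified-tracked and untracked files via git status --porcelain),
# so lockfiles stay in sync without creating empty commits.
commit_if_changed() {
  file="$1"
  if [ -n "$(git status --porcelain -- "$file")" ]; then
    git add "$file"
    git commit -q -m "chore: update $file"
  else
    echo "no changes in $file"
  fi
}
```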
Codecov Report

✅ All modified and coverable lines are covered by tests.
❌ Your project check has failed because the head coverage (46.47%) is below the target coverage (55.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main    #9833      +/-   ##
==========================================
- Coverage   21.47%   21.46%   -0.01%
==========================================
  Files        1074     1074
  Lines       39650    39650
  Branches     5418     5418
==========================================
- Hits         8513     8512       -1
- Misses      30993    30994       +1
  Partials      144      144
```

Flags with carried forward coverage won't be shown.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (17)
src/backend/tests/unit/components/data/test_s3_uploader_component.py (3)
44-46: Return the declared value from file_names_mapping. The docstring says "empty list" but the fixture returns None.

```diff
 @pytest.fixture
 def file_names_mapping(self):
     """Return an empty list since this component doesn't have version-specific files."""
+    return []
```
82-95: Handle the S3 bucket region correctly. create_bucket without CreateBucketConfiguration fails outside us-east-1. Use the session region and pass the configuration when needed.

```diff
 def s3_bucket(self) -> str:
     """Generate a unique bucket name (AWS requires globally unique names)."""
-    bucket_name = f"graphrag-test-bucket-{uuid.uuid4().hex[:8]}"
+    bucket_name = f"langflow-test-bucket-{uuid.uuid4().hex[:8]}"

-    # Initialize S3 client using environment variables for credentials
-    s3 = boto3.client("s3")
+    # Initialize S3 client using environment/configured region
+    session = boto3.session.Session()
+    region = session.region_name or os.getenv("AWS_REGION") or os.getenv("AWS_DEFAULT_REGION") or "us-east-1"
+    s3 = session.client("s3", region_name=region)

     try:
-        # Create an S3 bucket in your default region
-        s3.create_bucket(Bucket=bucket_name)
+        # Create bucket with region-aware configuration
+        if region == "us-east-1":
+            s3.create_bucket(Bucket=bucket_name)
+        else:
+            s3.create_bucket(
+                Bucket=bucket_name,
+                CreateBucketConfiguration={"LocationConstraint": region},
+            )

         yield bucket_name
```
95-106: Catch the correct botocore exceptions during teardown. boto3.exceptions.Boto3Error is uncommon; prefer botocore exceptions for client errors.

```diff
     finally:
         # Teardown: Delete the bucket and its contents
         try:
             # List and delete all objects in the bucket
             objects = s3.list_objects_v2(Bucket=bucket_name).get("Contents", [])
             for obj in objects:
                 s3.delete_object(Bucket=bucket_name, Key=obj["Key"])
             # Delete the bucket
             s3.delete_bucket(Bucket=bucket_name)
-        except boto3.exceptions.Boto3Error as e:
+        except Exception as e:
             pytest.fail(f"Error during teardown: {e}")
```

Optionally, narrow to botocore:

```diff
+from botocore.exceptions import BotoCoreError, ClientError
 ...
-        except Exception as e:
+        except (BotoCoreError, ClientError) as e:
             pytest.fail(f"Error during teardown: {e}")
```

src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (6)
+from botocore.exceptions import BotoCoreError, ClientError ... - except Exception as e: + except (BotoCoreError, ClientError) as e: pytest.fail(f"Error during teardown: {e}")src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (6)
43-43: Patch target mismatch with new module pathPatch the symbol in the relocated module namespace so the mock applies.
-@patch("lfx.components.ollama.ollama.ChatOllama") +@patch("core.lfx.components.ollama.ollama.ChatOllama")
71-71: Same here: update patch target to new pathConsistent with the directory move.
-@patch("lfx.components.ollama.ollama.ChatOllama") +@patch("core.lfx.components.ollama.ollama.ChatOllama")
81-83: httpx patch targets still point to old pathAdjust to core.lfx to ensure the httpx calls are intercepted.
-@patch("lfx.components.ollama.ollama.httpx.AsyncClient.post") -@patch("lfx.components.ollama.ollama.httpx.AsyncClient.get") +@patch("core.lfx.components.ollama.ollama.httpx.AsyncClient.post") +@patch("core.lfx.components.ollama.ollama.httpx.AsyncClient.get")
110-110: Update AsyncClient.get patch target to new module path-@patch("lfx.components.ollama.ollama.httpx.AsyncClient.get") +@patch("core.lfx.components.ollama.ollama.httpx.AsyncClient.get")
150-150: Update final httpx patch target to core.lfx-@patch("lfx.components.ollama.ollama.httpx.AsyncClient.get") +@patch("core.lfx.components.ollama.ollama.httpx.AsyncClient.get")
1-199: Fix remaining lfx.* imports in backend tests. rg output shows many lingering "lfx." and ".lfx" import/patch occurrences under src/backend/tests (including test files and embedded test data). Update all test imports/patch targets to the canonical path (e.g., core.lfx.*).
- Representative locations: src/backend/tests/base.py:8, src/backend/tests/conftest.py:31-33, src/backend/tests/unit/components/languagemodels/test_chatollama_component.py:5, src/backend/tests/unit/test_load_components.py:6-8, src/backend/tests/unit/test_lfx_reexport_modules.py:10-35 — plus many more under src/backend/tests/** and src/backend/tests/data/**.
- Also replace occurrences embedded in JSON/Python snippets inside src/backend/tests/data/.
- Remediation: run a targeted codemod or search-and-replace to change "from lfx." -> "from core.lfx." and 'patch("lfx.' -> 'patch("core.lfx.' (and analogous import forms), then re-run tests.
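The remediation above could be scripted roughly like this. A hedged sketch: it assumes the new import root is core.lfx and GNU sed's in-place editing; the real codemod would need to cover every import form the rg sweep finds, including the JSON-embedded snippets:

```shell
# Rewrite old lfx import statements and mock patch targets under a directory.
# rewrite_lfx_imports is an illustrative helper name.
rewrite_lfx_imports() {
  root="$1"
  find "$root" -name '*.py' -print0 | xargs -0 sed -i \
    -e 's/from lfx\./from core.lfx./g' \
    -e 's/patch("lfx\./patch("core.lfx./g'
}
```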
src/backend/tests/unit/components/data/test_api_request_component.py (1)
3-4: Fix: incorrect aiofiles path helper (AttributeError risk). aiofiles.os.path.exists does not exist; use aiofiles.ospath.exists. Also pass a str/PathLike consistently.

Apply:

```diff
 import aiofiles
 import aiofiles.os
+import aiofiles.ospath
@@
-    # Use async file operations
-    assert await aiofiles.os.path.exists(file_path)
-    async with aiofiles.open(file_path) as f:
+    # Use async file operations
+    assert await aiofiles.ospath.exists(str(file_path))
+    async with aiofiles.open(str(file_path)) as f:
         saved_content = await f.read()
```

Also applies to: 125-129
src/backend/tests/integration/components/astra/test_astra_component.py (1)
46-47: Add the pytest asyncio marker to async tests. Both async tests lack @pytest.mark.asyncio, which can cause collection/runtime issues depending on pytest config. Stack it with the existing api_key_required marker.

```diff
 @pytest.mark.api_key_required
+@pytest.mark.asyncio
 async def test_base(astradb_client: DataAPIClient):
@@
 @pytest.mark.api_key_required
+@pytest.mark.asyncio
 async def test_astra_embeds_and_search():
```

Also applies to: 69-70
.github/workflows/release_nightly.yml (1)
107-121: Make nightly name/version parsing robust — use pyproject for the version but handle the name mismatch. src/core/pyproject.toml contains name=lfx and version=0.1.12; replacing both name and version extraction with the suggested sed will return "lfx" and break the existing equality check for "lfx-nightly". Replace only the version parsing (or allow both names).

Suggested minimal patch:

```diff
-          name=$(uv tree | grep 'lfx' | head -n 1 | awk '{print $1}')
-          version=$(uv tree | grep 'lfx' | head -n 1 | awk '{print $2}')
+          name=$(uv tree | grep 'lfx' | head -n 1 | awk '{print $1}')
+          version=$(sed -nE 's/^version\s*=\s*"([^"]+)".*/\1/p' src/core/pyproject.toml | head -n1)

-          if [ "$name" != "lfx-nightly" ]; then
+          if [ "$name" != "lfx-nightly" ] && [ "$name" != "lfx" ]; then
             echo "Name $name does not match lfx-nightly. Exiting the workflow."
             exit 1
           fi
```

src/backend/tests/unit/components/processing/test_save_file_component.py (1)
93-95: Fix the incorrect patch target for Path. The tests patch Path under lfx.components.processing.save_to_file, but the component module is save_file. When unskipped, the mocks won't apply. Update the patch target to match the module name.

```diff
-        with patch("lfx.components.processing.save_to_file.Path") as mock_path:
+        with patch("lfx.components.processing.save_file.Path") as mock_path:
```

Also applies to: 121-123, 148-149
src/backend/tests/unit/components/processing/test_data_operations_component.py (2)
15-21: default_kwargs uses 'actions' instead of 'operations'. The component and other tests use the 'operations' key. Using 'actions' here may break base-class setup that instantiates with default_kwargs.

Apply this diff:

```diff
 def default_kwargs(self):
     """Return the default kwargs for the component."""
     return {
         "data": Data(data={"key1": "value1", "key2": "value2", "key3": "value3"}),
-        "actions": [{"name": "Select Keys"}],
+        "operations": [{"name": "Select Keys"}],
         "select_keys_input": ["key1", "key2"],
     }
```

15-21: Replace 'actions' with 'operations' in the test fixture. src/backend/tests/unit/components/processing/test_data_operations_component.py (default_kwargs, lines 15-21) still returns "actions" while all instantiations use operations=[...]; change the key to "operations" and rerun the tests.
src/backend/tests/unit/components/agents/test_agent_component.py (1)
223-226: Use lfx.schema.data for the Data import. Replace the langflow import with lfx to keep imports consistent and avoid an ImportError.

File: src/backend/tests/unit/components/agents/test_agent_component.py:223-226

```diff
-        from langflow.schema.data import Data
+        from lfx.schema.data import Data
```

src/backend/tests/unit/components/data/test_url_component.py (1)
109-112: Incorrect expected row count. Two mocked documents should yield two rows, not four. The single-URL test asserts 1 row; keep consistency here.

```diff
-        assert isinstance(result, DataFrame)
-        assert len(result) == 4
+        assert isinstance(result, DataFrame)
+        assert len(result) == 2
```
🧹 Nitpick comments (59)
src/backend/tests/unit/components/tools/test_python_repl_tool.py (2)
2-2: Verify the import path after the lfx → core/lfx move. Given the directory rename, confirm whether the canonical path is now core.lfx.components.processing. If so, switch to a compat import to keep tests green across branches.

```diff
-from lfx.components.processing import PythonREPLComponent
+try:
+    from core.lfx.components.processing import PythonREPLComponent
+except ModuleNotFoundError:
+    # Backward-compat for older branches/CI shards
+    from lfx.components.processing import PythonREPLComponent
```
30-54: Add a short docstring to the test per backend testing guidelines. Improves clarity and meets the repo's test documentation standard.

```diff
-    def test_component_initialization(self, component_class, default_kwargs):
+    def test_component_initialization(self, component_class, default_kwargs):
+        """Ensure PythonREPLComponent exposes expected template fields and defaults."""
         component = component_class(**default_kwargs)
```

src/backend/tests/unit/components/bundles/youtube/test_youtube_transcript_component.py (1)
6-6: Reorder the import to comply with PEP 8/isort; also verify path stability after the lfx → core/lfx move. Third-party imports should precede local-package imports. Place youtube_transcript_api above the lfx imports. Also, confirm that lfx.* remains the public import path after relocating to core/lfx; if not, update both the imports and the @patch("lfx.components...") targets accordingly.

```diff
 from unittest.mock import Mock, patch

 import pytest
+from youtube_transcript_api import NoTranscriptFound, TranscriptsDisabled

 from lfx.components.youtube.youtube_transcripts import YouTubeTranscriptsComponent
 from lfx.schema import Data, DataFrame, Message
-from youtube_transcript_api import NoTranscriptFound, TranscriptsDisabled
```

src/backend/tests/integration/backward_compatibility/test_starter_projects.py (2)
3-3: Import grouping nit (extra blank line). PEP 8/isort typically groups third-party vs first-party once; lfx and tests.* are both first-party, so no blank line is needed between them. Run make format_backend to let the formatter handle it.

```diff
 from lfx.schema.message import Message
-
 from tests.api_keys import get_openai_api_key
```
9-16: Add a brief test docstring per backend test guidelines. Clarifies purpose and aligns with repository testing standards.

```diff
 @pytest.mark.api_key_required
 async def test_1_0_15_basic_prompting():
+    """Validates 1.0.15 starter 'Basic Prompting' flow produces a greeting using OpenAI."""
     api_key = get_openai_api_key()
```

src/backend/tests/unit/components/languagemodels/test_baidu_qianfan.py (1)
1-129: Align component tests with base test patterns. These component tests don't use the prescribed base classes/fixtures (ComponentTestBaseWithClient/WithoutClient, file_names_mapping). Not blocking this rename-focused PR, but consider aligning in a follow-up to keep consistency and inherit shared setup.

Happy to draft a minimal refactor that plugs this test into the base class with fixtures for component_class, default_kwargs, and file_names_mapping.
14-19: Mark as an external-API test and skip blockbuster. Align with the test guidelines for external dependencies.

```diff
+@pytest.mark.api_key_required
+@pytest.mark.no_blockbuster
 @pytest.mark.skipif(
     not os.environ.get("AWS_ACCESS_KEY_ID") or not os.environ.get("AWS_SECRET_ACCESS_KEY"),
     reason="Environment variable AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY is not defined.",
 )
 class TestS3UploaderComponent(ComponentTestBaseWithoutClient):
```

84-84: Rename the bucket prefix to the project name. Use a Langflow-specific prefix for clarity in accounts with many test buckets.

```diff
-bucket_name = f"graphrag-test-bucket-{uuid.uuid4().hex[:8]}"
+bucket_name = f"langflow-test-bucket-{uuid.uuid4().hex[:8]}"
```

108-137: Consider local mocking for S3 to reduce flakiness and cost. If not explicitly needing live AWS, use moto/localstack for deterministic, fast tests; keep one CI job for live integration if desired.
src/backend/tests/unit/components/processing/test_lambda_filter.py (3)
25-27: Provide a backward-compatibility file_names_mapping for the move. Tests should declare mappings for renamed component files to preserve compatibility across versions. Returning an empty list likely misses the requirement.

If your test base expects (old_path, new_path) tuples, consider:

```diff
-    def file_names_mapping(self):
-        return []
+    def file_names_mapping(self):
+        # Adjust tuple structure if your base expects a mapping object
+        return [
+            ("lfx/components/processing/lambda_filter.py", "core/lfx/components/processing/lambda_filter.py"),
+        ]
```

Please confirm the exact type/shape expected by ComponentTestBaseWithoutClient and adjust accordingly.
29-29: Mark async tests with pytest.mark.asyncio as per guidelines. Your async tests should be explicitly marked to avoid plugin/config surprises.

Apply:

```diff
@@
-    async def test_successful_lambda_generation(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_successful_lambda_generation(self, component_class, default_kwargs):
@@
-    async def test_invalid_lambda_response(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_invalid_lambda_response(self, component_class, default_kwargs):
@@
-    async def test_lambda_with_large_dataset(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_lambda_with_large_dataset(self, component_class, default_kwargs):
@@
-    async def test_lambda_with_complex_data_structure(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_lambda_with_complex_data_structure(self, component_class, default_kwargs):
```

Also applies to: 42-42, 50-50, 65-65
29-41: Add brief docstrings for tests (repo guideline). Each test should state its intent; add 1-2 line docstrings to improve clarity.

Example:

```diff
@@
-    @pytest.mark.asyncio
-    async def test_successful_lambda_generation(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_successful_lambda_generation(self, component_class, default_kwargs):
+        """Generates a valid lambda via LLM and filters items >15."""
@@
-    @pytest.mark.asyncio
-    async def test_invalid_lambda_response(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_invalid_lambda_response(self, component_class, default_kwargs):
+        """Raises when LLM returns no valid lambda."""
@@
-    @pytest.mark.asyncio
-    async def test_lambda_with_large_dataset(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_lambda_with_large_dataset(self, component_class, default_kwargs):
+        """Scales filtering to 2k items and validates count."""
@@
-    @pytest.mark.asyncio
-    async def test_lambda_with_complex_data_structure(self, component_class, default_kwargs):
+    @pytest.mark.asyncio
+    async def test_lambda_with_complex_data_structure(self, component_class, default_kwargs):
+        """Filters nested categories structure by score."""
@@
-    def test_validate_lambda(self, component_class):
+    def test_validate_lambda(self, component_class):
+        """Unit-test lambda string validator with valid/invalid cases."""
@@
-    def test_get_data_structure(self, component_class):
+    def test_get_data_structure(self, component_class):
+        """Asserts summarized structure strings for diverse inputs."""
```

Also applies to: 42-49, 50-64, 65-87, 88-102, 103-124
src/backend/tests/unit/components/vectorstores/test_mongodb_atlas.py (5)
14-18: Mark external/slow tests to skip the blockbuster plugin. These tests hit MongoDB Atlas and can be slow; mark at class level to comply with the test guidelines.

```diff
 @pytest.mark.skipif(
     not os.environ.get("MONGODB_ATLAS_URI"), reason="Environment variable MONGODB_ATLAS_URI is not defined."
 )
+@pytest.mark.no_blockbuster
 class TestMongoVectorStoreComponent(ComponentTestBaseWithoutClient):
```
126-133: Remove the flaky sleep; poll with a timeout for eventual consistency. time.sleep(5) makes the test brittle. Poll until the desired count or a deadline.

```diff
 component.set(search_query="dog")
-results = component.search_documents()
-time.sleep(5)  # wait the results come from API
-
-assert len(results) == 2, "Expected 2 results for 'lazy dog' query"
+deadline = time.time() + 30
+while True:
+    results = component.search_documents()
+    if len(results) == 2 or time.time() > deadline:
+        break
+assert len(results) == 2, "Expected 2 results for 'dog' query"
```
64-72: Prefer public API assertions over private attributes, if available. Accessing _collection and _index_name ties tests to internals. If the component exposes getters or a status API, switch to those.
51-63: Consider an explicit status check before asserting the index type. If Atlas vector index creation is async, assert readiness (if an API exists) before checking the type to reduce flakes. If verify_search_index can return readiness or raise until ready, prefer using that contract and then drop the extra listing/asserts.
73-88: Clean up created collections to avoid test data accumulation. Dropping test collections at teardown prevents cross-test interference and speeds re-runs. You can add an autouse fixture to drop default_kwargs["collection_name"] (and "test_collection_metadata") after each test. Want me to draft it?

src/backend/tests/unit/components/processing/test_structured_output_component.py (3)
35-38: Return an explicit empty mapping from file_names_mapping. Base classes may expect a dict; returning None could cause surprises. Return {} explicitly.

```diff
 @pytest.fixture
 def file_names_mapping(self):
     """Return the file names mapping for version-specific files."""
-
+    return {}
```
39-60: Add the missing test docstring for clarity and to match guidelines. Several tests lack docstrings; add brief purpose lines. Example for this test:

```diff
-    def test_successful_structured_output_generation_with_patch_with_config(self):
+    def test_successful_structured_output_generation_with_patch_with_config(self):
+        """Produces list of dicts when patched get_chat_result returns BaseModel with objects."""
```

If you want, I can sweep this file and add docstrings to the remaining tests in one pass.
254-306: Consider adding pytest markers for external API tests. For real-model tests, consider the repo's standard markers (e.g., api_key_required, no_blockbuster) alongside skipif to integrate with CI plugins and selective runs.
Also applies to: 286-306, 399-419, 549-588
src/backend/tests/unit/components/languagemodels/test_chatollama_component.py (2)
11-199: Add brief docstrings to tests. Guidelines ask each test to include a clear docstring; add one-liners to explain purpose.

Example:

```diff
 class TestChatOllamaComponent(ComponentTestBaseWithoutClient):
     @pytest.fixture
     def component_class(self):
         return ChatOllamaComponent

     @pytest.fixture
     def default_kwargs(self):
         return { ... }

     @patch("core.lfx.components.ollama.ollama.ChatOllama")
     async def test_build_model(self, mock_chat_ollama, component_class, default_kwargs):
+        """Builds model with expected args and returns underlying instance."""
         ...
```
168-179: Consider parametrizing the keep_alive cases. Parametrize "Keep" vs "Immediately" to reduce duplication.

```diff
-    async def test_update_build_config_keep_alive(self):
-        component = ChatOllamaComponent()
-        build_config = {"keep_alive": {"value": None, "advanced": False}}
-        field_value = "Keep"
-        field_name = "keep_alive_flag"
-        updated_config = await component.update_build_config(build_config, field_value, field_name)
-        assert updated_config["keep_alive"]["value"] == "-1"
-        assert updated_config["keep_alive"]["advanced"] is True
-        field_value = "Immediately"
-        updated_config = await component.update_build_config(build_config, field_value, field_name)
-        assert updated_config["keep_alive"]["value"] == "0"
-        assert updated_config["keep_alive"]["advanced"] is True
+    @pytest.mark.parametrize(
+        "field_value,expected",
+        [("Keep", "-1"), ("Immediately", "0")],
+    )
+    async def test_update_build_config_keep_alive(self, field_value, expected):
+        """Maps keep_alive_flag to correct value and marks as advanced."""
+        component = ChatOllamaComponent()
+        build_config = {"keep_alive": {"value": None, "advanced": False}}
+        updated = await component.update_build_config(build_config, field_value, "keep_alive_flag")
+        assert updated["keep_alive"]["value"] == expected
+        assert updated["keep_alive"]["advanced"] is True
```

src/backend/tests/unit/services/variable/test_service.py (3)
23-29: Dispose the async engine to avoid resource leaks. Close the in-memory engine after the session fixture yields.

```diff
-async def session():
-    engine = create_async_engine("sqlite+aiosqlite:///:memory:")
-    async with engine.begin() as conn:
-        await conn.run_sync(SQLModel.metadata.create_all)
-    async with AsyncSession(engine, expire_on_commit=False) as session:
-        yield session
+async def session():
+    engine = create_async_engine("sqlite+aiosqlite:///:memory:")
+    try:
+        async with engine.begin() as conn:
+            await conn.run_sync(SQLModel.metadata.create_all)
+        async with AsyncSession(engine, expire_on_commit=False) as session:
+            yield session
+    finally:
+        await engine.dispose()
```
15-15: Mark the module as asyncio tests per guidelines. Add a module-level marker so all async tests run without per-test decorators.

```diff
-
+pytestmark = pytest.mark.asyncio
```
57-57: Drop the no-op assertions. assert True adds no value; remove them.

```diff
-        assert True
```

Also applies to: 63-63
src/backend/tests/unit/components/data/test_api_request_component.py (3)
45-48: Close AsyncClient instances to avoid resource leaks. Introduce a module-scoped async client fixture and use it instead of constructing raw httpx.AsyncClient() repeatedly.

Apply:

```diff
 @pytest.fixture
 async def component(self, component_class, default_kwargs):
     """Return a component instance."""
     return component_class(**default_kwargs)
+
+@pytest.fixture
+async def http_client():
+    async with httpx.AsyncClient() as client:
+        yield client
@@
-    result = await component.make_request(
-        client=httpx.AsyncClient(),
+    result = await component.make_request(
+        client=http_client,
         method="GET",
         url=url,
     )
```

Apply similarly to the other make_request call sites (the variants with include_httpx_metadata, save_to_file, timeout, follow_redirects, and the invalid-method case).

Also applies to: 76-80, 95-100, 114-119, 142-146, 158-163, 179-185, 298-302
1-16: Mark async tests per guideline.

Add a module-level marker to run async tests explicitly.

 from pathlib import Path
@@
 import respx
 from httpx import Response
 from lfx.components.data import APIRequestComponent
 from lfx.schema import Data
 from lfx.schema.dotdict import dotdict
+pytestmark = pytest.mark.asyncio
49-334: Add short docstrings to tests.

The guideline asks for a docstring on each test; comments exist but not docstrings. Convert the top comment line inside each test into a brief docstring.
src/backend/tests/unit/test_chat_endpoint.py (2)
15-16: Mark async tests explicitly for consistency with guidelines

Add a module-level pytestmark so all async tests run under asyncio regardless of runner defaults.

@@
 import pytest
 from httpx import codes
 from langflow.services.database.models.flow import FlowUpdate
 from lfx.log.logger import logger
 from lfx.memory import aget_messages
+pytestmark = pytest.mark.asyncio
408-411: Remove redundant inner import of asyncio

`asyncio` is already imported at the top; the inner import is unnecessary.

-    # Mock the cancel_flow_build function to raise CancelledError
-    import asyncio
+    # Mock the cancel_flow_build function to raise CancelledError

src/backend/tests/integration/components/astra/test_astra_component.py (5)
47-47: Fix astradb_client type annotation

The fixture yields a database object, not a DataAPIClient. Adjust the annotation to the proper type to avoid confusion in IDEs/type checkers.

-from astrapy import DataAPIClient
+from astrapy import DataAPIClient
+from astrapy.db import Database

-async def test_base(astradb_client: DataAPIClient):
+async def test_base(astradb_client: Database):

Also applies to: 4-6
154-155: Standardize OpenAI key retrieval using the helper

Use get_openai_api_key() for consistency and to benefit from the test infra's skip/validation behavior.

- collection_embedding_api_key=os.getenv("OPENAI_API_KEY"),
+ collection_embedding_api_key=get_openai_api_key(),
47-63: Add missing test docstrings

Per backend testing guidelines, each test should have a clear docstring.

 async def test_base(astradb_client: DataAPIClient):
+    """Smoke: build AstraDB vector store with OpenAI embeddings; expect empty search and collection creation."""
@@
 async def test_astra_embeds_and_search():
+    """Embed two docs and search for 'test1'; expect a single result."""

Also applies to: 70-92
39-43: Avoid blanket Exception swallow in teardown

Catching Exception and passing can hide real cleanup issues. Consider narrowing to the specific "collection not found" exception, or at least logging at debug level.
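A minimal sketch of the narrowed handler (note: `CollectionNotFoundError` and `drop_collection` are local stand-ins, since the client library's actual "collection not found" exception class isn't shown in this diff):

```python
import logging

logger = logging.getLogger(__name__)


class CollectionNotFoundError(Exception):
    """Stand-in for the client library's real 'collection not found' error."""


def drop_collection(name: str) -> None:
    """Stand-in for the real teardown call; here it always reports a missing collection."""
    raise CollectionNotFoundError(name)


def teardown_collection(name: str) -> bool:
    """Drop a collection, tolerating only the 'not found' case.

    Returns True if the collection was dropped, False if it did not exist.
    Any other failure propagates instead of being silently swallowed.
    """
    try:
        drop_collection(name)
    except CollectionNotFoundError:
        logger.debug("Collection %s already absent during teardown", name)
        return False
    return True
```

Compared with `except Exception: pass`, real cleanup failures (auth errors, timeouts) still surface as test failures.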
109-110: Use public constructor or VectorServiceOptions.coerce(...) instead of private _from_dict

VectorServiceOptions._from_dict is an internal API — replace it with the public constructor (VectorServiceOptions(...)) or VectorServiceOptions.coerce(options).

    collection_vector_service_options=VectorServiceOptions._from_dict(options),
)

Locations: src/backend/tests/integration/components/astra/test_astra_component.py — lines 109-110, 153-154, 198-199
src/backend/tests/integration/components/outputs/test_chat_output.py (1)
8-14: Add brief docstrings and async markers to tests.

Short one-line docstrings per test, plus marking async tests (e.g., pytest.mark.asyncio or anyio), will align with the backend testing guidelines and improve clarity.

Also applies to: 16-22, 24-41
src/backend/tests/integration/components/helpers/test_parse_json_data.py (1)
11-22: Document tests and mark async execution.

Add concise docstrings to each test and decorate async tests with pytest.mark.asyncio (or anyio) per test guidelines.

Also applies to: 37-56
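A sketch of what a documented async test could look like (the parse logic is illustrative, not the component's real implementation; `asyncio.run` stands in for the pytest-asyncio runner so the snippet is self-contained):

```python
import asyncio
import json


# In the test file this coroutine would be decorated with @pytest.mark.asyncio,
# or covered by a module-level `pytestmark = pytest.mark.asyncio`.
async def test_parse_json_returns_one_item_per_element():
    """Parsing a JSON array yields one item per element."""
    items = json.loads('[{"a": 1}, {"a": 2}]')
    assert len(items) == 2


asyncio.run(test_parse_json_returns_one_item_per_element())
```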
src/backend/tests/unit/inputs/test_inputs.py (1)
30-38: Consider adding docstrings to unit tests.

Many tests in this file lack docstrings; add short descriptions for consistency with testing guidelines.

src/backend/tests/integration/flows/test_basic_prompting.py (1)
11-21: Name and document the test for clarity.
Rename the function to something descriptive (e.g., test_simple_prompt_flow_without_llm) and add a short docstring.src/backend/tests/unit/components/processing/test_batch_run_component.py (1)
170-176: Unify comment language.

There are a few Portuguese comments; consider switching to English for consistency across the codebase.

Also applies to: 189-197, 211-213
src/backend/tests/integration/components/prompts/test_prompt.py (1)
9-13: Make the test name and purpose explicit.
Rename the test (e.g., test_prompt_renders_template_with_var) and add a one-line docstring to match test conventions.src/backend/tests/unit/api/v1/test_files.py (2)
267-268: Avoid manually setting Content-Length for multipart uploads.

Overriding Content-Length for multipart/form-data can be inaccurate (boundary overhead) and brittle. Prefer not setting it and letting httpx compute it, or compute limits based on the actual file bytes server-side.

Also applies to: 279-284
149-166: Mark async tests explicitly.

Decorate async tests with pytest.mark.asyncio (or anyio) to make async intent explicit and avoid reliance on global plugins.

src/backend/tests/unit/api/v2/test_files.py (1)
16-16: Deduplicated session_scope import LGTM.

Consider reusing the shared `client` fixture from conftest to reduce duplicate AsyncClient/lifespan setup here.

.github/workflows/nightly_build.yml (1)
33-37: Broaden cache key to include nested lockfiles (optional)

This improves cache hits for subprojects that maintain their own `uv.lock`.

- cache-dependency-glob: "uv.lock"
+ cache-dependency-glob: "**/uv.lock"

src/backend/tests/unit/components/data/test_directory_component.py (1)
111-127: Guard docs path to reduce CI flakiness (optional)

If the docs folder moves or is absent in certain runners, this test can fail unnecessarily. Consider skipping when the path isn't present.

- docs_path = Path(__file__).parent.parent.parent.parent.parent.parent.parent / "docs" / "docs" / "Components"
+ docs_path = Path(__file__).parent.parent.parent.parent.parent.parent.parent / "docs" / "docs" / "Components"
+ if not docs_path.exists():
+     pytest.skip(f"Docs path not found: {docs_path}")

.github/workflows/release-lfx.yml (2)
110-120: Reduce repeated `cd src/core` by setting `working-directory`

This removes duplication and reduces the chance of path drift.

Example for "Run LFX tests" and similar steps:
- - name: Run LFX tests
-   run: |
-     cd src/core
-     make test
+ - name: Run LFX tests
+   working-directory: src/core
+   run: make test

At the job level you can set:

  release-lfx:
    name: Build and Release LFX
    runs-on: ubuntu-latest
+   defaults:
+     run:
+       working-directory: src/core

Then drop the `cd src/core` lines in build/test/publish steps within the job.

Also applies to: 169-172, 175-186, 199-201
61-69: Harden version/name detection — read pyproject.toml instead of parsing `uv tree`

Verified sed extracts name/version from src/core/pyproject.toml (name=lfx, version=0.1.12). Replace both occurrences in .github/workflows/release-lfx.yml (lines 63–64 and 147–148) with:

- # Use uv tree to get package info, consistent with nightly workflow
- name=$(uv tree | grep 'lfx' | head -n 1 | awk '{print $1}')
- version=$(uv tree | grep 'lfx' | head -n 1 | awk '{print $2}')
+ # Read from pyproject.toml for robust parsing
+ name=$(sed -nE 's/^name\s*=\s*"([^"]+)".*/\1/p' pyproject.toml | head -n1)
+ version=$(sed -nE 's/^version\s*=\s*"([^"]+)".*/\1/p' pyproject.toml | head -n1)

Also consider updating the cache glob to cover package locks:

- cache-dependency-glob: "uv.lock"
+ cache-dependency-glob: "**/uv.lock"

.github/workflows/release_nightly.yml (1)
102-103: Prefer `working-directory` or job defaults over inline `cd`

Fewer moving parts; easier maintenance.

Example:

- - name: Install LFX dependencies
-   run: cd src/core && uv sync
+ - name: Install LFX dependencies
+   working-directory: src/core
+   run: uv sync

You can also add:

  build-nightly-lfx:
    runs-on: ubuntu-latest
+   defaults:
+     run:
+       working-directory: src/core

Also applies to: 124-127, 130-134
Makefile (2)
153-156: Pin environments in tests with `--frozen`

For reproducibility, consider freezing deps in CI test runs.

- uv sync && \
+ uv sync --frozen && \
  uv run pytest tests/unit -v $(args)
420-457: DRY up the `src/core` path with a variable

Reduces duplication across LFX targets.

+# at top-level near other config
+LFX_DIR=src/core
…
-lfx_build:
-	@cd src/core && make build
+lfx_build:
+	@cd $(LFX_DIR) && make build

Repeat the same substitution for lfx_publish, lfx_publish_testpypi, lfx_test, lfx_format, lfx_lint, lfx_clean, lfx_docker_build, lfx_docker_dev, and lfx_docker_test.

src/backend/tests/unit/components/languagemodels/test_xai.py (1)
41-47: Add short docstrings to tests per repo guidelines

Several tests lack explanatory docstrings. Please add 1-2 line docstrings describing intent.
Also applies to: 48-67
src/backend/tests/unit/components/processing/test_split_text_component.py (1)
249-255: Network-dependent test can flake

This test hits Wikipedia. Consider disabling blockbuster and/or marking the test to avoid network flakiness.

-    def test_with_url_loader(self):
+    @pytest.mark.no_blockbuster
+    def test_with_url_loader(self):

If you prefer, we can also mock URLComponent.fetch_content to make it deterministic.
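For the deterministic variant, the mocking could look roughly like this (a sketch: the real URLComponent lives in the lfx package and its pipeline differs, so a local stand-in with the same `fetch_content` method name is patched here):

```python
from unittest.mock import patch


class URLComponent:
    """Local stand-in; only the fetch_content method matters for this sketch."""

    def fetch_content(self, url: str) -> str:
        raise RuntimeError("would hit the network in a real test run")


def split_text_from_url(component: URLComponent, url: str) -> list[str]:
    """Toy pipeline: fetch a page and split it into sentence-ish chunks."""
    return component.fetch_content(url).split(". ")


def test_with_url_loader_mocked():
    """URL content is canned, so the split result is fully deterministic."""
    canned = "First sentence. Second sentence"
    with patch.object(URLComponent, "fetch_content", return_value=canned):
        chunks = split_text_from_url(URLComponent(), "https://example.org")
    assert chunks == ["First sentence", "Second sentence"]


test_with_url_loader_mocked()
```

This keeps the splitting logic under test while removing the live Wikipedia dependency.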
src/backend/tests/unit/components/processing/test_type_converter_component.py (1)
99-116: Avoid state leakage across update_outputs assertions.

Reusing the same dict risks accumulating outputs if update_outputs mutates in place. Use a fresh dict per assertion.

-    frontend_node = {"outputs": []}
-    # Test with Message output
-    updated = component.update_outputs(frontend_node, "output_type", "Message")
+    updated = component.update_outputs({"outputs": []}, "output_type", "Message")
     assert len(updated["outputs"]) == 1
     assert updated["outputs"][0]["name"] == "message_output"
     # Test with Data output
-    updated = component.update_outputs(frontend_node, "output_type", "Data")
+    updated = component.update_outputs({"outputs": []}, "output_type", "Data")
     assert len(updated["outputs"]) == 1
     assert updated["outputs"][0]["name"] == "data_output"
     # Test with DataFrame output
-    updated = component.update_outputs(frontend_node, "output_type", "DataFrame")
+    updated = component.update_outputs({"outputs": []}, "output_type", "DataFrame")
     assert len(updated["outputs"]) == 1
     assert updated["outputs"][0]["name"] == "dataframe_output"

src/backend/tests/unit/components/logic/test_loop.py (2)
58-63: Rename shadowing variable for clarity.

Avoid reusing `vector_store` for unrelated objects.

- vector_store = orjson.loads(json_loop_test)
- data = vector_store["data"]
- vector_store = FlowCreate(name="Flow", description="description", data=data, endpoint_name="f")
- response = await client.post("api/v1/flows/", json=vector_store.model_dump(), headers=logged_in_headers)
+ payload = orjson.loads(json_loop_test)
+ data = payload["data"]
+ flow = FlowCreate(name="Flow", description="description", data=data, endpoint_name="f")
+ response = await client.post("api/v1/flows/", json=flow.model_dump(), headers=logged_in_headers)
125-129: Check HTTP status before parsing.

Surface failures early when running the flow endpoint.

  response = await client.post(f"/api/v1/run/{flow_id}", json=payload, headers=headers)
+ response.raise_for_status()
  data = response.json()

src/backend/tests/unit/components/vectorstores/test_graph_rag_component.py (2)
107-110: Fix copy-paste docstring on vector_store fixture.

The docstring describes file_names_mapping; update it to reflect the fixture's purpose.

  def vector_store(self, animals: list[Document], embedding: DeterministicFakeEmbedding) -> InMemoryVectorStore:
-     """Return an empty list since this component doesn't have version-specific files."""
+     """Initialize and populate an InMemoryVectorStore with test documents."""
113-115: Return an explicit empty list from file_names_mapping.

The current fixture returns None; be explicit to avoid ambiguity.

  @pytest.fixture
  def file_names_mapping(self):
      """Return an empty list since this component doesn't have version-specific files."""
+     return []

src/backend/tests/unit/components/models/test_embedding_model_component.py (1)
27-31: Return an explicit empty list from file_names_mapping.

Avoid returning None; make the intent clear.

  @pytest.fixture
  def file_names_mapping(self):
      """Return the file names mapping for version-specific files."""
+     # No version-specific files for this component
+     return []
  uv lock
  cd src/backend/base && uv lock && cd ../../..
- cd src/lfx && uv lock && cd ../..
+ cd src/core && uv lock && cd ../..
- git add pyproject.toml src/backend/base/pyproject.toml src/lfx/pyproject.toml uv.lock src/backend/base/uv.lock
+ git add pyproject.toml src/backend/base/pyproject.toml src/core/pyproject.toml uv.lock src/backend/base/uv.lock
  git commit -m "Update version and project name"
Include src/core/uv.lock in commit to avoid lockfile drift
You run uv lock in src/core (Line 92) but don’t stage src/core/uv.lock (Line 94). This can desync the core package versioning in nightly tags.
Apply this diff:
- git add pyproject.toml src/backend/base/pyproject.toml src/core/pyproject.toml uv.lock src/backend/base/uv.lock
+ git add pyproject.toml src/backend/base/pyproject.toml src/core/pyproject.toml \
+ uv.lock src/backend/base/uv.lock src/core/uv.lock
🤖 Prompt for AI Agents
.github/workflows/nightly_build.yml around lines 90 to 96: the workflow runs `uv
lock` in src/core but does not stage src/core/uv.lock, which can cause lockfile
drift; update the git add command to include src/core/uv.lock (i.e., add that
path to the list of files being staged) so the newly generated core lockfile is
committed alongside the other pyproject and uv.lock files.
import pytest
from langchain_ollama import ChatOllama

from lfx.components.ollama.ollama import ChatOllamaComponent
Broken import after directory rename: switch to core.lfx path
PR renames lfx → core/lfx. Update the import to avoid import errors.
-from lfx.components.ollama.ollama import ChatOllamaComponent
+from core.lfx.components.ollama.ollama import ChatOllamaComponent
🤖 Prompt for AI Agents
In src/backend/tests/unit/components/languagemodels/test_chatollama_component.py
around line 5, the import "from lfx.components.ollama.ollama import
ChatOllamaComponent" is broken after the repo rename; update the import path to
the new package root by changing it to "from core.lfx.components.ollama.ollama
import ChatOllamaComponent" (or the equivalent new module path used across the
codebase) so the test imports the component from the renamed directory.
from lfx.components.processing.lambda_filter import LambdaFilterComponent
from lfx.schema import Data
Imports likely broken after rename to core/lfx — update paths.
Given the PR moves lfx to core/lfx, these imports will fail. Update to the new package path.
Apply:
-from lfx.components.processing.lambda_filter import LambdaFilterComponent
-from lfx.schema import Data
+from core.lfx.components.processing.lambda_filter import LambdaFilterComponent
+from core.lfx.schema import Data
🤖 Prompt for AI Agents
In src/backend/tests/unit/components/processing/test_lambda_filter.py around
lines 4 to 5, the imports still reference the old lfx package and must be
updated to the new core/lfx location; change the import paths to use core.lfx
(e.g., core.lfx.components.processing.lambda_filter and core.lfx.schema) so the
test imports resolve correctly.
import pytest
from langchain_community.embeddings.fake import DeterministicFakeEmbedding
from lfx.components.mongodb import MongoVectorStoreComponent
from lfx.schema.data import Data
from pymongo.collection import Collection

from tests.base import ComponentTestBaseWithoutClient, VersionComponentMapping
🛠️ Refactor suggestion
Fix import grouping to satisfy isort/ruff (third-party before local).
pymongo is a third-party import and should be grouped with pytest and langchain_community, not after local lfx imports. This likely trips format/lint in CI.
Apply this diff:
import pytest
from langchain_community.embeddings.fake import DeterministicFakeEmbedding
-from lfx.components.mongodb import MongoVectorStoreComponent
-from lfx.schema.data import Data
-from pymongo.collection import Collection
+from pymongo.collection import Collection
+from lfx.components.mongodb import MongoVectorStoreComponent
+from lfx.schema.data import Data

Also run: make format_backend && make lint.
🤖 Prompt for AI Agents
In src/backend/tests/unit/components/vectorstores/test_mongodb_atlas.py around
lines 5 to 11, the import ordering incorrectly places the third-party pymongo
import after local lfx imports; reorder imports so third-party packages (pytest,
langchain_community, pymongo) come before local project imports (lfx, tests.*),
then run make format_backend && make lint to ensure isort/ruff compliance.
edwinjosechittilappilly
left a comment
code LGTM,
will test it soon
@coderabbitai please check for potential folder name conflicts to improve developer experience. |
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@edwinjosechittilappilly I'll analyze the codebase to identify potential folder name conflicts that could impact developer experience with the new

🏁 Script executed: Length of output: 2089
🏁 Script executed: Length of output: 30728
🏁 Script executed: Length of output: 2080

Based on my analysis of the codebase structure, I've identified several potential naming conflicts and developer experience issues with the new

🚨 Critical Naming Conflicts Found

1. Existing
shall we close this PR?



Rename the lfx directory to core/lfx to enhance clarity and organization within the project. This change aims to improve the developer experience by providing a more intuitive structure.