⚡️ Speed up function _expand_edge by 13% in PR #10785 (feat/no-code-pre-built-component)
#10787
⚡️ This pull request contains optimizations for PR #10785
If you approve this dependent PR, these changes will be merged into the original PR branch
feat/no-code-pre-built-component.

📄 13% (0.13x) speedup for `_expand_edge` in `src/backend/base/langflow/processing/expand_flow.py`

⏱️ Runtime: 2.10 milliseconds → 1.85 milliseconds (best of 103 runs)

📝 Explanation and details
The optimized code achieves a 13% speedup through several targeted micro-optimizations that reduce function call overhead and improve attribute access patterns:
Key Optimizations Applied:
1. **Import hoisting:** Moved the `escape_json_dump` import to module scope, eliminating 450 repeated import lookups (94.7% of `_encode_handle` time in the profiler).
2. **Attribute access caching:** Stored `compact_edge.source`, `compact_edge.target`, etc. in local variables (`src`, `tgt`, `src_out`, `tgt_in`) to avoid repeated attribute lookups throughout the function.
3. **Generator expression to for-loop:** Replaced `next((o for o in source_outputs if o.get("name") == compact_edge.source_output), None)` with an explicit for-loop and an early break, eliminating the generator overhead and the function call overhead of `next()`.
4. **Type checking optimization:** Changed `isinstance(target_field, dict)` to `type(target_field) is dict`, which is faster for exact type matches in CPython.
5. **Inlined dictionary creation:** Replaced the helper calls `_build_source_handle_data()` and `_build_target_handle_data()` with direct dictionary construction, eliminating function call overhead (20.8% of total time).
6. **Direct function calls:** Replaced `_encode_handle()` calls with direct `escape_json_dump()` calls, removing an unnecessary wrapper layer.
7. **String concatenation:** Used explicit string concatenation instead of f-strings for `edge_id` construction, which is marginally faster when several pieces are joined.

Performance Impact:
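The patterns above can be sketched together in one hypothetical function. This is only an illustration of the techniques, not the real `_expand_edge` (which lives in `src/backend/base/langflow/processing/expand_flow.py`); the attribute names, handle-dict shapes, and the `escape_json_dump` stand-in (aliased here to `json.dumps`) are assumptions for the sketch.

```python
from json import dumps as escape_json_dump  # stand-in; import hoisted to module scope (pattern 1)


def expand_edge_sketch(compact_edge, source_outputs):
    """Illustrates the micro-optimization patterns; not the real _expand_edge."""
    # Pattern 2: read each attribute once into a local variable.
    src = compact_edge.source
    tgt = compact_edge.target
    src_out = compact_edge.source_output
    tgt_in = compact_edge.target_input

    # Pattern 3: explicit loop with early break instead of next(generator, None).
    output = None
    for o in source_outputs:
        if o.get("name") == src_out:
            output = o
            break

    # Pattern 4: exact type check, faster than isinstance() when subclasses don't matter.
    target_field = output.get("field") if output else None
    if type(target_field) is dict:
        tgt_in = target_field.get("name", tgt_in)

    # Pattern 5: build the handle dicts inline instead of calling helper functions.
    source_handle = {"dataType": src, "name": src_out}
    target_handle = {"fieldName": tgt_in, "id": tgt}

    # Patterns 6 and 7: call escape_json_dump directly (no wrapper) and use
    # plain concatenation instead of an f-string for the edge id.
    return (
        "xy-edge__" + src + escape_json_dump(source_handle)
        + "-" + tgt + escape_json_dump(target_handle)
    )
```

Each change is small on its own; the point of the PR is that they compound when the function runs once per edge in a large flow.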
The optimizations are particularly effective for the test cases with large numbers of nodes and edges (like `test_large_scale_multiple_edges` with 100 edges), where the cumulative effect of eliminating function call overhead and repeated attribute access becomes significant. For basic test cases, the improvements are more modest but still measurable due to the high-frequency nature of import and attribute access operations.

These optimizations maintain identical behavior while reducing execution time from 2.10ms to 1.85ms, making the function more efficient for flow processing workloads that may call this function repeatedly.
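One of the claims above, that `type(x) is dict` beats `isinstance(x, dict)` for exact matches, is easy to sanity-check with a `timeit` micro-benchmark. Absolute numbers depend on machine and CPython version, so this is a quick check of the idea, not a reproduction of the PR's measurements:

```python
import timeit

d = {"a": 1}

# Time one million exact-type checks each way. isinstance() also walks the
# subclass hierarchy, which type(...) is ... skips entirely.
t_isinstance = timeit.timeit(lambda: isinstance(d, dict), number=1_000_000)
t_type_is = timeit.timeit(lambda: type(d) is dict, number=1_000_000)

print(f"isinstance: {t_isinstance:.3f}s  type-is: {t_type_is:.3f}s")
```

Note the trade-off: `type(x) is dict` returns `False` for `dict` subclasses (e.g. `collections.OrderedDict`), so it is only a safe swap when, as here, the values are known to be plain dicts.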
✅ Correctness verification report:
⚙️ Existing Unit Tests and Runtime
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-pr10785-2025-11-28T20.23.51` and push.