⚡️ Speed up function get_cache_service by 1,131% in PR #10953 (fix/folders_download)
#10964
Conversation
… to use Union for better clarity and compatibility
The optimized code achieves an **11x speedup** (1131%) by implementing two key caching strategies that eliminate redundant expensive operations:

**1. Module-level Service Manager Caching**

The original code calls `get_service_manager()` on every invocation, which is expensive. The optimization introduces a global `_service_manager_cache` that stores the service manager instance after first initialization. This eliminates:

- Repeated imports of `lfx.services.manager`
- Repeated `get_service_manager()` calls
- Redundant factory registration checks and calls

From the profiler data, the import and service manager creation (`get_service_manager()`) now happen only once instead of 38 times, reducing overhead significantly.

**2. Factory Instance Caching in `get_cache_service`**

The original code creates a new `CacheServiceFactory()` instance on every call. The optimization caches this factory as a function attribute using a `hasattr` check, since the same factory instance can be reused safely.

**Performance Impact Analysis**

The line profiler shows the optimization retains the same core expensive operation (`service_manager.get()`, taking ~75% of time) while dramatically reducing setup overhead. Service manager initialization and factory registration still take ~24% of time but now occur only once rather than repeatedly.

**Test Case Performance**

Based on the annotated tests, this optimization particularly benefits:

- **Repeated-call scenarios** (like `test_cache_service_reuse_instance`) where the same service is requested multiple times
- **Large-scale operations** (like `test_cache_service_many_keys` with 500 operations) where service retrieval happens frequently
- **Performance-critical paths** where cache service access is repeated

The optimization is especially valuable in applications where `get_cache_service()` or `get_service()` are called frequently, as the first-call initialization cost is amortized across all subsequent calls.
Important: Review skipped (bot user detected).
Codecov Report
❌ Patch coverage is …
❌ Your project status has failed because the head coverage (40.02%) is below the target coverage (60.00%). You can increase the head coverage or adjust the target coverage.
Additional details and impacted files:
@@ Coverage Diff @@
## release-1.7.0 #10964 +/- ##
==============================================
Coverage 33.05% 33.06%
==============================================
Files 1368 1368
Lines 63815 63822 +7
Branches 9391 9391
==============================================
+ Hits 21093 21100 +7
Misses 41679 41679
Partials 1043 1043
Flags with carried forward coverage won't be shown.
⚡️ This pull request contains optimizations for PR #10953. If you approve this dependent PR, these changes will be merged into the original PR branch `fix/folders_download`.
📄 1,131% (11.31x) speedup for `get_cache_service` in `src/backend/base/langflow/services/deps.py`
⏱️ Runtime: 3.09 milliseconds → 251 microseconds (best of 146 runs)
📝 Explanation and details
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-pr10953-2025-12-10T21.56.05` and push.