⚡️ Speed up function get_cache_service by 964% in PR #10953 (fix/folders_download)
#10956
Conversation
The optimization implements **factory instance caching** to eliminate repeated object creation overhead. The key change is in `get_cache_service()`, which now caches a single `CacheServiceFactory` instance using function attributes instead of creating a new factory on every call.

**What changed:**
- Added a conditional check `if not hasattr(get_cache_service, "_cache_service_factory")` to create the factory only once
- Stores the factory instance as `get_cache_service._cache_service_factory` on the function object
- Reuses the cached factory instance on subsequent calls

**Why this is faster:**
The original code created a new `CacheServiceFactory()` object on every call to `get_cache_service()`. Object instantiation in Python involves memory allocation, constructor execution, and attribute initialization. With 1025 calls in the profiler results, this created 1025 unnecessary factory objects. The optimization reduces this to a single factory creation (on the first call) plus 1024 fast attribute lookups. The line profiler shows the factory creation time dropped from 187μs per call to just 524ns for the attribute check, with the expensive creation happening only once (36.4% of total time in one call vs. 99.5% spread across all calls).

**Performance impact:**
- **963% speedup** (1.94ms → 183μs runtime)
- Most effective for workloads with frequent cache service requests
- Particularly beneficial in service-oriented architectures where `get_cache_service()` is called repeatedly during request processing
- The caching eliminates redundant factory instantiation while preserving the same service manager behavior

The optimization works well across all test scenarios, from basic single calls to large-scale operations with 1000+ service requests, making it universally beneficial without changing the API contract.
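The function-attribute caching pattern described above can be sketched as follows. Note this is a minimal illustration, not the actual Langflow code: `CacheServiceFactory` here is a hypothetical stand-in class, and the real implementation in `src/backend/base/langflow/services/deps.py` returns a service from the factory rather than the factory itself.

```python
class CacheServiceFactory:
    """Hypothetical stand-in; pretend construction is expensive."""

    def __init__(self):
        self.created = True


def get_cache_service():
    # Create the factory only on the first call and stash it as an
    # attribute on the function object itself; later calls skip the
    # expensive constructor and pay only a cheap hasattr() check.
    if not hasattr(get_cache_service, "_cache_service_factory"):
        get_cache_service._cache_service_factory = CacheServiceFactory()
    return get_cache_service._cache_service_factory


# Repeated calls return the same cached instance.
a = get_cache_service()
b = get_cache_service()
print(a is b)  # → True
```

A module-level variable or `functools.lru_cache` would achieve the same effect; storing the instance on the function object keeps the cache's lifetime and ownership visibly tied to the function without adding module-level state.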
**Codecov Report**

✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff            @@
##  fix/folders_download  #10956  +/- ##
=====================================================
  Coverage     33.06%    33.06%
=====================================================
  Files          1368      1368
  Lines         63815     63817    +2
  Branches       9391      9391
=====================================================
+ Hits          21100     21101    +1
- Misses        41671     41673    +2
+ Partials       1044      1043    -1
```
⚡️ This pull request contains optimizations for PR #10953.
If you approve this dependent PR, these changes will be merged into the original PR branch `fix/folders_download`.

📄 **964% (9.64x) speedup** for `get_cache_service` in `src/backend/base/langflow/services/deps.py`

⏱️ Runtime: `1.94 milliseconds` → `183 microseconds` (best of `160` runs)

📝 Explanation and details
✅ Correctness verification report:
🌀 Generated Regression Tests and Runtime
To edit these changes, run `git checkout codeflash/optimize-pr10953-2025-12-10T18.10.12` and push.