chore(l1): chunk storage request during snapsync #4821
base: main
Conversation
Here are 2 runs compared, between main (before the revert of #4689) and this branch. Note: both tests had #4599 reverted, since it was affecting the baseline memory consumption.

Node Memory (RSS) [chart]
Also, now we don't have
Lines of code report
Total lines added: see detailed view
Pull Request Overview
This PR reduces memory consumption during snapshot sync by processing storage requests in chunks. It addresses a memory surge in which ~25% of memory was allocated for storage range requests, breaking those large requests into smaller, manageable chunks.
Key changes:
- Introduced chunked processing with `STORAGE_ROOTS_PER_CHUNK` (10,000) and `STORAGE_ROOTS_PER_TASK` (300) constants
- Refactored `request_storage_ranges` to process storage roots in chunks rather than all at once
- Added new `process_storage_chunk` method to handle individual chunks
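A rough sketch of the resulting two-level split (illustrative only, not the actual ethrex code; `storage_roots` and `spawn_storage_task` are placeholder names for this example):

```rust
// Hypothetical sketch of chunked storage-root processing, assuming the
// constants from this PR. The outer loop bounds how many roots are buffered
// at once; the inner loop sizes the individual storage-range tasks.
const STORAGE_ROOTS_PER_CHUNK: usize = 10_000;
const STORAGE_ROOTS_PER_TASK: usize = 300;

fn request_storage_ranges_sketch(storage_roots: Vec<[u8; 32]>) {
    // Process one bounded chunk at a time instead of allocating requests
    // for the entire set of storage roots up front.
    for chunk in storage_roots.chunks(STORAGE_ROOTS_PER_CHUNK) {
        // Split the chunk into small tasks that map to individual
        // storage-range requests.
        for task in chunk.chunks(STORAGE_ROOTS_PER_TASK) {
            spawn_storage_task(task);
        }
    }
}

fn spawn_storage_task(_roots: &[[u8; 32]]) {
    // placeholder: the real code would enqueue a StorageTask here
}

fn main() {
    // tiny demo with dummy roots
    request_storage_ranges_sketch(vec![[0u8; 32]; 25_000]);
}
```

With this layout the peak allocation for pending requests scales with `STORAGE_ROOTS_PER_CHUNK` rather than with the total number of accounts.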
let task_span = STORAGE_ROOTS_PER_TASK.min(STORAGE_ROOTS_PER_CHUNK);
// how many task_span-sized slices are needed to cover total_roots
let task_partition_count = total_roots.div_ceil(task_span);

// list of tasks to be executed
// Types are (start_index, end_index, starting_hash)
// NOTE: end_index is NOT inclusive
let mut tasks_queue_not_started = VecDeque::<StorageTask>::new();
-for i in 0..chunk_count {
-    let chunk_start = chunk_size * i;
-    let chunk_end = (chunk_start + chunk_size).min(accounts_by_root_hash.len());
+for i in 0..task_partition_count {
+    let chunk_start = task_span * i;
+    let chunk_end = ((i + 1) * task_span).min(total_roots);
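For reference, a standalone example (hypothetical numbers, not project code) showing that this arithmetic produces non-overlapping half-open ranges that cover every root, with the last range clamped by `.min(total_roots)`:

```rust
fn main() {
    let total_roots: usize = 1_000; // example value; in the PR this is the chunk size
    let task_span: usize = 300;     // STORAGE_ROOTS_PER_TASK
    let task_partition_count = total_roots.div_ceil(task_span); // 4

    let mut covered = 0;
    for i in 0..task_partition_count {
        let chunk_start = task_span * i;
        let chunk_end = ((i + 1) * task_span).min(total_roots);
        // prints 0..300, 300..600, 600..900, 900..1000
        println!("task {i}: {chunk_start}..{chunk_end}");
        covered += chunk_end - chunk_start;
    }
    assert_eq!(covered, total_roots); // no gaps, no overlap
}
```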
[nitpick] The variable name `task_span` is unclear. Consider renaming it to `roots_per_task` or `task_size` to better indicate it represents the number of storage roots processed per task.
Motivation
After some snap sync optimizations (#4777), we detected a surge in memory consumption during storage requests.
Description
We saw that ~25% of the memory was allocated for the storage range requests:

This PR chunks the storage requests and removes the large memory allocations.