Conversation

rodrigo-o (Collaborator)

Motivation

After some snap sync optimizations (#4777), we detected a surge in memory consumption during storage requests.

Description

We saw that ~25% of the allocated memory went to the storage range requests.

This PR chunks the storage requests and removes the large memory allocations.

@rodrigo-o rodrigo-o changed the title Storage request chunking chore(l1): chunk storage request during snapsync Oct 8, 2025
@github-actions github-actions bot added the L1 Ethereum client label Oct 8, 2025
rodrigo-o (Collaborator, Author)

Here are two runs compared: main (before the revert of #4689) and this branch.

Note: both tests had #4599 reverted, since it was affecting the baseline memory consumption.

Node Memory (RSS)

| Main | This PR |
| --- | --- |
| Mean: 9.2GB, Max: 20GB | Mean: 9GB, Max: 16GB |

Also, `storage_request_ranges` no longer shows up in the flamegraph at the peak points during the storage request (flamegraphs compared for main and this PR).

github-actions bot commented Oct 8, 2025

Lines of code report

Total lines added: 48
Total lines removed: 0
Total lines changed: 48

Detailed view
+----------------------------------------------+-------+------+
| File                                         | Lines | Diff |
+----------------------------------------------+-------+------+
| ethrex/crates/networking/p2p/peer_handler.rs | 1845  | +48  |
+----------------------------------------------+-------+------+

@rodrigo-o rodrigo-o marked this pull request as ready for review October 8, 2025 21:20
@rodrigo-o rodrigo-o requested a review from a team as a code owner October 8, 2025 21:20
@Copilot Copilot AI review requested due to automatic review settings October 8, 2025 21:20
@ethrex-project-sync ethrex-project-sync bot moved this to In Review in ethrex_l1 Oct 8, 2025
Copilot AI (Contributor) left a comment
Pull Request Overview

This PR optimizes memory consumption during snap sync by processing storage requests in chunks. It addresses a memory surge in which ~25% of allocated memory went to storage range requests, breaking large requests into smaller, manageable chunks.

Key changes:

  • Introduced chunked processing with STORAGE_ROOTS_PER_CHUNK (10,000) and STORAGE_ROOTS_PER_TASK (300) constants
  • Refactored request_storage_ranges to process storage roots in chunks rather than all at once
  • Added new process_storage_chunk method to handle individual chunks
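The two-level chunking described above can be sketched in isolation. This is a minimal illustration, not the PR's actual `request_storage_ranges`/`process_storage_chunk` code: the constant values mirror the ones named in the review (10,000 roots per chunk, 300 per task), the `chunked_tasks` helper and the `20_500` root count are hypothetical, and each task is reduced to a `(start, end)` index pair.

```rust
const STORAGE_ROOTS_PER_CHUNK: usize = 10_000;
const STORAGE_ROOTS_PER_TASK: usize = 300;

// Hypothetical sketch: split the storage roots into chunks, then split each
// chunk into fixed-size tasks, so that no single allocation has to cover the
// full set of roots at once.
fn chunked_tasks(total_roots: usize) -> Vec<(usize, usize)> {
    let mut tasks = Vec::new();
    for chunk_start in (0..total_roots).step_by(STORAGE_ROOTS_PER_CHUNK) {
        let chunk_end = (chunk_start + STORAGE_ROOTS_PER_CHUNK).min(total_roots);
        for task_start in (chunk_start..chunk_end).step_by(STORAGE_ROOTS_PER_TASK) {
            let task_end = (task_start + STORAGE_ROOTS_PER_TASK).min(chunk_end);
            tasks.push((task_start, task_end));
        }
    }
    tasks
}

fn main() {
    let tasks = chunked_tasks(20_500);
    // Chunks of 10_000, 10_000 and 500 roots; each chunk is covered by
    // tasks of at most 300 roots (34 + 34 + 2 tasks here).
    assert_eq!(tasks.len(), 70);
    assert_eq!(tasks[0], (0, 300));
    assert_eq!(tasks.last(), Some(&(20_300, 20_500)));
    assert!(tasks.iter().all(|&(s, e)| e > s && e - s <= STORAGE_ROOTS_PER_TASK));
    println!("{} tasks", tasks.len());
}
```

Note that task boundaries never cross a chunk boundary, so a partially filled final task can occur at the end of every chunk, not only at the very end of the root set.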


Comment on lines +1355 to +1365

```diff
 let task_span = STORAGE_ROOTS_PER_TASK.min(STORAGE_ROOTS_PER_CHUNK);
 // how many fully-populated task_span slices fit in
 let task_partition_count = total_roots.div_ceil(task_span);

 // list of tasks to be executed
 // Types are (start_index, end_index, starting_hash)
 // NOTE: end_index is NOT inclusive

 let mut tasks_queue_not_started = VecDeque::<StorageTask>::new();
-for i in 0..chunk_count {
-    let chunk_start = chunk_size * i;
-    let chunk_end = (chunk_start + chunk_size).min(accounts_by_root_hash.len());
+for i in 0..task_partition_count {
+    let chunk_start = task_span * i;
+    let chunk_end = ((i + 1) * task_span).min(total_roots);
```
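The index arithmetic in that loop can be checked on its own. The sketch below is an assumption-laden extraction, not the PR's code: `partition` is a hypothetical helper, the constants echo the values stated in the review summary, and the `1_000`-root input is illustrative.

```rust
const STORAGE_ROOTS_PER_CHUNK: usize = 10_000;
const STORAGE_ROOTS_PER_TASK: usize = 300;

// Reproduce the diff's partitioning: div_ceil rounds up so a trailing
// partial slice still gets its own (start, end) pair; end is exclusive.
fn partition(total_roots: usize) -> Vec<(usize, usize)> {
    let task_span = STORAGE_ROOTS_PER_TASK.min(STORAGE_ROOTS_PER_CHUNK);
    let task_partition_count = total_roots.div_ceil(task_span);
    (0..task_partition_count)
        .map(|i| (task_span * i, ((i + 1) * task_span).min(total_roots)))
        .collect()
}

fn main() {
    let tasks = partition(1_000);
    // 1_000 roots at 300 per task -> 4 tasks, the last one partial.
    assert_eq!(tasks.len(), 4);
    assert_eq!(tasks[0], (0, 300));
    assert_eq!(tasks[3], (900, 1_000));
    println!("{tasks:?}");
}
```

The `min` against `total_roots` is what clamps the final task, so no task ever indexes past the end of the root list.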
Copilot AI commented Oct 8, 2025

[nitpick] The variable name `task_span` is unclear. Consider renaming it to `roots_per_task` or `task_size` to better indicate that it represents the number of storage roots processed per task.

Labels: L1 Ethereum client
Project status: In Review (ethrex_l1)