
Conversation

@twoeths (Contributor) commented Dec 5, 2025

Motivation

Description

  • use @chainsafe/snappy-wasm for all topics, but with Buffer.alloc() instead of Buffer.allocUnsafe() (see the sketch below)
  • this is deployed to feat2
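A minimal sketch of the allocation pattern, assuming a snappy-wasm style binding that reports the uncompressed length and decompresses into a caller-provided buffer; the real @chainsafe/snappy-wasm export names and the Lodestar DataTransformSnappy code may differ:

```ts
import {Buffer} from "node:buffer";

// Illustrative shape of a snappy-wasm style binding; the actual
// @chainsafe/snappy-wasm exports may have different names/signatures.
interface SnappyWasm {
  uncompressedLength(compressed: Uint8Array): number;
  decompressInto(compressed: Uint8Array, output: Uint8Array): number;
}

export function snappyDecompress(snappy: SnappyWasm, compressed: Uint8Array): Buffer {
  // Buffer.alloc() zero-fills and never slices from Node's shared internal
  // pool (Buffer.allocUnsafe() may), which is what this PR switches to in
  // order to reduce MarkSweepCompact GC pressure on the main thread.
  const output = Buffer.alloc(snappy.uncompressedLength(compressed));
  const bytesWritten = snappy.decompressInto(compressed, output);
  return output.subarray(0, bytesWritten);
}
```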

@twoeths requested a review from a team as a code owner December 5, 2025 07:52
@gemini-code-assist (Contributor)

Summary of Changes

Hello @twoeths, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a performance issue caused by excessive garbage collection by changing how buffers are allocated in the snappy-wasm compression and decompression path. Switching from Buffer.allocUnsafe() to Buffer.alloc() means newly allocated memory is zero-filled, avoiding exposure of uninitialized memory, and it markedly reduces the MarkSweepCompact GC pressure observed on the main thread, improving overall stability and memory management during data transformation.

Highlights

  • GC Performance Improvement: Replaced Buffer.allocUnsafe() with Buffer.alloc() in the DataTransformSnappy class to mitigate the high MarkSweepCompact garbage collection activity observed on the main thread of nodes, addressing issues similar to those reported in #8647 (feat: use snappy-wasm (#6483)).

@gemini-code-assist (bot) left a comment

Code Review

This pull request addresses a performance issue related to garbage collection by replacing Buffer.allocUnsafe() with Buffer.alloc() when handling snappy compression and decompression. The changes are correct and effectively resolve the reported problem. My review includes suggestions to improve the new code comments for better clarity and long-term maintainability, and also points out an additional security benefit of the change in one of the cases.

@twoeths (Contributor, Author) commented Dec 5, 2025

this fixes the issue described in #8647

  • attestation job times are the same as on unstable
Screenshot 2025-12-05 at 17 26 46
  • memory on the main thread and network thread is also the same
Screenshot 2025-12-05 at 17 28 57

@twoeths (Contributor, Author) commented Dec 8, 2025

there are no clear differences between the metrics of this PR and #8670,
except for the Event loop lag NETWORK WORKER metric of this PR on the hoodi sas node

Screenshot 2025-12-08 at 15 13 34

set_immediate is much smaller here than the same metric on #8670 (feat3)
Screenshot 2025-12-08 at 15 14 17

@codecov (bot) commented Dec 8, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 52.03%. Comparing base (a6b3c51) to head (7fc5ed0).
⚠️ Report is 1 commit behind head on cayman/snappy.

Additional details and impacted files
@@              Coverage Diff               @@
##           cayman/snappy    #8671   +/-   ##
==============================================
  Coverage          52.03%   52.03%           
==============================================
  Files                848      848           
  Lines              65809    65809           
  Branches            4809     4809           
==============================================
  Hits               34241    34241           
  Misses             31499    31499           
  Partials              69       69           

@wemeetagain merged commit 9db8579 into cayman/snappy on Dec 9, 2025
20 of 23 checks passed
@wemeetagain deleted the te/cayman/snappy_Buffer_alloc branch on December 9, 2025 03:25
wemeetagain pushed a commit that referenced this pull request Dec 16, 2025
**Motivation**

- improve memory usage by transferring gossipsub message data from the network
thread to the main thread
- In the snappy decompression from #8647 we had to use `Buffer.alloc()` instead
of `Buffer.allocUnsafe()`. We don't have to feel bad about that, because
`Buffer.allocUnsafe()` does not work with this PR and we don't waste any memory
(see the illustration below).
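A small illustration of that point using plain Node.js Buffer behavior (not Lodestar code): small `Buffer.allocUnsafe()` allocations may be views into a shared pool ArrayBuffer, so transferring their `.buffer` would detach memory used by other Buffers, while `Buffer.alloc()` is backed by its own ArrayBuffer.

```ts
import {Buffer} from "node:buffer";

// Buffer.allocUnsafe() allocations smaller than half of Buffer.poolSize
// (8 KiB by default) are carved out of a shared internal pool: several
// Buffers can sit on the same underlying ArrayBuffer.
const pooled = Buffer.allocUnsafe(1024);
console.log(pooled.byteOffset, pooled.buffer.byteLength); // e.g. 2048 8192

// Buffer.alloc() is zero-filled and backed by its own ArrayBuffer, so
// transferring `owned.buffer` to another thread moves exactly these bytes
// and cannot detach memory shared with other Buffers.
const owned = Buffer.alloc(1024);
console.log(owned.byteOffset, owned.buffer.byteLength); // 0 1024
```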

**Description**

- use the `transferList` param when posting messages from the network thread to
the main thread (see the sketch below)

part of #8629
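A minimal sketch of the `transferList` usage on the network worker side, assuming a hypothetical `postGossipMessageToMain` helper; the real Lodestar message shape and worker channel differ:

```ts
import {parentPort} from "node:worker_threads";

// Runs on the network worker. `data` must own its ArrayBuffer (e.g. it came
// from Buffer.alloc(), not from the Buffer.allocUnsafe() pool), otherwise
// transferring `data.buffer` would detach memory shared with other Buffers.
export function postGossipMessageToMain(topic: string, data: Uint8Array): void {
  // The second argument is the transferList: the ArrayBuffer is moved to the
  // main thread instead of being copied by the structured clone algorithm.
  parentPort?.postMessage({topic, data}, [data.buffer as ArrayBuffer]);
}
```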

**Testing**
I've tested this on `feat2` for 3 days; the previous branch was #8671, so
it's basically the current stable. I didn't see a significant improvement, but
there is some good data for different nodes:
- no change on 1k or `novc`
- on the hoodi `sas` node we have better memory on the main thread with the
same mesh peers, and the same memory on the network thread

<img width="851" height="511" alt="Screenshot 2025-12-12 at 11 05 27"
src="https://github.com/user-attachments/assets/8d7b2c2f-8213-4f89-87e0-437d016bc24a"
/>

- on the mainnet `sas` node, we have better memory on the network thread, and a
little bit worse on the main thread
<img width="854" height="504" alt="Screenshot 2025-12-12 at 11 08 42"
src="https://github.com/user-attachments/assets/7e638149-2dbe-4c7e-849c-ef78f6ff4d6f"
/>

- but for this mainnet node, the most interesting metric is `forward msg
avg peers`: we're faster than the majority of peers

<img width="1378" height="379" alt="Screenshot 2025-12-12 at 11 11 00"
src="https://github.com/user-attachments/assets/3ba5eeaa-5a11-4cad-adfa-1e0f68a81f16"
/>

---------

Co-authored-by: Tuyen Nguyen <[email protected]>
