[SPARK-4516] Avoid allocating Netty PooledByteBufAllocators unnecessarily #3465
Conversation
Turns out we are allocating an allocator pool for every TransportClient (which means that the number increases with the number of nodes in the cluster), when really we should just reuse one for all clients.

This patch, as expected, greatly decreases off-heap memory allocation, and appears to make allocation proportional only to the number of cores.

This should be merged into 1.2, as it should simply reduce memory issues in containerized environments.
LGTM.
Test build #23866 has started for PR 3465 at commit
Thanks Aaron, waiting on this to cut rc1.
Test build #23866 has finished for PR 3465 at commit
Test PASSed.
Thanks Aaron, that's great.
Thanks Aaron, it's great.
Turns out we are allocating an allocator pool for every TransportClient (which means that the number increases with the number of nodes in the cluster), when really we should just reuse one for all clients.

This patch, as expected, greatly decreases off-heap memory allocation, and appears to make allocation proportional only to the number of cores.

Author: Aaron Davidson <[email protected]>

Closes apache#3465 from aarondav/fewer-pools and squashes the following commits:

36c49da [Aaron Davidson] [SPARK-4516] Avoid allocating Netty PooledByteBufAllocators unnecessarily

(cherry picked from commit 346bc17)
Signed-off-by: Patrick Wendell <[email protected]>
Merging into master and branch-1.2.
Thanks.
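The fix discussed in this thread amounts to sharing one pooled allocator across all clients instead of constructing a fresh pool per TransportClient. Below is a minimal, self-contained Java sketch of that pattern; `PooledAllocator`, `TransportClient`, and `SharedAllocatorDemo` are hypothetical stand-ins written for illustration (the real Netty `PooledByteBufAllocator` reserves per-thread arenas of off-heap memory up front, which is why one instance per client was so costly), not the actual Spark code.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for Netty's PooledByteBufAllocator: each instance
// models a pool whose arenas reserve off-heap memory as soon as it exists.
class PooledAllocator {
    static final AtomicInteger instances = new AtomicInteger();
    PooledAllocator() { instances.incrementAndGet(); }
}

// Illustrative client: the bug was, in effect, `new PooledAllocator()` here,
// once per client, so pools grew with the number of peer nodes.
class TransportClient {
    final PooledAllocator allocator;
    TransportClient(PooledAllocator allocator) { this.allocator = allocator; }
}

public class SharedAllocatorDemo {
    // The fix: one lazily created allocator handed to every client.
    private static PooledAllocator shared;

    static synchronized PooledAllocator sharedAllocator() {
        if (shared == null) shared = new PooledAllocator();
        return shared;
    }

    public static void main(String[] args) {
        // 100 clients (e.g. one per peer in the cluster) share a single pool,
        // so off-heap usage no longer scales with cluster size.
        for (int i = 0; i < 100; i++) {
            new TransportClient(sharedAllocator());
        }
        System.out.println("allocators created: " + PooledAllocator.instances.get());
    }
}
```

With the per-client pattern the counter would read 100; with the shared accessor it reads 1 regardless of how many clients are built, mirroring the patch's claim that allocation becomes independent of node count.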