[SPARK-10087] [CORE] [BRANCH-1.5] Disable spark.shuffle.reduceLocality.enabled by default. #8296
Conversation
LGTM

Test build #41194 has finished for PR 8296 at commit

Test build #41208 timed out for PR 8296 at commit

retest this please.

Test build #41235 timed out for PR 8296 at commit

Test build #1669 has finished for PR 8296 at commit

Test build #1671 has finished for PR 8296 at commit

I've merged this.
…y.enabled by default. https://issues.apache.org/jira/browse/SPARK-10087 In some cases, when spark.shuffle.reduceLocality.enabled is enabled, all reducers are scheduled to the same executor even though the cluster has plenty of resources. Changing spark.shuffle.reduceLocality.enabled to false resolves the problem. Comments on #8280 provide more details on the symptom of this issue. This PR changes the default setting of `spark.shuffle.reduceLocality.enabled` to `false` for branch 1.5. Author: Yin Huai <[email protected]> Closes #8296 from yhuai/setNumPartitionsCorrectly-branch1.5.
https://issues.apache.org/jira/browse/SPARK-10087
In some cases, when spark.shuffle.reduceLocality.enabled is enabled, all reducers are scheduled to the same executor even though the cluster has plenty of resources. Changing spark.shuffle.reduceLocality.enabled to false resolves the problem.
Comments on #8280 provide more details on the symptom of this issue.
This PR changes the default setting of `spark.shuffle.reduceLocality.enabled` to `false` for branch 1.5.
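For users on a build where the old default is still in effect, the same change can be applied per-cluster or per-job without patching Spark. A minimal sketch (standard Spark configuration mechanisms; paths are illustrative):

```
# conf/spark-defaults.conf — cluster-wide default
spark.shuffle.reduceLocality.enabled  false
```

Or, equivalently, for a single application at submit time:

```
spark-submit \
  --conf spark.shuffle.reduceLocality.enabled=false \
  --class com.example.MyApp myapp.jar   # example class/jar, not from this PR
```

Either form overrides the compiled-in default, so reducers are no longer packed onto one executor when the locality heuristic misfires.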