Conversation

@XuTingjun
Contributor

I added the configurations below:
`spark.yarn.am.cores`/`SPARK_MASTER_CORES`/`SPARK_DRIVER_CORES` for yarn-client mode;
`spark.driver.cores` for yarn-cluster mode.
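
For illustration, a minimal sketch of how the proposed properties might be set through SparkConf; the application name and core counts are made-up examples, and the `SPARK_MASTER_CORES`/`SPARK_DRIVER_CORES` environment variables are omitted here:

```scala
import org.apache.spark.SparkConf

// Hypothetical usage of the properties proposed in this PR;
// the core counts are arbitrary example values.
val conf = new SparkConf()
  .setAppName("am-cores-example")
  // yarn-client mode: the AM runs separately from the driver,
  // so it gets its own core setting.
  .set("spark.yarn.am.cores", "2")
  // yarn-cluster mode: the driver runs inside the AM,
  // so the driver's core count sizes the AM container.
  .set("spark.driver.cores", "4")
```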

@XuTingjun XuTingjun changed the title specify AM core in yarn-client and yarn-cluster mode [SPARK-1507][YARN]specify num of cores for AM Dec 26, 2014
@AmplabJenkins

Can one of the admins verify this patch?

@sryza
Contributor

sryza commented Dec 26, 2014

SPARK_MASTER_CORES uses "master" incorrectly. The only reason we have SPARK_MASTER_MEMORY was to preserve backwards compatibility.

This patch also still appears to use "--driver-cores" and SPARK_DRIVER_CORES to set the AM cores in yarn-client mode, which we talked about avoiding in the previous PR.

@XuTingjun
Contributor Author

@sryza, I don't agree with you. I only added the code below for cluster mode, so "--driver-cores" will not work in client mode:
OptionAssigner(args.driverCores, YARN, CLUSTER, clOption = "--driver-cores"),
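
For context, a simplified, self-contained sketch of the OptionAssigner routing pattern that SparkSubmit uses; the names and constants below are paraphrased rather than copied from the Spark source:

```scala
// Simplified sketch of SparkSubmit's OptionAssigner routing (Spark 1.x era);
// names and constants are paraphrased, not copied from the Spark source.
object OptionRoutingSketch {
  // Bit masks for cluster managers and deploy modes.
  val YARN = 1
  val STANDALONE = 2
  val CLIENT = 1
  val CLUSTER = 2

  case class OptionAssigner(
      value: String,
      clusterManager: Int,
      deployMode: Int,
      clOption: String = null,
      sysProp: String = null)

  // Forward each matching assigner's value as a child-process argument.
  def route(clusterManager: Int, deployMode: Int): Seq[String] = {
    val options = Seq(
      // Declared for YARN + CLUSTER only, so --driver-cores is never
      // forwarded when submitting in client mode.
      OptionAssigner("4", YARN, CLUSTER, clOption = "--driver-cores"))

    options.collect {
      case opt
          if (opt.clusterManager & clusterManager) != 0 &&
            (opt.deployMode & deployMode) != 0 &&
            opt.clOption != null =>
        Seq(opt.clOption, opt.value)
    }.flatten
  }

  def main(args: Array[String]): Unit = {
    println(route(YARN, CLUSTER)) // List(--driver-cores, 4)
    println(route(YARN, CLIENT))  // List() -- the option does not apply
  }
}
```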

@XuTingjun
Contributor Author

@andrewor14

Contributor

no need to add a deprecated "--master-cores" property

@sryza
Contributor

sryza commented Jan 5, 2015

@XuTingjun ah, that's correct. Looking more closely, my confusion was stemming from some existing weirdness: setting the "spark.driver.memory" property will set the application master memory even in client mode. Also, passing "--driver-memory" as part of the YARN code's ClientArguments (but not to spark-submit) will do this as well.

My opinion is that we should fix those, though I suppose it could be argued that it breaks backwards compatibility?

@XuTingjun
Contributor Author

@sryza, do you mean that since "spark.driver.memory" works in both yarn-client and yarn-cluster mode, we should use one configuration, maybe named "spark.driver.cores", to set AM cores in both modes?

@sryza
Contributor

sryza commented Jan 5, 2015

I think the best thing would be to split spark.driver.memory into spark.driver.memory and spark.yarn.am.memory, and to have the latter only work for the yarn-client AM.
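
A minimal sketch of what that split could look like from the application side, assuming SparkConf-based configuration; the memory values are placeholders:

```scala
import org.apache.spark.SparkConf

// Sketch of the proposed split; the memory values are placeholders.
val conf = new SparkConf()
  // Memory for the driver process itself, in either deploy mode.
  .set("spark.driver.memory", "4g")
  // Memory for the AM, honored only in yarn-client mode,
  // where the AM is a separate process from the driver.
  .set("spark.yarn.am.memory", "1g")
```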

@XuTingjun
Contributor Author

Yeah, I agree with you. I will fix this later. Thanks @sryza

@XuTingjun
Contributor Author

@sryza, I have split spark.driver.memory into spark.driver.memory and spark.yarn.am.memory. Please have a look.

@WangTaoTheTonic
Contributor

Sorry but I have already filed a PR that splits spark.driver.memory in #3607.

Could you please check first whether anyone has already done the same work?

Contributor (inline review comment):

indent

@andrewor14
Contributor

Hi @XuTingjun, there seems to be a certain degree of overlap with the work in #3607. Also, my concern with this PR is that it conflates "driver" and "AM" in a few places. For instance, --driver-cores should never configure the number of cores used by the AM in client mode.

@XuTingjun XuTingjun closed this Jan 8, 2015
asfgit pushed a commit that referenced this pull request Jan 16, 2015
Based on top of changes in #3806.

https://issues.apache.org/jira/browse/SPARK-1507

`--driver-cores` and `spark.driver.cores` for all cluster modes and `spark.yarn.am.cores` for yarn client mode.

Author: WangTaoTheTonic <[email protected]>
Author: WangTao <[email protected]>

Closes #4018 from WangTaoTheTonic/SPARK-1507 and squashes the following commits:

01419d3 [WangTaoTheTonic] amend the args name
b255795 [WangTaoTheTonic] indet thing
d86557c [WangTaoTheTonic] some comments amend
43c9392 [WangTao] fix compile error
b39a100 [WangTao] specify # cores for ApplicationMaster
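
For reference, a hedged sketch of the semantics that landed, again via SparkConf with arbitrary example values:

```scala
import org.apache.spark.SparkConf

// The semantics as merged; the core counts are arbitrary examples.
val conf = new SparkConf()
  // All cluster modes; equivalent to passing --driver-cores.
  // In yarn-cluster mode the driver runs inside the AM container.
  .set("spark.driver.cores", "2")
  // yarn-client mode only: cores for the standalone AM.
  .set("spark.yarn.am.cores", "2")
```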