
Conversation

@dcoliversun
Contributor

What changes were proposed in this pull request?

This PR documents previously undocumented YARN configurations in the documentation.

Why are the changes needed?

It helps users look up YARN configurations in the documentation instead of reading the source code.

Does this PR introduce any user-facing change?

Yes, more configurations appear in the documentation.

How was this patch tested?

Passes the existing GitHub Actions (GA) checks.

@github-actions github-actions bot added the DOCS label Oct 7, 2022
Contributor Author

@dcoliversun dcoliversun left a comment


cc @HyukjinKwon @dongjoon-hyun
It would be great if you have time to review this PR.

<td>3.3.0</td>
</tr>
<tr>
<td><code>spark.yarn.am.tokenConfRegex</code></td>

private[spark] val AM_TOKEN_CONF_REGEX =
  ConfigBuilder("spark.yarn.am.tokenConfRegex")
    .doc("This config is only supported when Hadoop version is 2.9+ or 3.x (e.g., when using " +
      "the Hadoop 3.x profile). The value of this config is a regex expression used to grep a " +
      "list of config entries from the job's configuration file (e.g., hdfs-site.xml) and send " +
      "to RM, which uses them when renewing delegation tokens. A typical use case of this " +
      "feature is to support delegation tokens in an environment where a YARN cluster needs to " +
      "talk to multiple downstream HDFS clusters, where the YARN RM may not have configs " +
      "(e.g., dfs.nameservices, dfs.ha.namenodes.*, dfs.namenode.rpc-address.*) to connect to " +
      "these clusters. In this scenario, Spark users can specify the config value to be " +
      "'^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$' to parse " +
      "these HDFS configs from the job's local configuration files. This config is very " +
      "similar to 'mapreduce.job.send-token-conf'. Please check YARN-5910 for more details.")
    .version("3.3.0")
    .stringConf
    .createOptional
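For illustration, a user in a multi-downstream-HDFS environment might set this config as below. This is a hedged sketch, not part of the PR; the regex is the example value quoted in the doc string above:

```properties
# spark-defaults.conf (or pass via --conf on spark-submit).
# Forward HDFS nameservice/HA entries from the job's local hdfs-site.xml
# to the YARN RM so it can renew delegation tokens for downstream clusters.
spark.yarn.am.tokenConfRegex  ^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$
```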

<td>0.9.0</td>
</tr>
<tr>
<td><code>spark.yarn.clientLaunchMonitorInterval</code></td>

private[spark] val CLIENT_LAUNCH_MONITOR_INTERVAL =
  ConfigBuilder("spark.yarn.clientLaunchMonitorInterval")
    .doc("Interval between requests for the status of the client mode AM when starting the app.")
    .version("2.3.0")
    .timeConf(TimeUnit.MILLISECONDS)
    .createWithDefaultString("1s")
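Because this is a time config (milliseconds, default "1s"), it accepts Spark's time-suffix strings. A minimal sketch, with an illustrative value not taken from the PR:

```properties
# Poll the client-mode AM status every 500 ms while the app is starting
# (illustrative value; the default is 1s).
spark.yarn.clientLaunchMonitorInterval  500ms
```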

<td>2.3.0</td>
</tr>
<tr>
<td><code>spark.yarn.includeDriverLogsLink</code></td>

private[spark] val CLIENT_INCLUDE_DRIVER_LOGS_LINK =
  ConfigBuilder("spark.yarn.includeDriverLogsLink")
    .doc("In cluster mode, whether the client application report includes links to the driver "
      + "container's logs. This requires polling the ResourceManager's REST API, so it "
      + "places some additional load on the RM.")
    .version("3.1.0")
    .booleanConf
    .createWithDefault(false)

<td>3.1.0</td>
</tr>
<tr>
<td><code>spark.yarn.unmanagedAM.enabled</code></td>

private[spark] val YARN_UNMANAGED_AM = ConfigBuilder("spark.yarn.unmanagedAM.enabled")
  .doc("In client mode, whether to launch the Application Master service as part of the client " +
    "using unmanaged am.")
  .version("3.0.0")
  .booleanConf
  .createWithDefault(false)
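A hedged sketch of enabling this (values illustrative, not from the PR): with an unmanaged AM, the Application Master service runs inside the client process rather than in a separately allocated YARN container.

```properties
# Client mode only; default is false.
spark.yarn.unmanagedAM.enabled  true
```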

@AmplabJenkins

Can one of the admins verify this patch?

@srowen
Member

srowen commented Oct 9, 2022

Merged to master

@srowen srowen closed this in 51e8ca3 Oct 9, 2022
@dcoliversun dcoliversun deleted the SPARK-40699 branch October 10, 2022 01:25