[SPARK-40699][DOCS] Supplement undocumented yarn configurations in documentation #38150
@@ -486,6 +486,20 @@ To use a custom metrics.properties for the application master and executors, upd
  </td>
  <td>3.3.0</td>
</tr>
<tr>
  <td><code>spark.yarn.am.tokenConfRegex</code></td>
  <td>(none)</td>
  <td>
    This config is only supported when Hadoop version is 2.9+ or 3.x (e.g., when using the Hadoop 3.x profile).
    The value of this config is a regular expression used to grep a list of config entries from the job's configuration file (e.g., hdfs-site.xml)
    and send them to the RM, which uses them when renewing delegation tokens. A typical use case of this feature is to support delegation
    tokens in an environment where a YARN cluster needs to talk to multiple downstream HDFS clusters, and the YARN RM may not have the configs
    (e.g., dfs.nameservices, dfs.ha.namenodes.*, dfs.namenode.rpc-address.*) needed to connect to these clusters.
    In this scenario, Spark users can set the config value to <code>^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$</code> to parse
    these HDFS configs from the job's local configuration files. This config is very similar to <code>mapreduce.job.send-token-conf</code>. Please check YARN-5910 for more details.
  </td>
  <td>3.3.0</td>
</tr>
<tr>
  <td><code>spark.yarn.executor.failuresValidityInterval</code></td>
  <td>(none)</td>
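For context only (not part of this diff), here is a minimal sketch of how `spark.yarn.am.tokenConfRegex` could be set from application code. The regex value mirrors the example in the description above; the object name, application name, and job body are placeholders.

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: ship the matching HDFS entries (nameservices, HA namenodes,
// RPC addresses) from the job's local hdfs-site.xml to the YARN RM so it can
// renew delegation tokens for the remote clusters (see YARN-5910).
object TokenConfRegexExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("token-conf-regex-example") // placeholder name
      .master("yarn")                      // assumes a YARN deployment
      .config("spark.yarn.am.tokenConfRegex",
        "^dfs.nameservices$|^dfs.namenode.rpc-address.*$|^dfs.ha.namenodes.*$")
      .getOrCreate()

    // ... job logic that reads from the downstream HDFS clusters would go here ...

    spark.stop()
  }
}
```

Because the YARN client reads this value when the application is submitted, with cluster deploy mode the setting would normally be passed via `--conf` on `spark-submit` rather than programmatically.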
@@ -632,6 +646,33 @@ To use a custom metrics.properties for the application master and executors, upd
  </td>
  <td>0.9.0</td>
</tr>
<tr>
  <td><code>spark.yarn.clientLaunchMonitorInterval</code></td>
Contributor (Author) comment: spark/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala, lines 228 to 233 in c4cd193
  <td><code>1s</code></td>
  <td>
    Interval between status requests for the client mode AM when starting the app.
  </td>
  <td>2.3.0</td>
</tr>
<tr>
  <td><code>spark.yarn.includeDriverLogsLink</code></td>
Contributor (Author) comment: spark/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala, lines 235 to 242 in c4cd193
  <td><code>false</code></td>
  <td>
    In cluster mode, whether the client application report includes links to the driver
    container's logs. This requires polling the ResourceManager's REST API, so it
    places some additional load on the RM.
  </td>
  <td>3.1.0</td>
</tr>
<tr>
  <td><code>spark.yarn.unmanagedAM.enabled</code></td>
Contributor (Author) comment: spark/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala, lines 346 to 351 in c4cd193
  <td><code>false</code></td>
  <td>
    In client mode, whether to launch the Application Master service as part of the client
    using an unmanaged AM.
  </td>
  <td>3.0.0</td>
</tr>
</table>
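As a quick, hedged illustration (again not part of this PR's diff), the three options documented above could be set together on a `SparkConf`; the values are arbitrary, and the client-mode/cluster-mode caveats from the descriptions still apply.

```scala
import org.apache.spark.SparkConf

// Sketch only: illustrative values, not recommendations. In practice these
// options are usually passed with --conf at spark-submit time.
val conf = new SparkConf()
  // Poll the client-mode AM for status every 2 seconds while the app starts
  // (the documented default is 1s).
  .set("spark.yarn.clientLaunchMonitorInterval", "2s")
  // Cluster mode: include driver container log links in the client application
  // report, at the cost of extra ResourceManager REST API calls.
  .set("spark.yarn.includeDriverLogsLink", "true")
  // Client mode: run the Application Master as an unmanaged AM inside the
  // client process.
  .set("spark.yarn.unmanagedAM.enabled", "true")
```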
#### Available patterns for SHS custom executor log URL
Comment: spark/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/config.scala, lines 81 to 96 in c4cd193