@@ -267,7 +267,11 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
&& Try(JavaUtils.byteStringAsBytes(executorMemory)).getOrElse(-1L) <= 0) {
SparkSubmit.printErrorAndExit("Executor Memory cores must be a positive number")
}
if (executorCores != null && Try(executorCores.toInt).getOrElse(-1) <= 0) {
// Kubernetes mode allows fractional values for spark.executor.cores so bypass this check in
// the Kubernetes mode.
if (!master.startsWith("k8s")
&& executorCores != null
&& Try(executorCores.toInt).getOrElse(-1) <= 0) {
Contributor:

Should this be nested if (...) statements?
That's more readable IMO, but I'm not sure what's considered more idiomatic here.

Member:

If the intent here is just to check that it's positive, then just try to get it as a double and check it. This doesn't require making the logic dependent on a particular resource manager. Of course, it relaxes the condition from what it is now, which would reject input like "1.5". But it would reject it with an exception rather than a graceful exit, so I kind of think that's not what this code intends to catch right now anyway. (It would cause an error pretty quickly in YARN et al. anyway.)

Contributor Author:

Makes sense, done.

SparkSubmit.printErrorAndExit("Executor cores must be a positive number")
}
if (totalExecutorCores != null && Try(totalExecutorCores.toInt).getOrElse(-1) <= 0) {
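As a minimal sketch of the reviewer's suggestion above, the check could parse the value as a double so that both whole and fractional core counts pass validation, without special-casing any cluster manager. This is an illustration written against the surrounding fields (executorCores, SparkSubmit.printErrorAndExit), not the actual follow-up commit:

```scala
import scala.util.Try

// Sketch only: accept any positive numeric value, e.g. "1" or "0.5",
// regardless of which resource manager is in use.
if (executorCores != null && Try(executorCores.toDouble).getOrElse(-1.0) <= 0) {
  SparkSubmit.printErrorAndExit("Executor cores must be a positive number")
}
```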
docs/configuration.md: 3 additions & 2 deletions
@@ -1220,14 +1220,15 @@ Apart from these, the following properties are also available, and may be useful
<tr>
<td><code>spark.executor.cores</code></td>
<td>
1 in YARN mode, all the available cores on the worker in
1 in YARN and Kubernetes modes, all the available cores on the worker in
standalone and Mesos coarse-grained modes.
</td>
<td>
The number of cores to use on each executor.

In standalone and Mesos coarse-grained modes, for more detail, see
<a href="spark-standalone.html#Executors Scheduling">this description</a>.
<a href="spark-standalone.html#Executors Scheduling">this description</a>. In Kubernetes mode,
a fractional value can be used, e.g., 0.1 or 100m.
Member:

100m isn't fractional though?

Contributor Author:

Oh, right. Fixed.

</td>
</tr>
<tr>
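For illustration, a hedged sketch of how a fractional per-executor CPU request might be configured when running against Kubernetes. The master URL and application name below are placeholders; spark.executor.cores is the documented property:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("fractional-cores-example")            // placeholder app name
  .setMaster("k8s://https://<api-server-host>:6443") // placeholder master URL
  .set("spark.executor.cores", "0.5")                // request half a CPU per executor
```

The same value could be passed on the command line with --conf spark.executor.cores=0.5 to spark-submit.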