Commits (29)
f6fdd6a  Spark on Kubernetes - basic scheduler backend (foxish, Sep 15, 2017)
75e31a9  Adding to modules.py and SparkBuild.scala (foxish, Oct 17, 2017)
cf82b21  Exclude from unidoc, update travis (foxish, Oct 17, 2017)
488c535  Address a bunch of style and other comments (foxish, Oct 17, 2017)
82b79a7  Fix some style concerns (foxish, Oct 18, 2017)
c052212  Clean up YARN constants, unit test updates (foxish, Oct 20, 2017)
c565c9f  Couple of more style comments (foxish, Oct 20, 2017)
2fb596d  Address CR comments. (mccheah, Oct 25, 2017)
992acbe  Extract initial executor count to utils class (mccheah, Oct 25, 2017)
b0a5839  Fix scalastyle (mccheah, Oct 25, 2017)
a4f9797  Fix more scalastyle (mccheah, Oct 25, 2017)
2b5dcac  Pin down app ID in tests. Fix test style. (mccheah, Oct 26, 2017)
018f4d8  Address comments. (mccheah, Nov 1, 2017)
4b32134  Various fixes to the scheduler (mccheah, Nov 1, 2017)
6cf4ed7  Address comments (mccheah, Nov 4, 2017)
1f271be  Update fabric8 client version to 3.0.0 (foxish, Nov 13, 2017)
71a971f  Addressed more comments (liyinan926, Nov 13, 2017)
0ab9ca7  One more round of comments (liyinan926, Nov 14, 2017)
7f14b71  Added a comment regarding how failed executor pods are handled (liyinan926, Nov 15, 2017)
7afce3f  Addressed more comments (liyinan926, Nov 21, 2017)
b75b413  Fixed Scala style error (liyinan926, Nov 21, 2017)
3b587b4  Removed unused parameter in parsePrefixedKeyValuePairs (liyinan926, Nov 22, 2017)
cb12fec  Another round of comments (liyinan926, Nov 22, 2017)
ae396cf  Addressed latest comments (liyinan926, Nov 27, 2017)
f8e3249  Addressed comments around licensing on new dependencies (liyinan926, Nov 27, 2017)
a44c29e  Fixed unit tests and made maximum executor lost reason checks configu… (liyinan926, Nov 27, 2017)
4bed817  Removed default value for executor Docker image (liyinan926, Nov 27, 2017)
c386186  Close the executor pod watcher before deleting the executor pods (liyinan926, Nov 27, 2017)
b85cfc4  Addressed more comments (liyinan926, Nov 28, 2017)
Addressed more comments
liyinan926 committed Nov 28, 2017
commit b85cfc4038c8de9340b78d10edf88ab76dd90ba3
docs/configuration.md (2 changes: 1 addition & 1 deletion)
@@ -1397,7 +1397,7 @@ Apart from these, the following properties are also available, and may be useful
  </tr>
  <tr>
    <td><code>spark.scheduler.minRegisteredResourcesRatio</code></td>
-   <td>2.3.0 for KUBERNETES mode; 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode</td>
+   <td>0.8 for KUBERNETES mode; 0.8 for YARN mode; 0.0 for standalone mode and Mesos coarse-grained mode</td>
    <td>
    The minimum ratio of registered resources (registered resources / total expected resources)
    (resources are executors in yarn mode and Kubernetes mode, CPU cores in standalone mode and Mesos coarse-grained
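As a usage sketch (the master URL below is a placeholder, not from this PR), the ratio can still be set explicitly rather than relying on the per-mode default:

    import org.apache.spark.SparkConf

    // Pin the registration ratio explicitly; 0.8 matches the corrected
    // Kubernetes-mode default documented in the table above.
    val conf = new SparkConf()
      .setMaster("k8s://https://kubernetes.example.com:6443") // placeholder API server
      .set("spark.scheduler.minRegisteredResourcesRatio", "0.8")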
@@ -24,16 +24,16 @@ private[spark] object Config extends Logging {

  val KUBERNETES_NAMESPACE =
    ConfigBuilder("spark.kubernetes.namespace")
-     .doc("The namespace that will be used for running the driver and executor pods. When using" +
-       " spark-submit in cluster mode, this can also be passed to spark-submit via the" +
-       " --kubernetes-namespace command line argument.")
+     .doc("The namespace that will be used for running the driver and executor pods. When using " +
+       "spark-submit in cluster mode, this can also be passed to spark-submit via the " +
+       "--kubernetes-namespace command line argument.")
      .stringConf
      .createWithDefault("default")

  val EXECUTOR_DOCKER_IMAGE =
    ConfigBuilder("spark.kubernetes.executor.docker.image")
-     .doc("Docker image to use for the executors. Specify this using the standard Docker tag" +
-       " format.")
+     .doc("Docker image to use for the executors. Specify this using the standard Docker tag " +
+       "format.")
      .stringConf
      .createOptional
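As a usage note, such typed entries are read back through Spark's internal SparkConf accessor; a minimal sketch, assuming Spark-internal scope and a SparkConf instance named sparkConf (both assumptions, not shown in this diff):

    // Sketch only: SparkConf.get(ConfigEntry) is Spark's internal typed accessor.
    val namespace: String = sparkConf.get(KUBERNETES_NAMESPACE)       // "default" unless overridden
    val image: Option[String] = sparkConf.get(EXECUTOR_DOCKER_IMAGE)  // no default, since the entry is createOptional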

@@ -56,10 +56,10 @@ private[spark] object Config extends Logging {

  val KUBERNETES_SERVICE_ACCOUNT_NAME =
    ConfigBuilder(s"$APISERVER_AUTH_DRIVER_CONF_PREFIX.serviceAccountName")
-     .doc("Service account that is used when running the driver pod. The driver pod uses" +
-       " this service account when requesting executor pods from the API server. If specific" +
-       " credentials are given for the driver pod to use, the driver will favor" +
-       " using those credentials instead.")
+     .doc("Service account that is used when running the driver pod. The driver pod uses " +
+       "this service account when requesting executor pods from the API server. If specific " +
+       "credentials are given for the driver pod to use, the driver will favor " +
+       "using those credentials instead.")
      .stringConf
      .createOptional
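For context, the full key is built by string interpolation from the auth prefix; a sketch assuming the prefix constant's value, which this hunk does not show, is "spark.kubernetes.authenticate.driver":

    // Assumption: this is the prefix's value elsewhere in Config.scala.
    val APISERVER_AUTH_DRIVER_CONF_PREFIX = "spark.kubernetes.authenticate.driver"
    val serviceAccountKey = s"$APISERVER_AUTH_DRIVER_CONF_PREFIX.serviceAccountName"
    // => "spark.kubernetes.authenticate.driver.serviceAccountName"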

@@ -68,9 +68,9 @@ private[spark] object Config extends Logging {
  // based on the executor memory.
  val KUBERNETES_EXECUTOR_MEMORY_OVERHEAD =
    ConfigBuilder("spark.kubernetes.executor.memoryOverhead")
-     .doc("The amount of off-heap memory (in megabytes) to be allocated per executor. This" +
-       " is memory that accounts for things like VM overheads, interned strings, other native" +
-       " overheads, etc. This tends to grow with the executor size. (typically 6-10%).")
+     .doc("The amount of off-heap memory (in megabytes) to be allocated per executor. This " +
+       "is memory that accounts for things like VM overheads, interned strings, other native " +
+       "overheads, etc. This tends to grow with the executor size (typically 6-10%).")
      .bytesConf(ByteUnit.MiB)
      .createOptional
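When the option is unset, the overhead in comparable backends is derived from the executor memory; a hedged arithmetic sketch, assuming the YARN-style formula max(10% of executor memory, 384 MiB), which this hunk does not itself show:

    // Assumed formula (carried over from the YARN backend, not from this diff):
    // overhead = max(0.10 * executorMemory, 384 MiB).
    val executorMemoryMiB = 10 * 1024                                  // a 10 GiB executor
    val overheadMiB = math.max((0.10 * executorMemoryMiB).toInt, 384)  // => 1024 MiB, i.e. 10%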
Contributor:
What about driver memory overhead? I see that Mesos does not support it, while YARN does - is it relevant here?

Contributor:
Perfect!


@@ -117,7 +117,7 @@ private[spark] object Config extends Logging {
      .intConf
      .checkValue(value => value > 0, "Maximum attempts of checks of executor lost reason " +
        "must be a positive integer")
-     .createWithDefault(5)
+     .createWithDefault(10)

val KUBERNETES_NODE_SELECTOR_PREFIX = "spark.kubernetes.node.selector."
}
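For illustration, keys under this prefix become node-selector entries on the pods Spark creates; a minimal sketch with a made-up label and value:

    import org.apache.spark.SparkConf

    // "disktype"/"ssd" are hypothetical; any suffix under the prefix is
    // forwarded as a Kubernetes nodeSelector entry on the created pods.
    val conf = new SparkConf()
      .set("spark.kubernetes.node.selector.disktype", "ssd")
    // Resulting pod spec (conceptually): nodeSelector: { disktype: ssd }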
@@ -344,9 +344,9 @@ private[spark] class KubernetesClusterSchedulerBackend(
        podsWithKnownExitReasons.put(pod.getMetadata.getName, executorExitReason)

        if (!disconnectedPodsByExecutorIdPendingRemoval.containsKey(executorId)) {
-         log.warn(s"Executor with id $executorId was not marked as disconnected, but the" +
-           s" watch received an event of type $action for this executor. The executor may" +
-           " have failed to start in the first place and never registered with the driver.")
+         log.warn(s"Executor with id $executorId was not marked as disconnected, but the " +
+           s"watch received an event of type $action for this executor. The executor may " +
+           "have failed to start in the first place and never registered with the driver.")
        }
        disconnectedPodsByExecutorIdPendingRemoval.put(executorId, pod)

@@ -388,8 +388,8 @@ private[spark] class KubernetesClusterSchedulerBackend(
      // container was probably actively killed by the driver.
      if (isPodAlreadyReleased(pod)) {
        ExecutorExited(containerExitStatus, exitCausedByApp = false,
-         s"Container in pod ${pod.getMetadata.getName} exited from explicit termination" +
-           " request.")
+         s"Container in pod ${pod.getMetadata.getName} exited from explicit termination " +
+           "request.")
      } else {
        val containerExitReason = s"Pod ${pod.getMetadata.getName}'s executor container " +
          s"exited with exit status code $containerExitStatus."