Closed pull request. Changes from 1 commit.

Commits (29)
f6fdd6a
Spark on Kubernetes - basic scheduler backend
foxish Sep 15, 2017
75e31a9
Adding to modules.py and SparkBuild.scala
foxish Oct 17, 2017
cf82b21
Exclude from unidoc, update travis
foxish Oct 17, 2017
488c535
Address a bunch of style and other comments
foxish Oct 17, 2017
82b79a7
Fix some style concerns
foxish Oct 18, 2017
c052212
Clean up YARN constants, unit test updates
foxish Oct 20, 2017
c565c9f
Couple of more style comments
foxish Oct 20, 2017
2fb596d
Address CR comments.
mccheah Oct 25, 2017
992acbe
Extract initial executor count to utils class
mccheah Oct 25, 2017
b0a5839
Fix scalastyle
mccheah Oct 25, 2017
a4f9797
Fix more scalastyle
mccheah Oct 25, 2017
2b5dcac
Pin down app ID in tests. Fix test style.
mccheah Oct 26, 2017
018f4d8
Address comments.
mccheah Nov 1, 2017
4b32134
Various fixes to the scheduler
mccheah Nov 1, 2017
6cf4ed7
Address comments
mccheah Nov 4, 2017
1f271be
Update fabric8 client version to 3.0.0
foxish Nov 13, 2017
71a971f
Addressed more comments
liyinan926 Nov 13, 2017
0ab9ca7
One more round of comments
liyinan926 Nov 14, 2017
7f14b71
Added a comment regarding how failed executor pods are handled
liyinan926 Nov 15, 2017
7afce3f
Addressed more comments
liyinan926 Nov 21, 2017
b75b413
Fixed Scala style error
liyinan926 Nov 21, 2017
3b587b4
Removed unused parameter in parsePrefixedKeyValuePairs
liyinan926 Nov 22, 2017
cb12fec
Another round of comments
liyinan926 Nov 22, 2017
ae396cf
Addressed latest comments
liyinan926 Nov 27, 2017
f8e3249
Addressed comments around licensing on new dependencies
liyinan926 Nov 27, 2017
a44c29e
Fixed unit tests and made maximum executor lost reason checks configu…
liyinan926 Nov 27, 2017
4bed817
Removed default value for executor Docker image
liyinan926 Nov 27, 2017
c386186
Close the executor pod watcher before deleting the executor pods
liyinan926 Nov 27, 2017
b85cfc4
Addressed more comments
liyinan926 Nov 28, 2017
Various fixes to the scheduler
- Move Kubernetes client calls out of synchronized blocks to prevent
locking with HTTP connection lag
- Fix a bug where pods that fail to launch through the API are not
retried
- Remove the map from executor pod name to executor ID by using the
Pod's labels to get the same information without having to track extra
state.
mccheah committed Nov 1, 2017
commit 4b3213422e6e67b11de7b627ad46d4031043be0e
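A minimal sketch (not a verbatim excerpt from this commit) of the locking pattern the first bullet of the commit message describes: mutate the shared bookkeeping map while holding the lock, then make the potentially slow Kubernetes HTTP call only after the lock is released. The class and method names here are illustrative.

import scala.collection.mutable

import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.client.KubernetesClient

class ExecutorCleanupSketch(kubernetesClient: KubernetesClient) {
  private val RUNNING_EXECUTOR_PODS_LOCK = new Object
  private val runningExecutorsToPods = new mutable.HashMap[String, Pod]

  def removeExecutor(executorId: String): Unit = {
    // Only in-memory bookkeeping happens under the lock...
    val maybePodToDelete = RUNNING_EXECUTOR_PODS_LOCK.synchronized {
      runningExecutorsToPods.remove(executorId)
    }
    // ...while the API server round trip happens outside it, so a slow HTTP
    // connection cannot block other threads waiting on the lock.
    maybePodToDelete.foreach(pod => kubernetesClient.pods().delete(pod))
  }
}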
@@ -138,14 +138,14 @@ private[spark] class ExecutorPodFactoryImpl(sparkConf: SparkConf)
new EnvVarBuilder().withName(s"$ENV_JAVA_OPT_PREFIX$index").withValue(opt).build()
}
}.getOrElse(Seq.empty[EnvVar])
Contributor:
How is this getting used? I see it getting set, but not used anywhere.

Contributor:
This is used in the executor Docker file included in #19717.

Contributor:
Thanks, somehow it did not show up in my searches.

val executorEnv = Seq(
val executorEnv = (Seq(
(ENV_EXECUTOR_PORT, executorPort.toString),
(ENV_DRIVER_URL, driverUrl),
// Executor backend expects integral value for executor cores, so round it up to an int.
(ENV_EXECUTOR_CORES, math.ceil(executorCores).toInt.toString),
(ENV_EXECUTOR_MEMORY, executorMemoryString),
(ENV_APPLICATION_ID, applicationId),
(ENV_EXECUTOR_ID, executorId))
(ENV_EXECUTOR_ID, executorId)) ++ executorEnvs)
.map(env => new EnvVarBuilder()
.withName(env._1)
.withValue(env._2)
@@ -50,12 +50,8 @@ private[spark] class KubernetesClusterSchedulerBackend(

private val EXECUTOR_ID_COUNTER = new AtomicLong(0L)
private val RUNNING_EXECUTOR_PODS_LOCK = new Object
// Indexed by executor IDs
@GuardedBy("RUNNING_EXECUTOR_PODS_LOCK")
private val runningExecutorsToPods = new mutable.HashMap[String, Pod]
Contributor:
nit: you could use GuardedBy instead of a comment.

// Indexed by executor pod names
@GuardedBy("RUNNING_EXECUTOR_PODS_LOCK")
private val runningPodsToExecutors = new mutable.HashMap[String, String]
private val executorPodsByIPs = new ConcurrentHashMap[String, Pod]()
private val podsWithKnownExitReasons = new ConcurrentHashMap[String, ExecutorExited]()
private val disconnectedPodsByExecutorIdPendingRemoval = new ConcurrentHashMap[String, Pod]()
@@ -117,7 +113,6 @@ private[spark] class KubernetesClusterSchedulerBackend(
} else if (currentTotalExpectedExecutors <= runningExecutorsToPods.size) {
logDebug("Maximum allowed executor limit reached. Not scaling up further.")
} else {
val nodeToLocalTaskCount = getNodesWithLocalTaskCounts
for (i <- 0 until math.min(
currentTotalExpectedExecutors - runningExecutorsToPods.size, podAllocationSize)) {
val executorId = EXECUTOR_ID_COUNTER.incrementAndGet().toString
@@ -127,7 +122,16 @@
driverUrl,
conf.getExecutorEnv,
driverPod,
nodeToLocalTaskCount)
currentNodeToLocalTaskCount)
require(executorPod.getMetadata.getLabels.containsKey(SPARK_EXECUTOR_ID_LABEL),
Contributor:
An alternative to enforcing this contract here is to enforce the contract in some other module. E.g. what about having a trait like so:

trait ExecutorIdLabelContract {
  // Ensures that the returned pod has the executor ID label if it doesn't exist
  def assignRequiredLabels(executorPod: Pod, executorId: String): Pod
  // Gets the executor ID from a pod using the label that is supposed to have been assigned above
  def getExecutorId(executorPod: Pod): String
}

which places the contract in one cohesive place. But for now that seems like over-architecting?
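
A hedged sketch of how the suggested contract could be backed by fabric8's builders. The object name and the label key value are assumptions for illustration only; they are not part of this change.

import io.fabric8.kubernetes.api.model.{Pod, PodBuilder}

object ExecutorIdLabels {
  // Assumed label key for illustration; the PR defines the real constant in
  // its constants file.
  val SPARK_EXECUTOR_ID_LABEL = "spark-exec-id"

  // Returns a copy of the pod that carries the executor ID label.
  def assignRequiredLabels(executorPod: Pod, executorId: String): Pod =
    new PodBuilder(executorPod)
      .editOrNewMetadata()
        .addToLabels(SPARK_EXECUTOR_ID_LABEL, executorId)
      .endMetadata()
      .build()

  // Reads the executor ID back from the label assigned above (null if absent).
  def getExecutorId(executorPod: Pod): String =
    Option(executorPod.getMetadata.getLabels)
      .map(_.get(SPARK_EXECUTOR_ID_LABEL))
      .orNull
}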

Contributor Author:
This lgtm as is. Getting the info from the pod itself makes sense.

s"Illegal internal state for pod with name ${executorPod.getMetadata.getName} - all" +
s" executor pods must contain the label $SPARK_EXECUTOR_ID_LABEL.")
val resolvedExecutorIdLabel = executorPod.getMetadata.getLabels.get(
SPARK_EXECUTOR_ID_LABEL)
require(resolvedExecutorIdLabel == executorId,
Contributor:
What's the behavior you want when these checks trigger?

Right now this will just stop this periodic task (which is the behavior of ScheduledExecutorService when a task throws an exception). It won't stop the application or do anything else, so you'll probably end up with a stuck app or some other weird state.
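
A self-contained sketch (not from this PR) of the behavior described above: once a task submitted to a ScheduledExecutorService throws, all subsequent executions are suppressed, so a failed require() in the allocator would silently stop the periodic task rather than failing the application.

import java.util.concurrent.{Executors, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

object ScheduledTaskFailureDemo {
  def main(args: Array[String]): Unit = {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    val runs = new AtomicInteger(0)
    scheduler.scheduleWithFixedDelay(new Runnable {
      override def run(): Unit = {
        val n = runs.incrementAndGet()
        println(s"run $n")
        // The third run throws; the scheduler quietly suppresses further runs.
        require(n < 3, "simulated broken invariant")
      }
    }, 0, 100, TimeUnit.MILLISECONDS)

    Thread.sleep(1000)
    println(s"total runs observed: ${runs.get()}") // prints 3, not ~10
    scheduler.shutdownNow()
  }
}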

Contributor:
I don't really see a possibility for a successfully constructed executor pod to not have this label with the right value. There are already checks in ExecutorPodFactory that guard against users setting this label. So I don't think the checks here are really necessary. @foxish to confirm.

Contributor Author:
+1 to @liyinan926's comment. Since we're directly getting the pod from ExecutorPodFactory, the guard is superfluous and basically dead code. @mccheah PTAL.

Contributor:
Removed the checks.

Contributor:
I disagree with removing the checks here. The contract that the labels are present is in ExecutorPodFactory, which is further removed from this class. It would be good for the contract to be explicit here, because we can then catch errors in integration tests and diagnose them quickly.

Contributor:
It's future code changes that these requirements are meant to protect against, so that the integration tests fail if the contract is broken.

Contributor:
If this is purely for preventing future changes from potentially breaking it, why can't we verify that the contract holds in integration tests, so the integration tests will fail if the contract is indeed broken?

Contributor:
It depends, I suppose, on how clearly and loudly the integration tests will fail if the contract is not satisfied. I like having the require checks here because we fail fast and explicitly.

Contributor:
I agree that this depends on how clearly integration tests fail on this. But I still don't think this is a good place for this kind of check. Also, as @vanzin pointed out, failing the checks here only stops the scheduled task without stopping the driver/app, so it really fails neither fast nor explicitly in real cases.

Contributor:
Fair enough. I'd like to consider how we can make the contract explicit and more connected, but this is fine for now.

s"Illegal internal state for pod with name ${executorPod.getMetadata.getName} - all" +
s" executor pods must map the label with key ${SPARK_EXECUTOR_ID_LABEL} to the" +
s" executor's ID. This label mapped instead to: $resolvedExecutorIdLabel.")
executorsToAllocate(executorId) = executorPod
logInfo(
s"Requesting a new executor, total executors is now ${runningExecutorsToPods.size}")
@@ -143,8 +147,6 @@
case (executorId, attemptedAllocatedExecutor) =>
attemptedAllocatedExecutor.map { successfullyAllocatedExecutor =>
runningExecutorsToPods.put(executorId, successfullyAllocatedExecutor)
runningPodsToExecutors.put(
successfullyAllocatedExecutor.getMetadata.getName, executorId)
}
}
}
@@ -166,11 +168,12 @@
// We keep around executors that have exit conditions caused by the application. This
// allows them to be debugged later on. Otherwise, mark them as to be deleted from the
// the API server.
if (!executorExited.exitCausedByApp) {
if (executorExited.exitCausedByApp) {
Contributor:
It's not very clear to me when these executors will go away. Will the app eventually clean them up? Will the user have to manually do something?

Contributor:
Executor pods are deleted when stop gets called.

foxish (Contributor Author), Nov 14, 2017:
There are 2 different cases here. If the application succeeds, then as @liyinan926 said, we'll clean up all executors in stop. However, if for some reason the application fails, then we will keep the failed executors and the failed driver around for debugging - this is the intended behavior here.

There are owner references between the driver pod and executors, such that the driver pod is the root of the ownership graph of all resources associated with a Spark application. Deleting the driver from the API therefore automatically removes all executors, so post-inspection, the expectation is that the user will delete the driver to clean up the application entirely. (We will include this in the documentation.)

In the future (not in the Spark 2.3 timeframe), the root of the owner reference graph may be a CRD representing a Spark application, which will provide more transparency.
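
A sketch of the Kubernetes mechanism described above, assuming fabric8's model builders: attaching an owner reference that points at the driver pod means deleting the driver cascades, via Kubernetes garbage collection, to every executor pod that carries the reference. The helper and object names are illustrative; the exact wiring in the PR may differ.

import io.fabric8.kubernetes.api.model.{OwnerReferenceBuilder, Pod, PodBuilder}

object DriverOwnedExecutors {
  def withDriverOwnerReference(driverPod: Pod, executorPod: Pod): Pod = {
    val driverOwnerReference = new OwnerReferenceBuilder()
      .withApiVersion(driverPod.getApiVersion)
      .withKind(driverPod.getKind)
      .withName(driverPod.getMetadata.getName)
      .withUid(driverPod.getMetadata.getUid)
      .withController(true)
      .build()
    // Deleting the driver pod now removes this executor pod as well.
    new PodBuilder(executorPod)
      .editOrNewMetadata()
        .addToOwnerReferences(driverOwnerReference)
      .endMetadata()
      .build()
  }
}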

Contributor:
> we will keep the failed executors and the failed driver around for debugging

I understand the desire to make debugging easier, but isn't that bad, especially for long running applications? Won't you potentially end up with a bunch of containers holding on to CPU and memory, but no executor actually running on them?

Contributor:
Actually no, the failed driver/executor containers have already finished running and won't take CPU/memory resources. The pod objects of failed drivers/executors, however, are not deleted from the API server, so users can use kubectl logs <pod name> or kubectl describe pod <pod name> to check what's going on with the failed pods. So they will take etcd storage but not CPU/memory resources.

Contributor:
Ah, makes sense. It would be good to make the comment explain that.

Contributor:
Done.

Contributor:
Shouldn't this be done only when in debug mode?
We will keep references to failed executors, and for long-running jobs this can effectively be a memory leak if executors sporadically fail over time.

Contributor:
We delete the in-memory executor object from runningExecutorsToPods in both cases. If executorExited.exitCausedByApp is true, we just don't delete the executor object from the Kubernetes API server. As explained above, failed/terminated executor pods don't take CPU/memory resources, although they are kept around in etcd so users can check what's going on through the kubectl logs and kubectl describe pod commands. Hope this addresses your concern.

Contributor:
I was under the impression that the client API has overhead to maintain the state of the failed executors (not the scheduler code, where it is removed from). If the references are maintained external to the Spark application, and within Kubernetes infrastructure, this should be fine.

Contributor:
Yes, the executor pod objects are persisted in etcd until they are explicitly deleted by the user. So in that sense, such references are external to the application and live within Kubernetes.

Contributor:
Sounds great; as long as Spark driver memory is not impacted by having these around, it should be fine.
Thanks for clarifying!

logInfo(s"Executor $executorId exited because of the application.")
deleteExecutorFromDataStructures(executorId)
} else {
logInfo(s"Executor $executorId failed because of a framework error.")
deleteExecutorFromClusterAndDataStructures(executorId)
} else {
logInfo(s"Executor $executorId exited because of the application.")
}
}
}
@@ -187,19 +190,20 @@
}

def deleteExecutorFromClusterAndDataStructures(executorId: String): Unit = {
deleteExecutorFromDataStructures(executorId)
.foreach(pod => kubernetesClient.pods().delete(pod))
Contributor:
.foreach { pod => ... } or .foreach(...delete(_))

Contributor:
Done.

}

def deleteExecutorFromDataStructures(executorId: String): Option[Pod] = {
disconnectedPodsByExecutorIdPendingRemoval.remove(executorId)
executorReasonCheckAttemptCounts -= executorId
Contributor:
Remove from podsWithKnownExitReasons?

podsWithKnownExitReasons -= executorId
val maybeExecutorPodToDelete = RUNNING_EXECUTOR_PODS_LOCK.synchronized {
runningExecutorsToPods.remove(executorId).map { pod =>
runningPodsToExecutors.remove(pod.getMetadata.getName)
pod
}.orElse {
podsWithKnownExitReasons.remove(executorId)
RUNNING_EXECUTOR_PODS_LOCK.synchronized {
runningExecutorsToPods.remove(executorId).orElse {
logWarning(s"Unable to remove pod for unknown executor $executorId")
None
}
}
maybeExecutorPodToDelete.foreach(pod => kubernetesClient.pods().delete(pod))
}
}

@@ -231,14 +235,10 @@ private[spark] class KubernetesClusterSchedulerBackend(
super.stop()

// then delete the executor pods
// TODO investigate why Utils.tryLogNonFatalError() doesn't work in this context.
// When using Utils.tryLogNonFatalError some of the code fails but without any logs or
// indication as to why.
Utils.tryLogNonFatalError {
val executorPodsToDelete = RUNNING_EXECUTOR_PODS_LOCK.synchronized {
val runningExecutorPodsCopy = Seq(runningExecutorsToPods.values.toSeq: _*)
runningExecutorsToPods.clear()
runningPodsToExecutors.clear()
runningExecutorPodsCopy
}
kubernetesClient.pods().delete(executorPodsToDelete: _*)
@@ -288,7 +288,6 @@ private[spark] class KubernetesClusterSchedulerBackend(
val maybeRemovedExecutor = runningExecutorsToPods.remove(executor)
maybeRemovedExecutor.foreach { executorPod =>
disconnectedPodsByExecutorIdPendingRemoval.put(executor, executorPod)
runningPodsToExecutors.remove(executorPod.getMetadata.getName)
podsToDelete += executorPod
}
if (maybeRemovedExecutor.isEmpty) {
@@ -300,11 +299,6 @@
true
}

def getExecutorPodByIP(podIP: String): Option[Pod] = {
val pod = executorPodsByIPs.get(podIP)
Option(pod)
}

private class ExecutorPodsWatcher extends Watcher[Pod] {

private val DEFAULT_CONTAINER_FAILURE_EXIT_STATUS = -1
@@ -316,21 +310,33 @@
val clusterNodeName = pod.getSpec.getNodeName
logInfo(s"Executor pod $pod ready, launched at $clusterNodeName as IP $podIP.")
executorPodsByIPs.put(podIP, pod)
} else if ((action == Action.MODIFIED && pod.getMetadata.getDeletionTimestamp != null) ||
action == Action.DELETED || action == Action.ERROR) {
} else if (action == Action.DELETED || action == Action.ERROR) {
val executorId = pod.getMetadata.getLabels.get(SPARK_EXECUTOR_ID_LABEL)
require(executorId != null, "Unexpected pod metadata; expected all executor pods" +
Contributor:
nit: why not have a util function to fetch the executorId? Then we can move all the check logic there.
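
A hypothetical helper along the lines of this suggestion (not part of the merged diff), centralizing the label lookup and its validation in one place:

import io.fabric8.kubernetes.api.model.Pod

// SPARK_EXECUTOR_ID_LABEL is the label key already defined in the PR's constants.
private def executorIdFromPod(pod: Pod): String = {
  val executorId = pod.getMetadata.getLabels.get(SPARK_EXECUTOR_ID_LABEL)
  require(executorId != null,
    s"Expected pod ${pod.getMetadata.getName} to carry the label $SPARK_EXECUTOR_ID_LABEL.")
  executorId
}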

Contributor:
Done.

s" to have label $SPARK_EXECUTOR_ID_LABEL.")
val podName = pod.getMetadata.getName
val podIP = pod.getStatus.getPodIP
logDebug(s"Executor pod $podName at IP $podIP was at $action.")
if (podIP != null) {
executorPodsByIPs.remove(podIP)
}
if (action == Action.ERROR) {
val executorExitReason = if (action == Action.ERROR) {
logWarning(s"Received pod $podName exited event. Reason: " + pod.getStatus.getReason)
handleErroredPod(pod)
executorExitReasonOnError(pod)
} else if (action == Action.DELETED) {
logWarning(s"Received delete pod $podName event. Reason: " + pod.getStatus.getReason)
handleDeletedPod(pod)
executorExitReasonOnDelete(pod)
} else {
throw new IllegalStateException(
s"Unknown action that should only be DELETED or ERROR: $action")
}
podsWithKnownExitReasons.put(pod.getMetadata.getName, executorExitReason)
if (!disconnectedPodsByExecutorIdPendingRemoval.containsKey(executorId)) {
log.warn(s"Executor with id $executorId was not marked as disconnected, but the" +
s" watch received an event of type $action for this executor. The executor may" +
s" have failed to start in the first place and never registered with the driver.")
}
disconnectedPodsByExecutorIdPendingRemoval.put(executorId, pod)
}
}

@@ -356,15 +362,16 @@
}

def isPodAlreadyReleased(pod: Pod): Boolean = {
val executorId = pod.getMetadata.getLabels.get(SPARK_EXECUTOR_ID_LABEL)
RUNNING_EXECUTOR_PODS_LOCK.synchronized {
!runningPodsToExecutors.contains(pod.getMetadata.getName)
!runningExecutorsToPods.contains(executorId)
}
}

def handleErroredPod(pod: Pod): Unit = {
def executorExitReasonOnError(pod: Pod): ExecutorExited = {
val containerExitStatus = getExecutorExitStatus(pod)
// container was probably actively killed by the driver.
val exitReason = if (isPodAlreadyReleased(pod)) {
if (isPodAlreadyReleased(pod)) {
ExecutorExited(containerExitStatus, exitCausedByApp = false,
s"Container in pod ${pod.getMetadata.getName} exited from explicit termination" +
" request.")
Expand All @@ -373,18 +380,16 @@ private[spark] class KubernetesClusterSchedulerBackend(
s"exited with exit status code $containerExitStatus."
ExecutorExited(containerExitStatus, exitCausedByApp = true, containerExitReason)
}
podsWithKnownExitReasons.put(pod.getMetadata.getName, exitReason)
}

def handleDeletedPod(pod: Pod): Unit = {
def executorExitReasonOnDelete(pod: Pod): ExecutorExited = {
val exitMessage = if (isPodAlreadyReleased(pod)) {
s"Container in pod ${pod.getMetadata.getName} exited from explicit termination request."
} else {
s"Pod ${pod.getMetadata.getName} deleted or lost."
}
val exitReason = ExecutorExited(
ExecutorExited(
getExecutorExitStatus(pod), exitCausedByApp = false, exitMessage)
Contributor:
Fits in previous line.

Contributor:
Done.

podsWithKnownExitReasons.put(pod.getMetadata.getName, exitReason)
}
}

@@ -25,7 +25,7 @@ import org.scalatest.{BeforeAndAfter, BeforeAndAfterEach}

import org.apache.spark.{SparkConf, SparkFunSuite}
import org.apache.spark.deploy.k8s.config._
import org.apache.spark.deploy.k8s.constants
import org.apache.spark.deploy.k8s.constants._

class ExecutorPodFactorySuite extends SparkFunSuite with BeforeAndAfter with BeforeAndAfterEach {
private val driverPodName: String = "driver-pod"
@@ -64,6 +64,7 @@ class ExecutorPodFactorySuite extends SparkFunSuite with BeforeAndAfter with Bef
// The executor pod name and default labels.
assert(executor.getMetadata.getName === s"$executorPrefix-exec-1")
assert(executor.getMetadata.getLabels.size() === 3)
assert(executor.getMetadata.getLabels.get(SPARK_EXECUTOR_ID_LABEL) === "1")

// There is exactly 1 container with no volume mounts and default memory limits.
// Default memory limit is 1024M + 384M (minimum overhead constant).
@@ -120,14 +121,13 @@ class ExecutorPodFactorySuite extends SparkFunSuite with BeforeAndAfter with Bef
// Check that the expected environment variables are present.
private def checkEnv(executor: Pod, additionalEnvVars: Map[String, String]): Unit = {
val defaultEnvs = Map(
constants.ENV_EXECUTOR_ID -> "1",
constants.ENV_DRIVER_URL -> "dummy",
constants.ENV_EXECUTOR_CORES -> "1",
constants.ENV_EXECUTOR_MEMORY -> "1g",
constants.ENV_APPLICATION_ID -> "dummy",
constants.ENV_MOUNTED_CLASSPATH -> "/var/spark-data/spark-jars/*",
constants.ENV_EXECUTOR_POD_IP -> null,
constants.ENV_EXECUTOR_PORT -> "10000") ++ additionalEnvVars
ENV_EXECUTOR_ID -> "1",
ENV_DRIVER_URL -> "dummy",
ENV_EXECUTOR_CORES -> "1",
ENV_EXECUTOR_MEMORY -> "1g",
ENV_APPLICATION_ID -> "dummy",
ENV_EXECUTOR_POD_IP -> null,
ENV_EXECUTOR_PORT -> "10000") ++ additionalEnvVars

assert(executor.getSpec.getContainers.size() === 1)
assert(executor.getSpec.getContainers.get(0).getEnv().size() === defaultEnvs.size)