Merged

Changes from 1 commit

100 commits
86db9b2
[SPARK-22833][IMPROVEMENT] in SparkHive Scala Examples
chetkhatri Dec 23, 2017
ea2642e
[SPARK-20694][EXAMPLES] Update SQLDataSourceExample.scala
CNRui Dec 23, 2017
f6084a8
[HOTFIX] Fix Scala style checks
HyukjinKwon Dec 23, 2017
aeb45df
[SPARK-22844][R] Adds date_trunc in R API
HyukjinKwon Dec 23, 2017
1219d7a
[SPARK-22889][SPARKR] Set overwrite=T when install SparkR in tests
shivaram Dec 23, 2017
0bf1a74
[SPARK-22465][CORE] Add a safety-check to RDD defaultPartitioner
Dec 24, 2017
fba0313
[SPARK-22707][ML] Optimize CrossValidator memory occupation by models…
WeichenXu123 Dec 25, 2017
33ae243
[SPARK-22893][SQL] Unified the data type mismatch message
wangyum Dec 25, 2017
12d20dd
[SPARK-22874][PYSPARK][SQL][FOLLOW-UP] Modify error messages to show …
ueshin Dec 25, 2017
be03d3a
[SPARK-22893][SQL][HOTFIX] Fix a error message of VersionsSuite
dongjoon-hyun Dec 26, 2017
0e68330
[SPARK-20168][DSTREAM] Add changes to use kinesis fetches from specif…
yashs360 Dec 26, 2017
eb386be
[SPARK-21552][SQL] Add DecimalType support to ArrowWriter.
ueshin Dec 26, 2017
ff48b1b
[SPARK-22901][PYTHON] Add deterministic flag to pyspark UDF
mgaido91 Dec 26, 2017
9348e68
[SPARK-22833][EXAMPLE] Improvement SparkHive Scala Examples
cloud-fan Dec 26, 2017
91d1b30
[SPARK-22894][SQL] DateTimeOperations should accept SQL like string type
wangyum Dec 26, 2017
6674acd
[SPARK-22846][SQL] Fix table owner is null when creating table throug…
Dec 27, 2017
b8bfce5
[SPARK-22324][SQL][PYTHON][FOLLOW-UP] Update setup.py file.
ueshin Dec 27, 2017
774715d
[SPARK-22904][SQL] Add tests for decimal operations and string casts
mgaido91 Dec 27, 2017
753793b
[SPARK-22899][ML][STREAMING] Fix OneVsRestModel transform on streamin…
WeichenXu123 Dec 28, 2017
5683984
[SPARK-18016][SQL][FOLLOW-UP] Code Generation: Constant Pool Limit - …
kiszk Dec 28, 2017
32ec269
[SPARK-22909][SS] Move Structured Streaming v2 APIs to streaming folder
zsxwing Dec 28, 2017
171f6dd
[SPARK-22757][KUBERNETES] Enable use of remote dependencies (http, s3…
liyinan926 Dec 28, 2017
ded6d27
[SPARK-22648][K8S] Add documentation covering init containers and sec…
liyinan926 Dec 28, 2017
76e8a1d
[SPARK-22843][R] Adds localCheckpoint in R
HyukjinKwon Dec 28, 2017
1eebfbe
[SPARK-21208][R] Adds setLocalProperty and getLocalProperty in R
HyukjinKwon Dec 28, 2017
755f2f5
[SPARK-20392][SQL][FOLLOWUP] should not add extra AnalysisBarrier
cloud-fan Dec 28, 2017
2877817
[SPARK-22917][SQL] Should not try to generate histogram for empty/nul…
Dec 28, 2017
5536f31
[MINOR][BUILD] Fix Java linter errors
dongjoon-hyun Dec 28, 2017
8f6d573
[SPARK-22875][BUILD] Assembly build fails for a high user id
gerashegalov Dec 28, 2017
9c21ece
[SPARK-22836][UI] Show driver logs in UI when available.
Dec 28, 2017
613b71a
[SPARK-22890][TEST] Basic tests for DateTimeOperations
wangyum Dec 28, 2017
cfcd746
[SPARK-11035][CORE] Add in-process Spark app launcher.
Dec 28, 2017
ffe6fd7
[SPARK-22818][SQL] csv escape of quote escape
Dec 28, 2017
c745730
[SPARK-22905][MLLIB] Fix ChiSqSelectorModel save implementation
WeichenXu123 Dec 29, 2017
796e48c
[SPARK-22313][PYTHON][FOLLOWUP] Explicitly import warnings namespace …
HyukjinKwon Dec 29, 2017
67ea11e
[SPARK-22891][SQL] Make hive client creation thread safe
Dec 29, 2017
d4f0b1d
[SPARK-22834][SQL] Make insertion commands have real children to fix …
gengliangwang Dec 29, 2017
224375c
[SPARK-22892][SQL] Simplify some estimation logic by using double ins…
Dec 29, 2017
cc30ef8
[SPARK-22916][SQL] shouldn't bias towards build right if user does no…
Dec 29, 2017
fcf66a3
[SPARK-21657][SQL] optimize explode quadratic memory consumpation
uzadude Dec 29, 2017
dbd492b
[SPARK-22921][PROJECT-INFRA] Choices for Assigning Jira on Merge
squito Dec 29, 2017
11a849b
[SPARK-22370][SQL][PYSPARK][FOLLOW-UP] Fix a test failure when xmlrun…
ueshin Dec 29, 2017
8b49704
[SPARK-20654][CORE] Add config to limit disk usage of the history ser…
Dec 29, 2017
4e9e6ae
[SPARK-22864][CORE] Disable allocation schedule in ExecutorAllocation…
Dec 29, 2017
afc3641
[SPARK-22905][ML][FOLLOWUP] Fix GaussianMixtureModel save
zhengruifeng Dec 29, 2017
66a7d6b
[SPARK-22920][SPARKR] sql functions for current_date, current_timesta…
felixcheung Dec 29, 2017
ccda75b
[SPARK-22921][PROJECT-INFRA] Bug fix in jira assigning
squito Dec 29, 2017
30fcdc0
[SPARK-22922][ML][PYSPARK] Pyspark portion of the fit-multiple API
MrBago Dec 30, 2017
8169630
[SPARK-22734][ML][PYSPARK] Added Python API for VectorSizeHint.
MrBago Dec 30, 2017
2ea17af
[SPARK-22881][ML][TEST] ML regression package testsuite add Structure…
WeichenXu123 Dec 30, 2017
f2b3525
[SPARK-22771][SQL] Concatenate binary inputs into a binary output
maropu Dec 30, 2017
14c4a62
[SPARK-21475][Core]Revert "[SPARK-21475][CORE] Use NIO's Files API to…
zsxwing Dec 30, 2017
234d943
[TEST][MINOR] remove redundant `EliminateSubqueryAliases` in test code
wzhfy Dec 30, 2017
fd7d141
[SPARK-22919] Bump httpclient versions
Dec 30, 2017
ea0a5ee
[SPARK-22924][SPARKR] R API for sortWithinPartitions
felixcheung Dec 30, 2017
ee3af15
[SPARK-22363][SQL][TEST] Add unit test for Window spilling
gaborgsomogyi Dec 31, 2017
cfbe11e
[SPARK-22895][SQL] Push down the deterministic predicates that are af…
gatorsmile Dec 31, 2017
3d8837e
[SPARK-22397][ML] add multiple columns support to QuantileDiscretizer
huaxingao Dec 31, 2017
028ee40
[SPARK-22801][ML][PYSPARK] Allow FeatureHasher to treat numeric colum…
Dec 31, 2017
5955a2d
[MINOR][DOCS] s/It take/It takes/g
jkremser Dec 31, 2017
994065d
[SPARK-13030][ML] Create OneHotEncoderEstimator for OneHotEncoder as …
viirya Dec 31, 2017
f5b7714
[BUILD] Close stale PRs
srowen Jan 1, 2018
7a702d8
[SPARK-21616][SPARKR][DOCS] update R migration guide and vignettes
felixcheung Jan 1, 2018
c284c4e
[MINOR] Fix a bunch of typos
srowen Dec 31, 2017
1c9f95c
[SPARK-22530][PYTHON][SQL] Adding Arrow support for ArrayType
BryanCutler Jan 1, 2018
e734a4b
[SPARK-21893][SPARK-22142][TESTS][FOLLOWUP] Enables PySpark tests for…
HyukjinKwon Jan 1, 2018
e0c090f
[SPARK-22932][SQL] Refactor AnalysisContext
gatorsmile Jan 2, 2018
a6fc300
[SPARK-22897][CORE] Expose stageAttemptId in TaskContext
advancedxy Jan 2, 2018
247a089
[SPARK-22938] Assert that SQLConf.get is accessed only on the driver.
juliuszsompolski Jan 3, 2018
1a87a16
[SPARK-22934][SQL] Make optional clauses order insensitive for CREATE…
gatorsmile Jan 3, 2018
a66fe36
[SPARK-20236][SQL] dynamic partition overwrite
cloud-fan Jan 3, 2018
9a2b65a
[SPARK-22896] Improvement in String interpolation
chetkhatri Jan 3, 2018
b297029
[SPARK-20960][SQL] make ColumnVector public
cloud-fan Jan 3, 2018
7d045c5
[SPARK-22944][SQL] improve FoldablePropagation
cloud-fan Jan 4, 2018
df95a90
[SPARK-22933][SPARKR] R Structured Streaming API for withWatermark, t…
felixcheung Jan 4, 2018
9fa703e
[SPARK-22950][SQL] Handle ChildFirstURLClassLoader's parent
yaooqinn Jan 4, 2018
d5861ab
[SPARK-22945][SQL] add java UDF APIs in the functions object
cloud-fan Jan 4, 2018
5aadbc9
[SPARK-22939][PYSPARK] Support Spark UDF in registerFunction
gatorsmile Jan 4, 2018
6f68316
[SPARK-22771][SQL] Add a missing return statement in Concat.checkInpu…
maropu Jan 4, 2018
93f92c0
[SPARK-21475][CORE][2ND ATTEMPT] Change to use NIO's Files API for ex…
jerryshao Jan 4, 2018
d2cddc8
[SPARK-22850][CORE] Ensure queued events are delivered to all event q…
Jan 4, 2018
95f9659
[SPARK-22948][K8S] Move SparkPodInitContainer to correct package.
Jan 4, 2018
e288fc8
[SPARK-22953][K8S] Avoids adding duplicated secret volumes when init-…
liyinan926 Jan 4, 2018
0428368
[SPARK-22960][K8S] Make build-push-docker-images.sh more dev-friendly.
Jan 5, 2018
df7fc3e
[SPARK-22957] ApproxQuantile breaks if the number of rows exceeds MaxInt
juliuszsompolski Jan 5, 2018
52fc5c1
[SPARK-22825][SQL] Fix incorrect results of Casting Array to String
maropu Jan 5, 2018
cf0aa65
[SPARK-22949][ML] Apply CrossValidator approach to Driver/Distributed…
MrBago Jan 5, 2018
6cff7d1
[SPARK-22757][K8S] Enable spark.jars and spark.files in KUBERNETES mode
liyinan926 Jan 5, 2018
51c33bd
[SPARK-22961][REGRESSION] Constant columns should generate QueryPlanC…
adrian-ionescu Jan 5, 2018
c0b7424
[SPARK-22940][SQL] HiveExternalCatalogVersionsSuite should succeed on…
bersprockets Jan 5, 2018
930b90a
[SPARK-13030][ML] Follow-up cleanups for OneHotEncoderEstimator
jkbradley Jan 5, 2018
ea95683
[SPARK-22914][DEPLOY] Register history.ui.port
gerashegalov Jan 6, 2018
e8af7e8
[SPARK-22937][SQL] SQL elt output binary for binary inputs
maropu Jan 6, 2018
bf65cd3
[SPARK-22960][K8S] Revert use of ARG base_image in images
liyinan926 Jan 6, 2018
f2dd8b9
[SPARK-22930][PYTHON][SQL] Improve the description of Vectorized UDFs…
icexelloss Jan 6, 2018
be9a804
[SPARK-22793][SQL] Memory leak in Spark Thrift Server
Jan 6, 2018
7b78041
[SPARK-21786][SQL] When acquiring 'compressionCodecClassName' in 'Par…
fjh100456 Jan 6, 2018
993f215
[SPARK-22901][PYTHON][FOLLOWUP] Adds the doc for asNondeterministic f…
HyukjinKwon Jan 6, 2018
9a7048b
[HOTFIX] Fix style checking failure
gatorsmile Jan 6, 2018
18e9414
[SPARK-22973][SQL] Fix incorrect results of Casting Map to String
maropu Jan 7, 2018
[SPARK-22757][KUBERNETES] Enable use of remote dependencies (http, s3, gcs, etc.) in Kubernetes mode

## What changes were proposed in this pull request?

This PR extends Kubernetes mode to support remote dependencies hosted on http/https endpoints, GCS, S3, etc. It adds steps that configure a Kubernetes init-container and append it to the driver and executor pods so that remote dependencies are downloaded before the main containers start.
[Init-containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) are containers that run to completion before the main containers start, and are commonly used for initialization tasks. We use them here to localize remote application dependencies before the driver and executors start running; the code the init-container runs is also included. This PR additionally adds a step that mounts user-specified secrets, which may hold credentials for data stores such as S3 and Google Cloud Storage (GCS), into the driver and executor pods.
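
As a rough sketch of how a user might opt in (the property names come from this PR; the image tag, URIs, and secret name below are hypothetical):

```scala
import org.apache.spark.SparkConf

// Hypothetical values; only the property names are defined by this PR.
val conf = new SparkConf()
  .set("spark.kubernetes.initContainer.image", "kubespark/spark-init:latest") // init-container image
  .set("spark.jars", "https://example.com/jars/app.jar")                // remote jar fetched by the init-container
  .set("spark.files", "s3a://my-bucket/config/app.conf")                // remote file fetched the same way
  .set("spark.kubernetes.driver.secrets.aws-creds", "/mnt/secrets/aws") // mount secret "aws-creds" at this path
```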

## How was this patch tested?

* The patch contains unit tests, which pass.
* Manual testing: `./build/mvn -Pkubernetes clean package` succeeded.
* Manual testing of the following cases:
  * [x] Running SparkPi using a container-local spark-example jar.
  * [x] Running SparkPi using a container-local spark-example jar with a user-specified secret mounted.
  * [x] Running SparkPi using a spark-example jar hosted remotely on an https endpoint.

cc rxin felixcheung mateiz (shepherd)
k8s-big-data SIG members & contributors: mccheah foxish ash211 ssuchter varunkatta kimoonkim erikerlandson tnachen ifilonenko liyinan926
reviewers: vanzin felixcheung jiangxb1987 mridulm

Author: Yinan Li <liyinan926@gmail.com>

Closes apache#19954 from liyinan926/init-container.
liyinan926 authored and ueshin committed Dec 28, 2017
commit 171f6ddadc6185ffcc6ad82e5f48952fb49095b2
@@ -20,7 +20,6 @@ import java.util.concurrent.TimeUnit

import org.apache.spark.internal.Logging
import org.apache.spark.internal.config.ConfigBuilder
-import org.apache.spark.network.util.ByteUnit

private[spark] object Config extends Logging {

@@ -132,30 +131,84 @@ private[spark] object Config extends Logging {

  val JARS_DOWNLOAD_LOCATION =
    ConfigBuilder("spark.kubernetes.mountDependencies.jarsDownloadDir")
-     .doc("Location to download jars to in the driver and executors. When using" +
-       " spark-submit, this directory must be empty and will be mounted as an empty directory" +
-       " volume on the driver and executor pod.")
+     .doc("Location to download jars to in the driver and executors. When using " +
+       "spark-submit, this directory must be empty and will be mounted as an empty directory " +
+       "volume on the driver and executor pod.")
      .stringConf
      .createWithDefault("/var/spark-data/spark-jars")

  val FILES_DOWNLOAD_LOCATION =
    ConfigBuilder("spark.kubernetes.mountDependencies.filesDownloadDir")
-     .doc("Location to download files to in the driver and executors. When using" +
-       " spark-submit, this directory must be empty and will be mounted as an empty directory" +
-       " volume on the driver and executor pods.")
+     .doc("Location to download files to in the driver and executors. When using " +
+       "spark-submit, this directory must be empty and will be mounted as an empty directory " +
+       "volume on the driver and executor pods.")
      .stringConf
      .createWithDefault("/var/spark-data/spark-files")

  val INIT_CONTAINER_IMAGE =
    ConfigBuilder("spark.kubernetes.initContainer.image")
      .doc("Image for the driver and executor's init-container for downloading dependencies.")
      .stringConf
      .createOptional

  val INIT_CONTAINER_MOUNT_TIMEOUT =
    ConfigBuilder("spark.kubernetes.mountDependencies.timeout")
      .doc("Timeout before aborting the attempt to download and unpack dependencies from remote " +
        "locations into the driver and executor pods.")
      .timeConf(TimeUnit.SECONDS)
      .createWithDefault(300)

  val INIT_CONTAINER_MAX_THREAD_POOL_SIZE =
    ConfigBuilder("spark.kubernetes.mountDependencies.maxSimultaneousDownloads")
      .doc("Maximum number of remote dependencies to download simultaneously in a driver or " +
        "executor pod.")
      .intConf
      .createWithDefault(5)

  val INIT_CONTAINER_REMOTE_JARS =
    ConfigBuilder("spark.kubernetes.initContainer.remoteJars")
      .doc("Comma-separated list of jar URIs to download in the init-container. This is " +
        "calculated from spark.jars.")
      .internal()
      .stringConf
      .createOptional

  val INIT_CONTAINER_REMOTE_FILES =
    ConfigBuilder("spark.kubernetes.initContainer.remoteFiles")
      .doc("Comma-separated list of file URIs to download in the init-container. This is " +
        "calculated from spark.files.")
      .internal()
      .stringConf
      .createOptional

  val INIT_CONTAINER_CONFIG_MAP_NAME =
    ConfigBuilder("spark.kubernetes.initContainer.configMapName")
      .doc("Name of the config map to use in the init-container that retrieves submitted files " +
        "for the executor.")
      .internal()
      .stringConf
      .createOptional

  val INIT_CONTAINER_CONFIG_MAP_KEY_CONF =
    ConfigBuilder("spark.kubernetes.initContainer.configMapKey")
      .doc("Key for the entry in the init container config map for submitted files that " +
        "corresponds to the properties for this init-container.")
      .internal()
      .stringConf
      .createOptional

  val KUBERNETES_AUTH_SUBMISSION_CONF_PREFIX =
    "spark.kubernetes.authenticate.submission"

  val KUBERNETES_NODE_SELECTOR_PREFIX = "spark.kubernetes.node.selector."

  val KUBERNETES_DRIVER_LABEL_PREFIX = "spark.kubernetes.driver.label."
  val KUBERNETES_DRIVER_ANNOTATION_PREFIX = "spark.kubernetes.driver.annotation."
  val KUBERNETES_DRIVER_SECRETS_PREFIX = "spark.kubernetes.driver.secrets."

  val KUBERNETES_EXECUTOR_LABEL_PREFIX = "spark.kubernetes.executor.label."
  val KUBERNETES_EXECUTOR_ANNOTATION_PREFIX = "spark.kubernetes.executor.annotation."
  val KUBERNETES_EXECUTOR_SECRETS_PREFIX = "spark.kubernetes.executor.secrets."

  val KUBERNETES_DRIVER_ENV_KEY = "spark.kubernetes.driverEnv."
}
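
For context, entries declared with `ConfigBuilder` above are read back through Spark's typed config API; a minimal sketch, assuming a `SparkConf` in scope (this uses the `private[spark]` accessor available inside Spark itself):

```scala
// createWithDefault(300) yields a Long-valued entry; createOptional yields an Option.
val timeoutSeconds: Long = sparkConf.get(INIT_CONTAINER_MOUNT_TIMEOUT) // 300 unless overridden
val initImage: Option[String] = sparkConf.get(INIT_CONTAINER_IMAGE)    // None when the image is unset
```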
@@ -69,6 +69,17 @@ private[spark] object Constants {
  val ENV_DRIVER_JAVA_OPTS = "SPARK_DRIVER_JAVA_OPTS"
  val ENV_DRIVER_BIND_ADDRESS = "SPARK_DRIVER_BIND_ADDRESS"
  val ENV_DRIVER_MEMORY = "SPARK_DRIVER_MEMORY"
  val ENV_MOUNTED_FILES_DIR = "SPARK_MOUNTED_FILES_DIR"

  // Bootstrapping dependencies with the init-container
  val INIT_CONTAINER_DOWNLOAD_JARS_VOLUME_NAME = "download-jars-volume"
  val INIT_CONTAINER_DOWNLOAD_FILES_VOLUME_NAME = "download-files-volume"
  val INIT_CONTAINER_PROPERTIES_FILE_VOLUME = "spark-init-properties"
  val INIT_CONTAINER_PROPERTIES_FILE_DIR = "/etc/spark-init"
  val INIT_CONTAINER_PROPERTIES_FILE_NAME = "spark-init.properties"
  val INIT_CONTAINER_PROPERTIES_FILE_PATH =
    s"$INIT_CONTAINER_PROPERTIES_FILE_DIR/$INIT_CONTAINER_PROPERTIES_FILE_NAME"
  val INIT_CONTAINER_SECRET_VOLUME_NAME = "spark-init-secret"

  // Miscellaneous
  val KUBERNETES_MASTER_INTERNAL_URL = "https://kubernetes.default.svc"
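
For reference, the interpolated constant above composes to a fixed path inside the pod; a quick sanity check (not part of the diff):

```scala
// s"$INIT_CONTAINER_PROPERTIES_FILE_DIR/$INIT_CONTAINER_PROPERTIES_FILE_NAME"
assert(INIT_CONTAINER_PROPERTIES_FILE_PATH == "/etc/spark-init/spark-init.properties")
```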
@@ -0,0 +1,119 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.spark.deploy.k8s

import scala.collection.JavaConverters._

import io.fabric8.kubernetes.api.model.{ContainerBuilder, EmptyDirVolumeSource, EnvVarBuilder, PodBuilder, VolumeMount, VolumeMountBuilder}

import org.apache.spark.{SparkConf, SparkException}
import org.apache.spark.deploy.k8s.Config._
import org.apache.spark.deploy.k8s.Constants._

/**
 * Bootstraps an init-container for downloading remote dependencies. This is separated out from
 * the init-container steps API because this component can be used to bootstrap init-containers
 * for both the driver and executors.
 */
private[spark] class InitContainerBootstrap(
    initContainerImage: String,
    imagePullPolicy: String,
    jarsDownloadPath: String,
    filesDownloadPath: String,
    configMapName: String,
    configMapKey: String,
    sparkRole: String,
    sparkConf: SparkConf) {

  /**
   * Bootstraps an init-container that downloads dependencies to be used by a main container.
   */
  def bootstrapInitContainer(
      original: PodWithDetachedInitContainer): PodWithDetachedInitContainer = {
    val sharedVolumeMounts = Seq[VolumeMount](
      new VolumeMountBuilder()
        .withName(INIT_CONTAINER_DOWNLOAD_JARS_VOLUME_NAME)
        .withMountPath(jarsDownloadPath)
        .build(),
      new VolumeMountBuilder()
        .withName(INIT_CONTAINER_DOWNLOAD_FILES_VOLUME_NAME)
        .withMountPath(filesDownloadPath)
        .build())

    val customEnvVarKeyPrefix = sparkRole match {
      case SPARK_POD_DRIVER_ROLE => KUBERNETES_DRIVER_ENV_KEY
      case SPARK_POD_EXECUTOR_ROLE => "spark.executorEnv."
      case _ => throw new SparkException(s"$sparkRole is not a valid Spark pod role")
    }
    val customEnvVars = sparkConf.getAllWithPrefix(customEnvVarKeyPrefix).toSeq.map {
      case (key, value) =>
        new EnvVarBuilder()
          .withName(key)
          .withValue(value)
          .build()
    }

    val initContainer = new ContainerBuilder(original.initContainer)
      .withName("spark-init")
      .withImage(initContainerImage)
      .withImagePullPolicy(imagePullPolicy)
      .addAllToEnv(customEnvVars.asJava)
      .addNewVolumeMount()
        .withName(INIT_CONTAINER_PROPERTIES_FILE_VOLUME)
        .withMountPath(INIT_CONTAINER_PROPERTIES_FILE_DIR)
        .endVolumeMount()
      .addToVolumeMounts(sharedVolumeMounts: _*)
      .addToArgs(INIT_CONTAINER_PROPERTIES_FILE_PATH)
      .build()

    val podWithBasicVolumes = new PodBuilder(original.pod)
      .editSpec()
        .addNewVolume()
          .withName(INIT_CONTAINER_PROPERTIES_FILE_VOLUME)
          .withNewConfigMap()
            .withName(configMapName)
            .addNewItem()
              .withKey(configMapKey)
              .withPath(INIT_CONTAINER_PROPERTIES_FILE_NAME)
              .endItem()
            .endConfigMap()
          .endVolume()
        .addNewVolume()
          .withName(INIT_CONTAINER_DOWNLOAD_JARS_VOLUME_NAME)
          .withEmptyDir(new EmptyDirVolumeSource())
          .endVolume()
        .addNewVolume()
          .withName(INIT_CONTAINER_DOWNLOAD_FILES_VOLUME_NAME)
          .withEmptyDir(new EmptyDirVolumeSource())
          .endVolume()
        .endSpec()
      .build()

    val mainContainer = new ContainerBuilder(original.mainContainer)
      .addToVolumeMounts(sharedVolumeMounts: _*)
      .addNewEnv()
        .withName(ENV_MOUNTED_FILES_DIR)
        .withValue(filesDownloadPath)
        .endEnv()
      .build()

    PodWithDetachedInitContainer(
      podWithBasicVolumes,
      initContainer,
      mainContainer)
  }
}
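
A hypothetical use of the bootstrap from a driver-side submission step; the image, ConfigMap name, and pod/container inputs below are illustrative, with only the class, the `PodWithDetachedInitContainer` case class, and the constants coming from this diff:

```scala
import io.fabric8.kubernetes.api.model.{ContainerBuilder, PodBuilder}

// Minimal stand-in inputs; a real step passes the partially-built driver pod here.
val basePod = new PodBuilder().withNewMetadata().withName("driver").endMetadata().build()
val emptyInitContainer = new ContainerBuilder().build()
val mainContainer = new ContainerBuilder().withName("spark-kubernetes-driver").build()

val bootstrap = new InitContainerBootstrap(
  initContainerImage = "kubespark/spark-init:latest", // hypothetical image
  imagePullPolicy = "IfNotPresent",
  jarsDownloadPath = "/var/spark-data/spark-jars",    // defaults from Config.scala above
  filesDownloadPath = "/var/spark-data/spark-files",
  configMapName = "my-app-init-config",               // hypothetical ConfigMap
  configMapKey = "spark-init.properties",
  sparkRole = SPARK_POD_DRIVER_ROLE,
  sparkConf = sparkConf)                              // assumes a SparkConf in scope

val result = bootstrap.bootstrapInitContainer(
  PodWithDetachedInitContainer(basePod, emptyInitContainer, mainContainer))
// result.pod now carries the config-map and empty-dir volumes; the returned init-container
// still has to be attached to the pod, e.g. via KubernetesUtils.appendInitContainer.
```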
@@ -14,13 +14,49 @@
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-package org.apache.spark.deploy.k8s.submit
+package org.apache.spark.deploy.k8s

import java.io.File

import io.fabric8.kubernetes.api.model.{Container, Pod, PodBuilder}

import org.apache.spark.SparkConf
import org.apache.spark.util.Utils

-private[spark] object KubernetesFileUtils {
+private[spark] object KubernetesUtils {

  /**
   * Extract and parse Spark configuration properties with a given name prefix and
   * return the result as a Map. Keys must not have more than one value.
   *
   * @param sparkConf Spark configuration
   * @param prefix the given property name prefix
   * @return a Map storing the configuration property keys and values
   */
  def parsePrefixedKeyValuePairs(
      sparkConf: SparkConf,
      prefix: String): Map[String, String] = {
    sparkConf.getAllWithPrefix(prefix).toMap
  }

  def requireNandDefined(opt1: Option[_], opt2: Option[_], errMessage: String): Unit = {
    opt1.foreach { _ => require(opt2.isEmpty, errMessage) }
  }

  /**
   * Append the given init-container to a pod's list of init-containers.
   *
   * @param originalPodSpec original specification of the pod
   * @param initContainer the init-container to add to the pod
   * @return the pod with the init-container added to the list of InitContainers
   */
  def appendInitContainer(originalPodSpec: Pod, initContainer: Container): Pod = {
    new PodBuilder(originalPodSpec)
      .editOrNewSpec()
        .addToInitContainers(initContainer)
        .endSpec()
      .build()
  }

  /**
   * For the given collection of file URIs, resolves them as follows:
@@ -47,6 +83,16 @@ private[spark] object KubernetesFileUtils {
    }
  }

  /**
   * Get from a given collection of file URIs the ones that represent remote files.
   */
  def getOnlyRemoteFiles(uris: Iterable[String]): Iterable[String] = {
    uris.filter { uri =>
      val scheme = Utils.resolveURI(uri).getScheme
      scheme != "file" && scheme != "local"
    }
  }

  private def resolveFileUri(
      uri: String,
      fileDownloadPath: String,
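
To illustrate `getOnlyRemoteFiles` above: `local:` and `file:` URIs are treated as reachable without downloading, so only genuinely remote schemes survive the filter (the URIs below are made up):

```scala
val uris = Seq(
  "local:///opt/spark/examples/app.jar", // container-local: filtered out
  "file:///tmp/app.conf",                // local file scheme: filtered out
  "https://repo.example.com/dep.jar",    // remote: kept
  "s3a://bucket/data/lookup.txt")        // remote: kept

KubernetesUtils.getOnlyRemoteFiles(uris)
// => Seq("https://repo.example.com/dep.jar", "s3a://bucket/data/lookup.txt")
```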
@@ -0,0 +1,62 @@
/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.spark.deploy.k8s

import io.fabric8.kubernetes.api.model.{Container, ContainerBuilder, Pod, PodBuilder}

/**
 * Bootstraps a driver or executor container or an init-container with needed secrets mounted.
 */
private[spark] class MountSecretsBootstrap(secretNamesToMountPaths: Map[String, String]) {

  /**
   * Mounts Kubernetes secrets as secret volumes into the given container in the given pod.
   *
   * @param pod the pod into which the secret volumes are being added.
   * @param container the container into which the secret volumes are being mounted.
   * @return the updated pod and container with the secrets mounted.
   */
  def mountSecrets(pod: Pod, container: Container): (Pod, Container) = {
    var podBuilder = new PodBuilder(pod)
    secretNamesToMountPaths.keys.foreach { name =>
      podBuilder = podBuilder
        .editOrNewSpec()
          .addNewVolume()
            .withName(secretVolumeName(name))
            .withNewSecret()
              .withSecretName(name)
              .endSecret()
            .endVolume()
          .endSpec()
    }

    var containerBuilder = new ContainerBuilder(container)
    secretNamesToMountPaths.foreach { case (name, path) =>
      containerBuilder = containerBuilder
        .addNewVolumeMount()
          .withName(secretVolumeName(name))
          .withMountPath(path)
          .endVolumeMount()
    }

    (podBuilder.build(), containerBuilder.build())
  }

  private def secretVolumeName(secretName: String): String = {
    secretName + "-volume"
  }
}
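
A sketch of the secret-mounting bootstrap in use; the secret names, mount paths, and the `driverPod`/`driverContainer` inputs are hypothetical:

```scala
val secretBootstrap = new MountSecretsBootstrap(Map(
  "aws-creds" -> "/mnt/secrets/aws",  // secret name -> mount path
  "gcs-creds" -> "/mnt/secrets/gcs"))

// Each secret becomes a pod volume named "<secret>-volume" and is mounted into the
// container at its requested path.
val (podWithSecrets, containerWithSecrets) =
  secretBootstrap.mountSecrets(driverPod, driverContainer)
```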