Commit b9f0090

Merge remote-tracking branch 'upstream/master'
2 parents: 2ee1876 + 90de6b2

File tree

106 files changed: +2927 −813 lines


R/pkg/R/serialize.R

Lines changed: 1 addition & 1 deletion
@@ -54,7 +54,7 @@ writeObject <- function(con, object, writeType = TRUE) {
   # passing in vectors as arrays and instead require arrays to be passed
   # as lists.
   type <- class(object)[[1]] # class of POSIXlt is c("POSIXlt", "POSIXt")
-  # Checking types is needed here, since is.na only handles atomic vectors,
+  # Checking types is needed here, since 'is.na' only handles atomic vectors,
   # lists and pairlists
   if (type %in% c("integer", "character", "logical", "double", "numeric")) {
     if (is.na(object)) {

conf/spark-env.sh.template

Lines changed: 1 addition & 1 deletion
@@ -41,7 +41,7 @@
 # - SPARK_EXECUTOR_MEMORY, Memory per Executor (e.g. 1000M, 2G) (Default: 1G)
 # - SPARK_DRIVER_MEMORY, Memory for Driver (e.g. 1000M, 2G) (Default: 1G)
 # - SPARK_YARN_APP_NAME, The name of your application (Default: Spark)
-# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: default)
+# - SPARK_YARN_QUEUE, The hadoop queue to use for allocation requests (Default: 'default')
 # - SPARK_YARN_DIST_FILES, Comma separated list of files to be distributed with the job.
 # - SPARK_YARN_DIST_ARCHIVES, Comma separated list of archives to be distributed with the job.

core/src/main/resources/org/apache/spark/ui/static/jsonFormatter.min.js

Lines changed: 0 additions & 1 deletion
Generated file; diff not rendered by default.

core/src/main/scala/org/apache/spark/SecurityManager.scala

Lines changed: 1 addition & 1 deletion
@@ -69,7 +69,7 @@ import org.apache.spark.util.Utils
  *
  * - HTTP for broadcast and file server (via HttpServer) -> Spark currently uses Jetty
  *   for the HttpServer. Jetty supports multiple authentication mechanisms -
- *   Basic, Digest, Form, Spengo, etc. It also supports multiple different login
+ *   Basic, Digest, Form, Spnego, etc. It also supports multiple different login
  *   services - Hash, JAAS, Spnego, JDBC, etc. Spark currently uses the HashLoginService
  *   to authenticate using DIGEST-MD5 via a single user and the shared secret.
  *   Since we are using DIGEST-MD5, the shared secret is not passed on the wire
core/src/main/scala/org/apache/spark/api/java/JavaSparkContext.scala

Lines changed: 10 additions & 0 deletions
@@ -774,6 +774,16 @@ class JavaSparkContext(val sc: SparkContext)

   /** Cancel all jobs that have been scheduled or are running. */
   def cancelAllJobs(): Unit = sc.cancelAllJobs()
+
+  /**
+   * Returns a Java map of JavaRDDs that have marked themselves as persistent via cache() call.
+   * Note that this does not necessarily mean the caching or computation was successful.
+   */
+  def getPersistentRDDs: JMap[java.lang.Integer, JavaRDD[_]] = {
+    sc.getPersistentRDDs.mapValues(s => JavaRDD.fromRDD(s))
+      .asJava.asInstanceOf[JMap[java.lang.Integer, JavaRDD[_]]]
+  }
+
 }

 object JavaSparkContext {