This repository was archived by the owner on Jan 9, 2020. It is now read-only.
Changes from 1 commit
Commits (41)
7570eab
[SPARK-22788][STREAMING] Use correct hadoop config for fs append supp…
Dec 20, 2017
7798c9e
[SPARK-22824] Restore old offset for binary compatibility
jose-torres Dec 20, 2017
d762d11
[SPARK-22832][ML] BisectingKMeans unpersist unused datasets
zhengruifeng Dec 20, 2017
c89b431
[SPARK-22849] ivy.retrieve pattern should also consider `classifier`
gatorsmile Dec 20, 2017
792915c
[SPARK-22830] Scala Coding style has been improved in Spark Examples
chetkhatri Dec 20, 2017
b176014
[SPARK-22847][CORE] Remove redundant code in AppStatusListener while …
Ngone51 Dec 20, 2017
0114c89
[SPARK-22845][SCHEDULER] Modify spark.kubernetes.allocation.batch.del…
foxish Dec 21, 2017
fb0562f
[SPARK-22810][ML][PYSPARK] Expose Python API for LinearRegression wit…
yanboliang Dec 21, 2017
9c289a5
[SPARK-22387][SQL] Propagate session configs to data source read/writ…
jiangxb1987 Dec 21, 2017
d3ae3e1
[SPARK-19634][SQL][ML][FOLLOW-UP] Improve interface of dataframe vect…
WeichenXu123 Dec 21, 2017
cb9fc8d
[SPARK-22848][SQL] Eliminate mutable state from Stack
kiszk Dec 21, 2017
59d5263
[SPARK-22324][SQL][PYTHON] Upgrade Arrow to 0.8.0
BryanCutler Dec 21, 2017
0abaf31
[SPARK-22852][BUILD] Exclude -Xlint:unchecked from sbt javadoc flags
easel Dec 21, 2017
4c2efde
[SPARK-22855][BUILD] Add -no-java-comments to sbt docs/scalacOptions
easel Dec 21, 2017
8a0ed5a
[SPARK-22668][SQL] Ensure no global variables in arguments of method …
cloud-fan Dec 21, 2017
d3a1d95
[SPARK-22786][SQL] only use AppStatusPlugin in history server
cloud-fan Dec 21, 2017
4e107fd
[SPARK-22822][TEST] Basic tests for WindowFrameCoercion and DecimalPr…
wangyum Dec 21, 2017
fe65361
[SPARK-22042][FOLLOW-UP][SQL] ReorderJoinPredicates can break when ch…
tejasapatil Dec 21, 2017
7beb375
[SPARK-22861][SQL] SQLAppStatusListener handles multi-job executions.
squito Dec 21, 2017
7ab165b
[SPARK-22648][K8S] Spark on Kubernetes - Documentation
foxish Dec 22, 2017
c0abb1d
[SPARK-22854][UI] Read Spark version from event logs.
Dec 22, 2017
c6f01ca
[SPARK-22750][SQL] Reuse mutable states when possible
mgaido91 Dec 22, 2017
a36b78b
[SPARK-22450][CORE][MLLIB][FOLLOWUP] safely register class for mllib …
zhengruifeng Dec 22, 2017
22e1849
[SPARK-22866][K8S] Fix path issue in Kubernetes dockerfile
foxish Dec 22, 2017
8df1da3
[SPARK-22862] Docs on lazy elimination of columns missing from an enc…
marmbrus Dec 22, 2017
13190a4
[SPARK-22874][PYSPARK][SQL] Modify checking pandas version to use Loo…
ueshin Dec 22, 2017
d23dc5b
[SPARK-22346][ML] VectorSizeHint Transformer for using VectorAssemble…
MrBago Dec 22, 2017
d3cbbdd
[SPARK-22757][Kubernetes] Enable use of remote dependencies in Kubern…
liyinan926 Dec 12, 2017
5d2cbc8
Addressed first round of comments
liyinan926 Dec 15, 2017
4ee76af
Addressed the second round of comments
liyinan926 Dec 16, 2017
9c8051a
Create one task per jar/file to download in the init-container
liyinan926 Dec 16, 2017
1f65417
More review comments
liyinan926 Dec 18, 2017
109ad80
Shorten variable names
liyinan926 Dec 19, 2017
c21fdcf
Removed traits that have only a single implementation
liyinan926 Dec 19, 2017
a3cd71d
Remove unused class arguments
liyinan926 Dec 19, 2017
23c5cd9
Improved documentation
liyinan926 Dec 19, 2017
2ec15c4
Addressed latest round of comments
liyinan926 Dec 20, 2017
5d1f889
Addressed more comments
liyinan926 Dec 21, 2017
9d9c841
Updated names of two configuration properties
liyinan926 Dec 22, 2017
c51bc56
Addressed more comments
liyinan926 Dec 25, 2017
28343fb
Addressed one more comment
liyinan926 Dec 26, 2017
[SPARK-22830] Scala Coding style has been improved in Spark Examples
## What changes were proposed in this pull request?

* In the Spark Scala examples, some syntax was written the Java way; it has been rewritten to follow the Scala style guide.
* Most of the changes rewrite println() statements.

## How was this patch tested?

Since all the proposed changes rewrite println statements in the Scala way, the println output was verified with a manual run.

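For illustration only (not part of this patch), here is a minimal, self-contained sketch of the two idioms applied throughout the diff: `s`-interpolation in place of `+` concatenation, and `stripMargin` multi-line strings in place of chained `"\n"` pieces. The `StyleDemo` object and its usage text are made up for this example.

```scala
object StyleDemo {
  def main(args: Array[String]): Unit = {
    val i = 3
    // Java-style concatenation, as in the old code:
    println("Iteration " + i)
    // Scala style: s-interpolation embeds the value directly.
    println(s"Iteration $i")

    // stripMargin drops everything up to and including the leading '|',
    // so multi-line text needs no "\n" concatenation.
    val usage = """Usage: tool <localFile> <dfsDir>
                  |localFile - (string) local file to use
                  |dfsDir    - (string) DFS directory""".stripMargin
    println(usage)
  }
}
```

The interpolated form reads closer to the final output, and the compiler checks that every referenced value (such as `$i`) is actually in scope.
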
Author: chetkhatri <[email protected]>

Closes apache#20016 from chetkhatri/scala-style-spark-examples.
chetkhatri authored and srowen committed Dec 20, 2017
commit 792915c8449b606cfdd50401fb349194a2558c36
@@ -42,7 +42,7 @@ object BroadcastTest {
val arr1 = (0 until num).toArray

for (i <- 0 until 3) {
println("Iteration " + i)
println(s"Iteration $i")
println("===========")
val startTime = System.nanoTime
val barr1 = sc.broadcast(arr1)

@@ -49,12 +49,10 @@ object DFSReadWriteTest {
}

private def printUsage(): Unit = {
- val usage: String = "DFS Read-Write Test\n" +
- "\n" +
- "Usage: localFile dfsDir\n" +
- "\n" +
- "localFile - (string) local file to use in test\n" +
- "dfsDir - (string) DFS directory for read/write tests\n"
+ val usage = """DFS Read-Write Test
+ |Usage: localFile dfsDir
+ |localFile - (string) local file to use in test
+ |dfsDir - (string) DFS directory for read/write tests""".stripMargin

println(usage)
}
@@ -69,13 +67,13 @@ object DFSReadWriteTest {

localFilePath = new File(args(i))
if (!localFilePath.exists) {
System.err.println("Given path (" + args(i) + ") does not exist.\n")
System.err.println(s"Given path (${args(i)}) does not exist")
printUsage()
System.exit(1)
}

if (!localFilePath.isFile) {
System.err.println("Given path (" + args(i) + ") is not a file.\n")
System.err.println(s"Given path (${args(i)}) is not a file")
printUsage()
System.exit(1)
}
@@ -108,7 +106,7 @@ object DFSReadWriteTest {
.getOrCreate()

println("Writing local file to DFS")
- val dfsFilename = dfsDirPath + "/dfs_read_write_test"
+ val dfsFilename = s"$dfsDirPath/dfs_read_write_test"
val fileRDD = spark.sparkContext.parallelize(fileContents)
fileRDD.saveAsTextFile(dfsFilename)

@@ -127,11 +125,11 @@ object DFSReadWriteTest {
spark.stop()

if (localWordCount == dfsWordCount) {
println(s"Success! Local Word Count ($localWordCount) " +
s"and DFS Word Count ($dfsWordCount) agree.")
println(s"Success! Local Word Count $localWordCount and " +
s"DFS Word Count $dfsWordCount agree.")
} else {
println(s"Failure! Local Word Count ($localWordCount) " +
s"and DFS Word Count ($dfsWordCount) disagree.")
println(s"Failure! Local Word Count $localWordCount " +
s"and DFS Word Count $dfsWordCount disagree.")
}

}

@@ -39,7 +39,7 @@ object HdfsTest {
val start = System.currentTimeMillis()
for (x <- mapped) { x + 2 }
val end = System.currentTimeMillis()
println("Iteration " + iter + " took " + (end-start) + " ms")
println(s"Iteration $iter took ${end-start} ms")
}
spark.stop()
}

@@ -129,8 +129,7 @@ object LocalALS {
println(s"Iteration $iter:")
ms = (0 until M).map(i => updateMovie(i, ms(i), us, R)).toArray
us = (0 until U).map(j => updateUser(j, us(j), ms, R)).toArray
println("RMSE = " + rmse(R, ms, us))
println()
println(s"RMSE = ${rmse(R, ms, us)}")
}
}


@@ -58,10 +58,10 @@ object LocalFileLR {

// Initialize w to a random value
val w = DenseVector.fill(D) {2 * rand.nextDouble - 1}
println("Initial w: " + w)
println(s"Initial w: $w")

for (i <- 1 to ITERATIONS) {
println("On iteration " + i)
println(s"On iteration $i")
val gradient = DenseVector.zeros[Double](D)
for (p <- points) {
val scale = (1 / (1 + math.exp(-p.y * (w.dot(p.x)))) - 1) * p.y
@@ -71,7 +71,7 @@ object LocalFileLR {
}

fileSrc.close()
println("Final w: " + w)
println(s"Final w: $w")
}
}
// scalastyle:on println

@@ -88,7 +88,7 @@ object LocalKMeans {
kPoints.put(i, iter.next())
}

println("Initial centers: " + kPoints)
println(s"Initial centers: $kPoints")

while(tempDist > convergeDist) {
val closest = data.map (p => (closestPoint(p, kPoints), (p, 1)))
@@ -114,7 +114,7 @@ object LocalKMeans {
}
}

println("Final centers: " + kPoints)
println(s"Final centers: $kPoints")
}
}
// scalastyle:on println

@@ -61,10 +61,10 @@ object LocalLR {
val data = generateData
// Initialize w to a random value
val w = DenseVector.fill(D) {2 * rand.nextDouble - 1}
println("Initial w: " + w)
println(s"Initial w: $w")

for (i <- 1 to ITERATIONS) {
println("On iteration " + i)
println(s"On iteration $i")
val gradient = DenseVector.zeros[Double](D)
for (p <- data) {
val scale = (1 / (1 + math.exp(-p.y * (w.dot(p.x)))) - 1) * p.y
@@ -73,7 +73,7 @@ object LocalLR {
w -= gradient
}

println("Final w: " + w)
println(s"Final w: $w")
}
}
// scalastyle:on println

@@ -28,7 +28,7 @@ object LocalPi {
val y = random * 2 - 1
if (x*x + y*y <= 1) count += 1
}
println("Pi is roughly " + 4 * count / 100000.0)
println(s"Pi is roughly ${4 * count / 100000.0}")
}
}
// scalastyle:on println

@@ -59,7 +59,7 @@ object SimpleSkewedGroupByTest {
// Enforce that everything has been calculated and in cache
pairs1.count

println("RESULT: " + pairs1.groupByKey(numReducers).count)
println(s"RESULT: ${pairs1.groupByKey(numReducers).count}")
// Print how many keys each reducer got (for debugging)
// println("RESULT: " + pairs1.groupByKey(numReducers)
// .map{case (k,v) => (k, v.size)}

@@ -135,10 +135,8 @@ object SparkALS {
.map(i => update(i, usb.value(i), msb.value, Rc.value.transpose()))
.collect()
usb = sc.broadcast(us) // Re-broadcast us because it was updated
println("RMSE = " + rmse(R, ms, us))
println()
println(s"RMSE = ${rmse(R, ms, us)}")
}

spark.stop()
}


@@ -79,17 +79,17 @@ object SparkHdfsLR {

// Initialize w to a random value
val w = DenseVector.fill(D) {2 * rand.nextDouble - 1}
println("Initial w: " + w)
println(s"Initial w: $w")

for (i <- 1 to ITERATIONS) {
println("On iteration " + i)
println(s"On iteration $i")
val gradient = points.map { p =>
p.x * (1 / (1 + exp(-p.y * (w.dot(p.x)))) - 1) * p.y
}.reduce(_ + _)
w -= gradient
}

println("Final w: " + w)
println(s"Final w: $w")
spark.stop()
}
}

@@ -95,7 +95,7 @@ object SparkKMeans {
for (newP <- newPoints) {
kPoints(newP._1) = newP._2
}
println("Finished iteration (delta = " + tempDist + ")")
println(s"Finished iteration (delta = $tempDist)")
}

println("Final centers:")

@@ -73,17 +73,17 @@ object SparkLR {

// Initialize w to a random value
val w = DenseVector.fill(D) {2 * rand.nextDouble - 1}
println("Initial w: " + w)
println(s"Initial w: $w")

for (i <- 1 to ITERATIONS) {
println("On iteration " + i)
println(s"On iteration $i")
val gradient = points.map { p =>
p.x * (1 / (1 + exp(-p.y * (w.dot(p.x)))) - 1) * p.y
}.reduce(_ + _)
w -= gradient
}

println("Final w: " + w)
println(s"Final w: $w")

spark.stop()
}

@@ -77,7 +77,7 @@ object SparkPageRank {
}

val output = ranks.collect()
- output.foreach(tup => println(tup._1 + " has rank: " + tup._2 + "."))
+ output.foreach(tup => println(s"${tup._1} has rank: ${tup._2} ."))

spark.stop()
}

@@ -36,7 +36,7 @@ object SparkPi {
val y = random * 2 - 1
if (x*x + y*y <= 1) 1 else 0
}.reduce(_ + _)
println("Pi is roughly " + 4.0 * count / (n - 1))
println(s"Pi is roughly ${4.0 * count / (n - 1)}")
spark.stop()
}
}

@@ -68,7 +68,7 @@ object SparkTC {
nextCount = tc.count()
} while (nextCount != oldCount)

println("TC has " + tc.count() + " edges.")
println(s"TC has ${tc.count()} edges.")
spark.stop()
}
}

@@ -27,19 +27,20 @@ import org.apache.spark.graphx.lib._
import org.apache.spark.internal.Logging
import org.apache.spark.storage.StorageLevel


/**
* Driver program for running graph algorithms.
*/
object Analytics extends Logging {

def main(args: Array[String]): Unit = {
if (args.length < 2) {
- System.err.println(
- "Usage: Analytics <taskType> <file> --numEPart=<num_edge_partitions> [other options]")
- System.err.println("Supported 'taskType' as follows:")
- System.err.println(" pagerank Compute PageRank")
- System.err.println(" cc Compute the connected components of vertices")
- System.err.println(" triangles Count the number of triangles")
+ val usage = """Usage: Analytics <taskType> <file> --numEPart=<num_edge_partitions>
+ |[other options] Supported 'taskType' as follows:
+ |pagerank Compute PageRank
+ |cc Compute the connected components of vertices
+ |triangles Count the number of triangles""".stripMargin
+ System.err.println(usage)
System.exit(1)
}

@@ -48,7 +49,7 @@ object Analytics extends Logging {
val optionsList = args.drop(2).map { arg =>
arg.dropWhile(_ == '-').split('=') match {
case Array(opt, v) => (opt -> v)
- case _ => throw new IllegalArgumentException("Invalid argument: " + arg)
+ case _ => throw new IllegalArgumentException(s"Invalid argument: $arg")
}
}
val options = mutable.Map(optionsList: _*)
@@ -74,68 +75,68 @@ object Analytics extends Logging {
val numIterOpt = options.remove("numIter").map(_.toInt)

options.foreach {
- case (opt, _) => throw new IllegalArgumentException("Invalid option: " + opt)
+ case (opt, _) => throw new IllegalArgumentException(s"Invalid option: $opt")
}

println("======================================")
println("| PageRank |")
println("======================================")

val sc = new SparkContext(conf.setAppName("PageRank(" + fname + ")"))
val sc = new SparkContext(conf.setAppName(s"PageRank($fname)"))

val unpartitionedGraph = GraphLoader.edgeListFile(sc, fname,
numEdgePartitions = numEPart,
edgeStorageLevel = edgeStorageLevel,
vertexStorageLevel = vertexStorageLevel).cache()
val graph = partitionStrategy.foldLeft(unpartitionedGraph)(_.partitionBy(_))

println("GRAPHX: Number of vertices " + graph.vertices.count)
println("GRAPHX: Number of edges " + graph.edges.count)
println(s"GRAPHX: Number of vertices ${graph.vertices.count}")
println(s"GRAPHX: Number of edges ${graph.edges.count}")

val pr = (numIterOpt match {
case Some(numIter) => PageRank.run(graph, numIter)
case None => PageRank.runUntilConvergence(graph, tol)
}).vertices.cache()

println("GRAPHX: Total rank: " + pr.map(_._2).reduce(_ + _))
println(s"GRAPHX: Total rank: ${pr.map(_._2).reduce(_ + _)}")

if (!outFname.isEmpty) {
logWarning("Saving pageranks of pages to " + outFname)
logWarning(s"Saving pageranks of pages to $outFname")
pr.map { case (id, r) => id + "\t" + r }.saveAsTextFile(outFname)
}

sc.stop()

case "cc" =>
options.foreach {
- case (opt, _) => throw new IllegalArgumentException("Invalid option: " + opt)
+ case (opt, _) => throw new IllegalArgumentException(s"Invalid option: $opt")
}

println("======================================")
println("| Connected Components |")
println("======================================")

val sc = new SparkContext(conf.setAppName("ConnectedComponents(" + fname + ")"))
val sc = new SparkContext(conf.setAppName(s"ConnectedComponents($fname)"))
val unpartitionedGraph = GraphLoader.edgeListFile(sc, fname,
numEdgePartitions = numEPart,
edgeStorageLevel = edgeStorageLevel,
vertexStorageLevel = vertexStorageLevel).cache()
val graph = partitionStrategy.foldLeft(unpartitionedGraph)(_.partitionBy(_))

val cc = ConnectedComponents.run(graph)
println("Components: " + cc.vertices.map { case (vid, data) => data }.distinct())
println(s"Components: ${cc.vertices.map { case (vid, data) => data }.distinct()}")
sc.stop()

case "triangles" =>
options.foreach {
- case (opt, _) => throw new IllegalArgumentException("Invalid option: " + opt)
+ case (opt, _) => throw new IllegalArgumentException(s"Invalid option: $opt")
}

println("======================================")
println("| Triangle Count |")
println("======================================")

val sc = new SparkContext(conf.setAppName("TriangleCount(" + fname + ")"))
val sc = new SparkContext(conf.setAppName(s"TriangleCount($fname)"))
val graph = GraphLoader.edgeListFile(sc, fname,
canonicalOrientation = true,
numEdgePartitions = numEPart,