style changes
nicklavers committed Aug 9, 2016
commit e33741cf15e47c64a7cb83383e83db88531e2539
17 changes: 10 additions & 7 deletions core/src/test/scala/org/apache/spark/util/UtilsSuite.scala
@@ -31,6 +31,7 @@ import scala.util.Random

 import com.google.common.io.Files
 import org.apache.commons.lang3.SystemUtils
+import org.apache.commons.math3.stat.inference.ChiSquareTest
 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.fs.Path

@@ -882,8 +883,8 @@ class UtilsSuite extends SparkFunSuite with ResetSystemProperties with Logging {
     val threshold = 0.05
     val seed = 1L

-    // results[i][j]: how many times Utils.randomize moves an element from position j to position i
-    val results: Array[Array[Long]] = Array.ofDim(arraySize, arraySize)
+    // results(i)(j): how many times Utils.randomize moves an element from position j to position i
+    val results = Array.ofDim[Long](arraySize, arraySize)

     // This must be seeded because even a fair random process will fail this test with
     // probability equal to the value of `threshold`, which is inconvenient for a unit test.
@@ -893,18 +894,20 @@ class UtilsSuite extends SparkFunSuite with ResetSystemProperties with Logging {
     for {
       _ <- 0 until numTrials
       trial = Utils.randomizeInPlace(range.toArray, rand)
Member:

I think this ends up being a little hard to grok. Just do two nested loops

Member:

I'm not sure if it's just me but I find this even harder to understand.

for (_ <- 0 until numTrials) {
  val trial = Utils.randomizeInPlace(range.toArray, rand)
  for (i <- range) {
    results(i)(trial(i)) += 1L
  }
}

?

Contributor:

@srowen IMHO, @nicklavers's original for comprehension follows a common and well-known Scala idiom. In my mind, it's simpler and easier to understand than a nested loop.

Member:

OK, I'm not against it, esp. if nobody else speaks up otherwise.

Member:

Hm, on second thought, perhaps your original version was easier to read than this chained form of nested loops.

I've actually never seen this type of expression, even in Scala, so I'm not sure I'd call it well-known. I'm having trouble getting past the nested assignment mixed in with loop indices... aren't you technically generating a tuple at each iteration of each loop this way, when the 'product' of each loop is conceptually just 0-1 values? I desugared it to check, and that seems to be true. And everything but the body is in braces.

Digression: the version I suggested is certainly more like Java/C++/C#, and it's great that the equivalent is possible in Scala too; that has some limited value to readers. Lots of things are possible in Scala, and some are clearly more compact, and therefore more readable and less error-prone, and should be used. But I think this is just a departure from a standard expression for its own sake, using syntax merely because Scala allows it. Lots of things can be written in a complicated way in Scala.

It's also not consistent with how the Spark code base is written.

I know this is a minor digression, but such digressions are sometimes worthwhile. I'd favor some kind of "compromise" solution like your original version, which felt a little more like the rest of the code base. I'd prefer a conventional loop construct like the rest of the code, but I don't feel strongly about that.
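
To make the desugaring point in the comment above concrete, here is a hand-expanded sketch of how a chained comprehension of this shape expands under the Scala 2.x for-comprehension rules. It is not literal compiler output, and outer, inner, f, and g are placeholder names rather than anything from the Spark test:

object DesugarShape extends App {
  val outer = 0 until 3
  val inner = 0 until 2
  def f(): Int = 42
  def g(i: Int): Int = i * 10

  // for { _ <- outer; x = f() } for { i <- inner; j = g(i) } body
  // expands roughly as follows:
  outer
    .map { n => val x = f(); (n, x) }      // a tuple is allocated per outer iteration
    .foreach { case (_, x) =>
      inner
        .map { i => val j = g(i); (i, j) } // and another per inner iteration
        .foreach { case (i, j) => println(s"x=$x i=$i j=$j") } // the loop body
    }
}

Each `x = expr` clause makes the preceding generator yield a tuple that is immediately pattern-matched apart again, which is the per-iteration allocation the comment above refers to.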

     } for {
       i <- range
-    } results(i)(trial(i)) += 1L
+      j = trial(i)
+    } results(i)(j) += 1L

-    val chi = new org.apache.commons.math3.stat.inference.ChiSquareTest()
+    val chi = new ChiSquareTest()

     // We expect an even distribution; this array will be rescaled by `chiSquareTest`
-    val expected: Array[Double] = Array.fill(arraySize * arraySize)(1.0)
-    val observed: Array[Long] = results.flatMap(x => x)
+    val expected = Array.fill(arraySize * arraySize)(1.0)
+    val observed = results.flatten

     // Performs Pearson's chi-squared test. Using the sum-of-squares as the test statistic gives
     // the probability of a uniform distribution producing results as extreme as `observed`
-    val pValue: Double = chi.chiSquareTest(expected, observed)
+    val pValue = chi.chiSquareTest(expected, observed)

     assert(pValue > threshold)
   }
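
For reference, the same uniformity check can be run outside Spark. Below is a minimal self-contained sketch under two stated assumptions: a seeded Fisher-Yates shuffle stands in for Spark's Utils.randomizeInPlace (whose implementation is not shown in this diff), and commons-math3 is on the classpath:

import java.util.Random

import org.apache.commons.math3.stat.inference.ChiSquareTest

object ShuffleUniformitySketch extends App {
  val arraySize = 10
  val numTrials = 1000
  val threshold = 0.05
  val rand = new Random(1L) // seeded, for the reason given in the test's comment
  val range = 0 until arraySize

  // results(i)(j): how many times the shuffle moves an element from position j to position i
  val results = Array.ofDim[Long](arraySize, arraySize)

  // Stand-in shuffle: a standard seeded Fisher-Yates pass.
  def shuffleInPlace(arr: Array[Int], r: Random): Array[Int] = {
    for (i <- arr.length - 1 until 0 by -1) {
      val k = r.nextInt(i + 1)
      val tmp = arr(i); arr(i) = arr(k); arr(k) = tmp
    }
    arr
  }

  for (_ <- 0 until numTrials) {
    val trial = shuffleInPlace(range.toArray, rand)
    for (i <- range) results(i)(trial(i)) += 1L
  }

  // A fair shuffle makes every (from, to) cell equally likely, so the expected
  // counts are uniform; chiSquareTest rescales `expected` to the observed total.
  val expected = Array.fill(arraySize * arraySize)(1.0)
  val observed = results.flatten
  val pValue = new ChiSquareTest().chiSquareTest(expected, observed)

  assert(pValue > threshold, s"p-value $pValue is consistent with a biased shuffle")
  println(s"p-value = $pValue")
}

The seed matters because even a perfectly fair shuffle fails the assertion with probability roughly equal to `threshold`, so an unseeded run would be flaky.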