2 changes: 1 addition & 1 deletion core/src/main/scala/org/apache/spark/util/Utils.scala
@@ -824,7 +824,7 @@ private[spark] object Utils extends Logging {
    */
   def randomizeInPlace[T](arr: Array[T], rand: Random = new Random): Array[T] = {
     for (i <- (arr.length - 1) to 1 by -1) {
-      val j = rand.nextInt(i)
+      val j = rand.nextInt(i + 1)
       val tmp = arr(j)
       arr(j) = arr(i)
       arr(i) = tmp
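
For context on the one-character fix above, here is a minimal standalone sketch (illustrative only, not part of this patch) of why `nextInt(i + 1)` matters: with `nextInt(i)` the element at index `i` always swaps with a strictly earlier index, so no element can ever end up back where it started (this is Sattolo's algorithm, which only generates full-length cycles), while `nextInt(i + 1)` gives the unbiased Fisher-Yates shuffle.

import scala.util.Random

// Illustrative re-implementation of both variants: shift = 0 mimics the old
// rand.nextInt(i) call, shift = 1 mimics the fixed rand.nextInt(i + 1).
def shuffle(arr: Array[Int], rand: Random, shift: Int): Array[Int] = {
  for (i <- (arr.length - 1) to 1 by -1) {
    val j = rand.nextInt(i + shift)
    val tmp = arr(j)
    arr(j) = arr(i)
    arr(i) = tmp
  }
  arr
}

val rand = new Random(1L)
def lastStaysPut(shift: Int): Int =
  (1 to 1000).count(_ => shuffle(Array.range(0, 10), rand, shift).last == 9)

println(lastStaysPut(0))  // always 0: with nextInt(i) the last element can never stay in place
println(lastStaysPut(1))  // around 100: with nextInt(i + 1) it stays put about 1 time in 10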
34 changes: 34 additions & 0 deletions core/src/test/scala/org/apache/spark/util/UtilsSuite.scala
@@ -874,4 +874,38 @@ class UtilsSuite extends SparkFunSuite with ResetSystemProperties with Logging {
}
}
}

test("chi square test of randomizeInPlace") {
// Parameters
val arraySize = 10
val numTrials = 1000
val threshold = 0.05
val seed = 1L

// results[i][j]: how many times Utils.randomize moves an element from position j to position i
val results: Array[Array[Long]] = Array.ofDim(arraySize, arraySize)
Member: Some minor style things -- just omit the type here

    // This must be seeded because even a fair random process will fail this test with
    // probability equal to the value of `threshold`, which is inconvenient for a unit test.
    val rand = new java.util.Random(seed)
Member: import java.util.Random

Contributor Author: scala.util.Random is already imported, but Utils.randomizeInPlace requires a java.util.Random. I'm not sure what the right approach is here

Member: Ah right, never mind me.

    val range = 0 until arraySize

    for {
      _ <- 0 until numTrials
      trial = Utils.randomizeInPlace(range.toArray, rand)
Member: I think this ends up being a little hard to grok. Just do two nested loops

Member: I'm not sure if it's just me but I find this even harder to understand.

for (_ <- 0 until numTrials) {
  val trial = Utils.randomizeInPlace(range.toArray, rand)
  for (i <- range) {
    results(i)(trial(i)) += 1L
  }
}

?

Contributor: @srowen IMHO, @nicklavers's original for comprehension follows a common and well-known Scala idiom. In my mind, it's simpler and easier to understand than a nested loop.

Member: OK, I'm not against it, esp. if nobody else speaks up otherwise.

Member: Hm, on second thought, perhaps your original version was easier to read than this chained form of nested loops.

I've actually never seen this type of expression even in Scala. I'm not sure I'd call this well-known. I'm having trouble getting into the nested assignment mixed in with loop indices... aren't you technically generating a tuple at each iteration of each loop this way, when the 'product' of each loop is conceptually just 0-1 values? I desugared it to see and that seems true. And everything but the body is in braces.

Digression: the version I suggested is certainly more like Java/C++/C#, and it's great that it's possible in Scala too. That has some limited value to readers. Lots of stuff is possible in Scala and some is obviously more compact, and therefore readable and less error-prone, and should be used. I think this is just difference from a standard expression for its own sake, to use syntax because it's merely possible in Scala. Lots of things can be written in a complicated way in Scala.

It's also not consistent with how the Spark code base is written.

I know it's a minor digression but sometimes worthwhile. I'd favor some kind of "compromise" solution like your original version, which felt a little more like the rest of the code base. I'd prefer a conventional loop construct like the rest of the code, but don't feel strongly about that.

      i <- range
    } results(i)(trial(i)) += 1L
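
To make the tuple question from the thread above concrete, the comprehension desugars roughly as follows (a rough sketch, not exact compiler output; the synthetic names differ). The mid-stream `trial = ...` definition does introduce a tuple per outer iteration before the inner loop runs:

// Rough desugaring of the for comprehension above
(0 until numTrials)
  .map { n =>
    val trial = Utils.randomizeInPlace(range.toArray, rand)
    (n, trial)  // the value definition in the comprehension produces this tuple
  }
  .foreach { case (_, trial) =>
    range.foreach(i => results(i)(trial(i)) += 1L)
  }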

    val chi = new org.apache.commons.math3.stat.inference.ChiSquareTest()
Member: import; remove types below

    // We expect an even distribution; this array will be rescaled by `chiSquareTest`
    val expected: Array[Double] = Array.fill(arraySize * arraySize)(1.0)
    val observed: Array[Long] = results.flatMap(x => x)
Member: flatten?

    // Performs Pearson's chi-squared test. Using the sum-of-squares as the test statistic, this
    // gives the probability of a uniform distribution producing results as extreme as `observed`.
    val pValue: Double = chi.chiSquareTest(expected, observed)

    assert(pValue > threshold)
  }
}
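
For anyone double-checking the commons-math3 call, a toy sketch of `chiSquareTest(expected, observed)` outside the suite (the counts below are made up): the expected array is rescaled to the total of the observed counts, so an all-ones expected array encodes a uniform null hypothesis, and the return value is the p-value that the assert above compares against `threshold`.

import org.apache.commons.math3.stat.inference.ChiSquareTest

val chi = new ChiSquareTest()
val expected = Array.fill(4)(1.0)           // uniform null hypothesis over 4 cells
val observed = Array[Long](24, 26, 25, 25)  // made-up counts, close to uniform
println(chi.chiSquareTest(expected, observed))  // large p-value: no evidence against uniformity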