Closed
Changes from 1 commit
46 commits
1bbd48c
First Commit of LSH function implementation. Implement basic Estimato…
Yunni Sep 13, 2016
ca46d82
Implementation of Approximate Nearest Neighbors. Add distCol as anoth…
Yunni Sep 13, 2016
c693f5b
Implement approxSimilarityJoin(). Remove modelDataset and distCol as …
Yunni Sep 15, 2016
c9ee0f9
Add test utility method to check LSH property. Tested on random proje…
Yunni Sep 19, 2016
fc838e0
Add testing utility for approximate nearest neighbor. Run the testing…
Yunni Sep 19, 2016
aa138e8
Add testing utility for approximate similarity join. Run the testing …
Yunni Sep 19, 2016
bbcbcf0
Code review comments. A new unit test of k nearest neighbor for large k
Sep 19, 2016
d389159
Code review comments. A new unit test of k nearest neighbor for large k
Sep 19, 2016
19d012a
(1) Refactor hashDistCol for nearest neighbor search. (2) Add scalado…
Sep 19, 2016
269c8c9
Code Review comments: (1) Rewrite hashDistance (2) Move the lsh packa…
Yunni Sep 20, 2016
9065f7d
Add comment to clarify the implementation of RandomProjection
Yunni Sep 20, 2016
d22dff4
Implementation of MinHash with unit tests
Yunni Sep 26, 2016
7e6d938
Add options for Probing Single/Multiple bucket(s) in approxNearestNei…
Yunni Sep 26, 2016
0fad3ef
Allow users to transform datasets themselves before doing approxNeare…
Yunni Sep 26, 2016
0080b87
Generalize Input types to Vector. For MinHash, use Sparse Vectors to …
Yunni Sep 28, 2016
a1c344b
Code Review Comments
Yunni Sep 28, 2016
396ad60
Bug fixed. Typo of distCol
Yunni Sep 28, 2016
b79ebbd
Fix Jenkins Build. Explicitly annotate type of modelDataset
Yunni Sep 28, 2016
7936315
Move all code to org.apache.spark.ml.feature
Yunni Sep 28, 2016
f805658
Tune threshold for approxNearestNeighbors unit tests
Yunni Sep 28, 2016
8f04ee8
Fix import ordering
Yunni Sep 28, 2016
f82f3fe
Add scaladoc for overloaded methods
Yunni Sep 28, 2016
ccd98f7
Code review comments
Yunni Oct 4, 2016
69efc84
Move private[ml] to MinHash constructor
Yunni Oct 4, 2016
eced98d
Detailed doc on bucketLength. Move private[ml] to Model constructor
Yunni Oct 4, 2016
3487bcc
Tune threshold for MinHash
Oct 4, 2016
df19886
Code review comments
Oct 5, 2016
efe323c
Code Review Comments
Yunni Oct 10, 2016
142d8e9
Revert unrelated changes
Yunni Oct 10, 2016
40d1f1b
Code review comments for MinHash: (1) Compute unionSize based on setS…
Oct 10, 2016
2c95e5c
Code review comments
Yunni Oct 11, 2016
fb120af
SignRandomProjection: LSH Classes for cosine distance metrics
Oct 11, 2016
19f6d89
Change hashFunctions to Arrays
Oct 11, 2016
1b63173
BitSampling: LSH Class for Hamming Distance
Oct 12, 2016
126d980
Merge remote-tracking branch 'upstream/master' into SPARK-5992-yunn-lsh
Yunni Oct 13, 2016
a35e261
Move distinct() before calculating the distance to improve running time
Yunni Oct 13, 2016
66d553a
For similarity join, expose leftCol and rightCol as parameters
Yunni Oct 17, 2016
cad4ecb
Code Review comments: (1) Save BitSampling and SignRandomProjection f…
Yunni Oct 22, 2016
e14f73e
(1) Reset all random seed != 0 (2) Add docstring about the output sch…
Yunni Oct 23, 2016
1c4b9fb
(1) Add readers/writers (2) Change unit tests thresholds to more rebo…
Oct 27, 2016
20a9ebf
Change a few Since annotations
Oct 27, 2016
9bb3fd6
Code Review Comments: (1) Remove all Since in LSH (2) Add doc on hash…
Oct 27, 2016
9a3704c
Organize the scaladoc
Oct 27, 2016
6cda936
Remove default values for outputCol
Oct 28, 2016
97e1238
Remove default values for outputCol
Oct 28, 2016
3570845
Add more scaladoc
Oct 28, 2016
Code Review Comments: (1) Remove all Since in LSH (2) Add doc on hash functions in Min Hash
Yun Ni committed Oct 27, 2016
commit 9bb3fd607519d245f72afedf95def63e0e7400a7
20 changes: 0 additions & 20 deletions mllib/src/main/scala/org/apache/spark/ml/feature/LSH.scala
Expand Up @@ -19,7 +19,6 @@ package org.apache.spark.ml.feature

import scala.util.Random

import org.apache.spark.annotation.Since
import org.apache.spark.ml.{Estimator, Model}
import org.apache.spark.ml.linalg.{Vector, VectorUDT}
import org.apache.spark.ml.param.{IntParam, ParamValidators}
Expand All @@ -40,13 +39,11 @@ private[ml] trait LSHParams extends HasInputCol with HasOutputCol {
* higher the dimension is, the lower the false negative rate.
* @group param
*/
@Since("2.1.0")
final val outputDim: IntParam = new IntParam(this, "outputDim", "output dimension, where" +
"increasing dimensionality lowers the false negative rate, and decreasing dimensionality" +
Member:

Does increasing dimensionality lower the false negative rate?
I think increasing dimensionality should lower false positive rate, right?

Contributor Author:

No. Since we are implementing OR-amplification, increasing dimensionality lowers the false negative rate.

In AND-amplification, increasing dimensionality will lower the false positive rate.
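The distinction in the exchange above can be shown with a small self-contained sketch (hypothetical helper names, not the API introduced in this PR): under OR-amplification two points become candidates if any of their hash values match, so adding hash functions can only help true neighbors collide (fewer false negatives); under AND-amplification all values must match, which instead suppresses false positives.

```scala
// Sketch of OR- vs AND-amplification over d independent LSH hash values.
// Hypothetical helper names; not the classes added in this PR.
object AmplificationSketch {
  // OR-amplification: candidate pair if ANY hash value agrees
  def orCollides(hx: Array[Int], hy: Array[Int]): Boolean =
    hx.zip(hy).exists { case (a, b) => a == b }

  // AND-amplification: candidate pair only if ALL hash values agree
  def andCollides(hx: Array[Int], hy: Array[Int]): Boolean =
    hx.zip(hy).forall { case (a, b) => a == b }

  def main(args: Array[String]): Unit = {
    val x = Array(1, 7, 3)
    val y = Array(1, 2, 9) // shares only the first hash value with x
    println(orCollides(x, y))  // one matching value suffices
    println(andCollides(x, y)) // every value would have to match
  }
}
```

Adding a dimension gives `orCollides` one more chance to return true, which is why the author ties `outputDim` to the false negative rate here.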

" improves the running performance", ParamValidators.gt(0))

/** @group getParam */
@Since("2.1.0")
final def getOutputDim: Int = $(outputDim)

setDefault(outputDim -> 1, outputCol -> "lshFeatures")
Member:

Sorry to switch back on this too, but I realized no other transformers have default outputCol values, so I'd remove the default and make the user set outputCol explicitly.

Contributor Author:

Removed. Sorry for missing this.

Expand All @@ -56,7 +53,6 @@ private[ml] trait LSHParams extends HasInputCol with HasOutputCol {
* @param schema The schema of the input dataset without [[outputCol]]
* @return A derived schema with [[outputCol]] added
*/
@Since("2.1.0")
protected[this] final def validateAndTransformSchema(schema: StructType): StructType = {
SchemaUtils.appendColumn(schema, $(outputCol), new VectorUDT)
Member:

The inputCol cannot be checked here since its type may be algorithm-dependent, but it should be checked in transformSchema or a similar validateAndTransformSchema in the MinHash and RP algorithms below.

Contributor Author:

Sorry, I did not get it, there is no check for inputCol here.

Member:

I meant that transformSchema should validate that inputCol has the correct DataType. That can be done by putting a line in each algorithm's transformSchema.

Contributor Author:

I see. I will add that.

}
Expand All @@ -73,7 +69,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* The hash function of LSH, mapping a predefined KeyType to a Vector
* @return The mapping of LSH function.
*/
@Since("2.1.0")
protected[ml] val hashFunction: Vector => Vector

/**
Expand All @@ -83,7 +78,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* @param y One input vector in the metric space
* @return The distance between x and y
*/
@Since("2.1.0")
protected[ml] def keyDistance(x: Vector, y: Vector): Double

/**
Expand All @@ -93,17 +87,14 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* @param y Another hash vector
* @return The distance between hash vectors x and y
*/
@Since("2.1.0")
protected[ml] def hashDistance(x: Vector, y: Vector): Double
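To make the two abstract distances concrete, a toy (non-Spark) illustration might pair a Euclidean `keyDistance` on the original vectors with a `hashDistance` on their signatures, e.g. the fraction of hash entries that disagree. The formulas are illustrative choices, not necessarily what the PR's subclasses implement.

```scala
// Toy illustration (not the PR's classes) of the two distance roles:
// keyDistance compares points in the original metric space, while
// hashDistance compares their hash vectors.
object DistanceSketch {
  // Euclidean distance in the input space
  def keyDistance(x: Array[Double], y: Array[Double]): Double =
    math.sqrt(x.zip(y).map { case (a, b) => (a - b) * (a - b) }.sum)

  // Fraction of hash entries on which the two signatures disagree
  def hashDistance(hx: Array[Int], hy: Array[Int]): Double =
    hx.zip(hy).count { case (a, b) => a != b }.toDouble / hx.length

  def main(args: Array[String]): Unit = {
    println(keyDistance(Array(0.0, 0.0), Array(3.0, 4.0)))      // 5.0
    println(hashDistance(Array(1, 2, 3, 4), Array(1, 2, 9, 9))) // 0.5
  }
}
```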

@Since("2.1.0")
override def transform(dataset: Dataset[_]): DataFrame = {
Member:

No need to copy documentation for overridden methods, unless the docs are specialized for this class

Contributor Author:

Done.

transformSchema(dataset.schema, logging = true)
val transformUDF = udf(hashFunction, new VectorUDT)
dataset.withColumn($(outputCol), transformUDF(dataset($(inputCol))))
}

@Since("2.1.0")
override def transformSchema(schema: StructType): StructType = {
validateAndTransformSchema(schema)
}
Expand All @@ -126,7 +117,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* @return A dataset containing at most k items closest to the key. A distCol is added to show
* the distance between each row and the key.
*/
@Since("2.1.0")
def approxNearestNeighbors(
dataset: Dataset[_],
key: Vector,
Expand Down Expand Up @@ -168,7 +158,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* Overloaded method for approxNearestNeighbors. Use Single Probing as default way to search
* nearest neighbors and "distCol" as default distCol.
*/
@Since("2.1.0")
def approxNearestNeighbors(
dataset: Dataset[_],
key: Vector,
Expand All @@ -185,7 +174,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* @param explodeCols The alias for the exploded columns, must be a seq of two strings.
* @return A dataset containing idCol, inputCol and explodeCols
*/
@Since("2.1.0")
private[this] def processDataset(
dataset: Dataset[_],
inputName: String,
Expand All @@ -211,7 +199,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* @param tmpColName A temporary column name which does not conflict with existing columns
* @return
*/
@Since("2.1.0")
private[this] def recreateCol(
dataset: Dataset[_],
colName: String,
Expand All @@ -235,7 +222,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
* @return A joined dataset containing pairs of rows. The original rows are in columns
* "datasetA" and "datasetB", and a distCol is added to show the distance of each pair
*/
@Since("2.1.0")
def approxSimilarityJoin(
Member:

This too should document that it transforms data if needed, just like approxNearestNeighbors.

Contributor Author:

Done.

datasetA: Dataset[_],
datasetB: Dataset[_],
Expand Down Expand Up @@ -273,7 +259,6 @@ private[ml] abstract class LSHModel[T <: LSHModel[T]]
/**
* Overloaded method for approxSimilarityJoin. Use "distCol" as default distCol.
*/
@Since("2.1.0")
def approxSimilarityJoin(
Member:

The default distCol needs to be documented

Contributor Author:

Scaladoc added.

datasetA: Dataset[_],
datasetB: Dataset[_],
Expand Down Expand Up @@ -302,15 +287,12 @@ private[ml] abstract class LSH[T <: LSHModel[T]]
self: Estimator[T] =>

/** @group setParam */
@Since("2.1.0")
def setInputCol(value: String): this.type = set(inputCol, value)

/** @group setParam */
@Since("2.1.0")
def setOutputCol(value: String): this.type = set(outputCol, value)

/** @group setParam */
@Since("2.1.0")
def setOutputDim(value: Int): this.type = set(outputDim, value)

/**
Expand All @@ -320,10 +302,8 @@ private[ml] abstract class LSH[T <: LSHModel[T]]
* @param inputDim The dimension of the input dataset
* @return A new LSHModel instance without any params
*/
@Since("2.1.0")
protected[this] def createRawLSHModel(inputDim: Int): T

@Since("2.1.0")
override def fit(dataset: Dataset[_]): T = {
transformSchema(dataset.schema, logging = true)
val inputDim = dataset.select(col($(inputCol))).head().get(0).asInstanceOf[Vector].size
Member (@jkbradley, Oct 4, 2016):

get[Vector](0)

Member:

I'd call transformSchema here before extracting inputDim

Contributor Author:

Done.

Expand Down
12 changes: 9 additions & 3 deletions mllib/src/main/scala/org/apache/spark/ml/feature/MinHash.scala
Expand Up @@ -30,7 +30,14 @@ import org.apache.spark.sql.types.StructType

/**
* :: Experimental ::
Member:

No need to mark a private class Experimental

Contributor Author:

Removed.

* Model produced by [[MinHash]]
* Model produced by [[MinHash]], where multiple hash functions are stored. Each hash function is
* a perfect hash function:
* g_i(x) = (x * k_i mod prime) mod numEntries
 * where k_i is the i-th random coefficient
*
* Reference:
* https://en.wikipedia.org/wiki/Perfect_hash_function
*
* @param numEntries The number of entries of the hash functions.
Member:

The doc could be clearer here. If I did not read the code, I might not know what "entries" are.

One good way to explain this would be to just state what the hash function is here.

* @param randCoefficients An array of random coefficients, each used by one hash function.
*/
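The hash family documented above can be sketched standalone (illustrative `prime` and coefficients, not the values in `MinHash`): each perfect hash g_i(x) = (x * k_i mod prime) mod numEntries is applied to every active index of the input set, and the MinHash value is the minimum.

```scala
// Self-contained sketch of the documented MinHash family. The prime and the
// example coefficients are illustrative, not the PR's constants.
object MinHashSketch {
  val prime: Long = 2038074743L

  // MinHash value of a set of active indices under one random coefficient k:
  // min over active indices i of (i * k mod prime) mod numEntries
  def minHash(activeIndices: Seq[Int], numEntries: Int, k: Long): Long =
    activeIndices.map(i => (i * k % prime) % numEntries).min

  def main(args: Array[String]): Unit = {
    val setA = Seq(1, 3, 7)
    val setB = Seq(1, 3, 8)        // differs from setA in one element
    val coeffs = Seq(3L, 17L, 29L) // illustrative random coefficients
    val sigA = coeffs.map(minHash(setA, numEntries = 16, _))
    val sigB = coeffs.map(minHash(setB, numEntries = 16, _))
    // Similar sets tend to agree on many signature entries
    println(sigA.zip(sigB).count { case (a, b) => a == b })
  }
}
```

The probability that two sets agree on one MinHash entry approximates their Jaccard similarity, which is what makes the signature usable for approximate joins.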
Expand Down Expand Up @@ -117,7 +124,7 @@ class MinHash(override val uid: String) extends LSH[MinHashModel] with HasSeed {
@Since("2.1.0")
override protected[ml] def createRawLSHModel(inputDim: Int): MinHashModel = {
require(inputDim <= MinHash.prime / 2,
"The input vector dimension is too large for MinHash to handle.")
s"The input vector dimension $inputDim exceeds the threshold ${MinHash.prime / 2}.")
val rand = new Random($(seed))
val numEntry = inputDim * 2
Member:

This could overflow. Use inputDim < prime / 2 + 1 instead?

Contributor Author:

Done.

val randCoofs: Array[Int] = Array.fill($(outputDim))(1 + rand.nextInt(MinHash.prime - 1))
Expand Down Expand Up @@ -158,7 +165,6 @@ object MinHashModel extends MLReadable[MinHashModel] {

override protected def saveImpl(path: String): Unit = {
DefaultParamsWriter.saveMetadata(instance, path, sc)
// Save model data: pi, theta
val data = Data(instance.numEntries, instance.randCoefficients)
val dataPath = new Path(path, "data").toString
sparkSession.createDataFrame(Seq(data)).repartition(1).write.parquet(dataPath)
Expand Down
mllib/src/main/scala/org/apache/spark/ml/feature/RandomProjection.scala
Expand Up @@ -44,13 +44,11 @@ private[ml] trait RandomProjectionParams extends Params {
* reasonable value
* @group param
*/
@Since("2.1.0")
val bucketLength: DoubleParam = new DoubleParam(this, "bucketLength",
Member:

Add Scala doc for bucketLength. Some guidance on reasonable value ranges would be good. E.g., "If input vectors have unit norm, then ...."

In doc, put bucketLength in @group param.

Contributor Author:

Done.

"the length of each hash bucket, a larger bucket lowers the false negative rate.",
ParamValidators.gt(0))

/** @group getParam */
@Since("2.1.0")
final def getBucketLength: Double = $(bucketLength)
}
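For context on `bucketLength`, a minimal sketch of a single bucketed-projection hash (assumed form h(x) = floor((x · v) / bucketLength) for a random unit vector v; names and values are illustrative, not the PR's implementation): a larger bucket makes nearby points more likely to land in the same bucket, which is why the doc ties it to the false negative rate.

```scala
// Sketch of one bucketed-random-projection hash (illustrative only):
// project onto a fixed unit vector v, then quantize by bucketLength.
object RandomProjectionSketch {
  def dot(x: Array[Double], v: Array[Double]): Double =
    x.zip(v).map { case (a, b) => a * b }.sum

  def hash(x: Array[Double], v: Array[Double], bucketLength: Double): Long =
    math.floor(dot(x, v) / bucketLength).toLong

  def main(args: Array[String]): Unit = {
    val v = Array(0.6, 0.8)   // a unit vector
    val p = Array(1.0, 1.0)
    val q = Array(1.1, 1.0)   // a point close to p
    // With a large enough bucket the neighbors share a hash value
    println(hash(p, v, bucketLength = 1.0) == hash(q, v, bucketLength = 1.0))
  }
}
```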

Expand Down Expand Up @@ -180,7 +178,6 @@ object RandomProjectionModel extends MLReadable[RandomProjectionModel] {

override protected def saveImpl(path: String): Unit = {
DefaultParamsWriter.saveMetadata(instance, path, sc)
// Save model data: pi, theta
val numRows = instance.randUnitVectors.length
require(numRows > 0)
val numCols = instance.randUnitVectors.head.size
Expand Down