[SPARK-20114][ML] spark.ml parity for sequential pattern mining - PrefixSpan #20973
Changes from 1 commit
```diff
@@ -21,8 +21,7 @@ import org.apache.spark.annotation.{Experimental, Since}
 import org.apache.spark.mllib.fpm.{PrefixSpan => mllibPrefixSpan}
 import org.apache.spark.sql.{DataFrame, Dataset, Row}
 import org.apache.spark.sql.functions.col
-import org.apache.spark.sql.types.{LongType, StructField, StructType}
-import org.apache.spark.storage.StorageLevel
+import org.apache.spark.sql.types.{ArrayType, LongType, StructField, StructType}

 /**
  * :: Experimental ::
```
```diff
@@ -44,26 +43,37 @@ object PrefixSpan {
    *
    * @param dataset A dataset or a dataframe containing a sequence column which is
    *                {{{Seq[Seq[_]]}}} type
-   * @param sequenceCol the name of the sequence column in dataset
+   * @param sequenceCol the name of the sequence column in dataset, rows with nulls in this column
+   *                    are ignored
    * @param minSupport the minimal support level of the sequential pattern, any pattern that
    *                   appears more than (minSupport * size-of-the-dataset) times will be output
-   *                   (default: `0.1`).
-   * @param maxPatternLength the maximal length of the sequential pattern, any pattern that appears
-   *                         less than maxPatternLength will be output (default: `10`).
+   *                   (recommended value: `0.1`).
+   * @param maxPatternLength the maximal length of the sequential pattern
+   *                         (recommended value: `10`).
    * @param maxLocalProjDBSize The maximum number of items (including delimiters used in the
    *                           internal storage format) allowed in a projected database before
    *                           local processing. If a projected database exceeds this size, another
-   *                           iteration of distributed prefix growth is run (default: `32000000`).
-   * @return A dataframe that contains columns of sequence and corresponding frequency.
+   *                           iteration of distributed prefix growth is run
+   *                           (recommended value: `32000000`).
+   * @return A `DataFrame` that contains columns of sequence and corresponding frequency.
+   *         The schema of it will be:
+   *          - `sequence: Seq[Seq[T]]` (T is the item type)
+   *          - `frequency: Long`
    */
  @Since("2.4.0")
-  def findFrequentSequentPatterns(
+  def findFrequentSequentialPatterns(
      dataset: Dataset[_],
      sequenceCol: String,
```
**Contributor:**
@WeichenXu123 @jkbradley The static method doesn't scale with parameters. If we add a new param, we have to keep the old one for binary compatibility. Why not use setters? I think we only need to avoid using

**Contributor (Author):**
I agree with using setters. @jkbradley What do you think of it?
**Member:**
I agree in general, but I don't think it's a big deal for PrefixSpan. I think of our current static method as a temporary workaround until we do the work to build a Model which can make meaningful predictions. This will mean that further PrefixSpan improvements may be blocked on this Model work, but I think that's OK since predictions should be the next priority for PrefixSpan. Once we have a Model, I recommend we deprecate the current static method. I'm also OK with changing this to use setters, but then we should name it something else so that we can replace it with an Estimator + Model pair later on. I'd suggest `PrefixSpanBuilder`.
**Contributor:**
It should be easier to keep the

```scala
final class PrefixSpan(override val uid: String) extends Params {
  // param, setters, getters
  def findFrequentSequentialPatterns(dataset: Dataset[_]): DataFrame
}
```

Later we can add
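The setter-based shape proposed above can be sketched in plain Scala with no Spark dependency. This is a hypothetical illustration only — `PrefixSpanBuilder`, its fields, and the `params` accessor are invented names, not Spark's API; the point is that parameters live on the instance, so new ones can be added later without breaking binary compatibility of a method signature.

```scala
// Hypothetical sketch of a setter-based PrefixSpan API (illustrative names
// and defaults, not Spark's actual classes or signatures).
class PrefixSpanBuilder {
  private var minSupport: Double = 0.1
  private var maxPatternLength: Int = 10
  private var maxLocalProjDBSize: Long = 32000000L

  // Each setter returns `this` so calls can be chained fluently.
  def setMinSupport(value: Double): this.type = { minSupport = value; this }
  def setMaxPatternLength(value: Int): this.type = { maxPatternLength = value; this }
  def setMaxLocalProjDBSize(value: Long): this.type = { maxLocalProjDBSize = value; this }

  // Expose current parameter values (for illustration/testing only).
  def params: (Double, Int, Long) = (minSupport, maxPatternLength, maxLocalProjDBSize)
}

// Adding a new parameter later means adding one more field and setter,
// rather than a new overload of a static method.
val builder = new PrefixSpanBuilder().setMinSupport(0.2).setMaxPatternLength(5)
```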
**Contributor (Author):**
this way

**Contributor:**
Adding

**Member:**
Oh, I think you're right @mengxr. That approach sounds good.

**Member:**
@WeichenXu123 Do you have time to send a PR to update this API?

**Contributor (Author):**
Sure. Will update soon!
```diff
-      minSupport: Double = 0.1,
-      maxPatternLength: Int = 10,
-      maxLocalProjDBSize: Long = 32000000L): DataFrame = {
-    val handlePersistence = dataset.storageLevel == StorageLevel.NONE
+      minSupport: Double,
+      maxPatternLength: Int,
+      maxLocalProjDBSize: Long): DataFrame = {
+
+    val inputType = dataset.schema(sequenceCol).dataType
+    require(inputType.isInstanceOf[ArrayType] &&
+      inputType.asInstanceOf[ArrayType].elementType.isInstanceOf[ArrayType],
+      s"The input column must be ArrayType and the array element type must also be ArrayType, " +
+      s"but got $inputType.")

     val data = dataset.select(sequenceCol)
     val sequences = data.where(col(sequenceCol).isNotNull).rdd
```
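As a toy illustration of the null handling the updated scaladoc describes ("rows with nulls in this column are ignored"), here is a plain-Scala sketch — not Spark code — where each element stands in for one row's sequence-column value and null rows are dropped, mirroring the `where(col(sequenceCol).isNotNull)` filter:

```scala
// Stand-in for a sequence column of type Seq[Seq[item]]: one element per
// row, with one row carrying a null sequence.
val rows: Seq[Seq[Seq[String]]] = Seq(
  Seq(Seq("a"), Seq("a", "b")),
  null,
  Seq(Seq("b"))
)

// Analogous to data.where(col(sequenceCol).isNotNull): null rows are
// ignored rather than failing the mining job.
val sequences = rows.filter(_ != null)
```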
|
|
```diff
@@ -73,18 +83,13 @@ object PrefixSpan {
      .setMinSupport(minSupport)
      .setMaxPatternLength(maxPatternLength)
      .setMaxLocalProjDBSize(maxLocalProjDBSize)
-    if (handlePersistence) {
-      sequences.persist(StorageLevel.MEMORY_AND_DISK)
-    }
     val rows = mllibPrefixSpan.run(sequences).freqSequences.map(f => Row(f.sequence, f.freq))
     val schema = StructType(Seq(
       StructField("sequence", dataset.schema(sequenceCol).dataType, nullable = false),
-      StructField("freq", LongType, nullable = false)))
+      StructField("frequency", LongType, nullable = false)))
     val freqSequences = dataset.sparkSession.createDataFrame(rows, schema)
-    if (handlePersistence) {
-      sequences.unpersist()
-    }
     freqSequences
   }
```
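To make the `minSupport` parameter concrete, the following plain-Scala sketch illustrates the support arithmetic only — it is not the PrefixSpan algorithm and not Spark code. A single-item pattern is supported by a sequence if the item occurs anywhere in it (counted at most once per sequence), and is kept when its count reaches `minSupport * numSequences`; the exact boundary (strict vs. non-strict) is an assumption here and follows whatever the mllib implementation does.

```scala
// Toy database: three sequences of itemsets, as in the Seq[Seq[_]] column.
val db: Seq[Seq[Seq[String]]] = Seq(
  Seq(Seq("a"), Seq("a", "b")),   // supports items a, b
  Seq(Seq("a"), Seq("c")),        // supports items a, c
  Seq(Seq("b"))                   // supports item b
)

val minSupport = 0.5
// Threshold in absolute counts: here ceil(0.5 * 3) = 2 sequences.
val minCount = math.ceil(minSupport * db.size).toInt

// Count in how many sequences each item occurs (deduplicated per sequence).
val support: Map[String, Int] = db
  .flatMap(_.flatten.distinct)
  .groupBy(identity)
  .map { case (item, occs) => item -> occs.size }

// Keep only items meeting the support threshold; "c" occurs in one
// sequence out of three and is dropped.
val frequent = support.filter { case (_, count) => count >= minCount }
```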
**Member:**
I had asked for this change to "frequency" from "freq," but I belatedly realized that this conflicts with the existing FPGrowth API, which uses "freq." It would be best to maintain consistency. Would you mind reverting to "freq"?

**Contributor (Author):**
Sure!