23c8846
[STREAMING][MINOR] Fix typo in function name of StateImpl
jerryshao Dec 15, 2015
80d2617
Update branch-1.6 for 1.6.0 release
marmbrus Dec 15, 2015
00a39d9
Preparing Spark release v1.6.0-rc3
pwendell Dec 15, 2015
08aa3b4
Preparing development version 1.6.0-SNAPSHOT
pwendell Dec 15, 2015
9e4ac56
[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after cal…
tedyu Dec 16, 2015
2c324d3
[SPARK-12351][MESOS] Add documentation about submitting Spark with me…
tnachen Dec 16, 2015
8e9a600
[SPARK-9886][CORE] Fix to use ShutdownHookManager in
naveenminchu Dec 16, 2015
93095eb
[SPARK-12062][CORE] Change Master to asyc rebuild UI when application…
BryanCutler Dec 16, 2015
fb08f7b
[SPARK-10477][SQL] using DSL in ColumnPruningSuite to improve readabi…
cloud-fan Dec 16, 2015
135a5ee
removed some maven-jar-plugin
markhamstra Dec 16, 2015
9a6494a
Merge branch 'branch-1.6' of github.com:apache/spark into csd-1.6
markhamstra Dec 16, 2015
a2d584e
[SPARK-12324][MLLIB][DOC] Fixes the sidebar in the ML documentation
thunterdb Dec 16, 2015
ac0e2ea
[SPARK-12310][SPARKR] Add write.json and write.parquet for SparkR
yanboliang Dec 16, 2015
16edd93
[SPARK-12215][ML][DOC] User guide section for KMeans in spark.ml
yu-iskw Dec 16, 2015
f815127
[SPARK-12318][SPARKR] Save mode in SparkR should be error by default
zjffdu Dec 16, 2015
e5b8571
[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs wit…
tnachen Dec 16, 2015
e1adf6d
[SPARK-6518][MLLIB][EXAMPLE][DOC] Add example code and user guide for…
yu-iskw Dec 16, 2015
168c89e
Preparing Spark release v1.6.0-rc3
pwendell Dec 16, 2015
aee88eb
Preparing development version 1.6.0-SNAPSHOT
pwendell Dec 16, 2015
dffa610
[SPARK-11608][MLLIB][DOC] Added migration guide for MLlib 1.6
jkbradley Dec 16, 2015
04e868b
[SPARK-12364][ML][SPARKR] Add ML example for SparkR
yanboliang Dec 16, 2015
d020431
Merge branch 'branch-1.6' of github.com:apache/spark into csd-1.6
markhamstra Dec 16, 2015
[SPARK-6518][MLLIB][EXAMPLE][DOC] Add example code and user guide for…
… bisecting k-means

This PR includes only the example code in order to finish it quickly.
I'll send another PR for the docs soon.

Author: Yu ISHIKAWA <[email protected]>

Closes apache#9952 from yu-iskw/SPARK-6518.

(cherry picked from commit 7b6dc29)
Signed-off-by: Joseph K. Bradley <[email protected]>
yu-iskw authored and jkbradley committed Dec 16, 2015
commit e1adf6d7d1c755fb16a0030e66ce9cff348c3de8
35 changes: 35 additions & 0 deletions docs/mllib-clustering.md
@@ -718,6 +718,41 @@ sameModel = LDAModel.load(sc, "myModelPath")

</div>

## Bisecting k-means

Bisecting k-means can often be much faster than regular k-means, but it will generally produce a different clustering.

Bisecting k-means is a kind of [hierarchical clustering](https://en.wikipedia.org/wiki/Hierarchical_clustering).
Hierarchical clustering is one of the most commonly used methods of cluster analysis; it seeks to build a hierarchy of clusters.
Strategies for hierarchical clustering generally fall into two types:

- Agglomerative: This is a "bottom up" approach: each observation starts in its own cluster, and pairs of clusters are merged as one moves up the hierarchy.
- Divisive: This is a "top down" approach: all observations start in one cluster, and splits are performed recursively as one moves down the hierarchy.
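
The divisive strategy can be sketched in a few dozen lines of plain Java: repeatedly pick the largest leaf cluster and split it with a few rounds of ordinary 2-means. This is a toy, in-memory illustration only — the class and method names are hypothetical and this is not MLlib's parallel implementation:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy sketch of divisive ("bisecting") clustering on 2-D points.
// Assumes reasonably separated, distinct points; MLlib's implementation
// is distributed and handles the general case.
public class BisectingSketch {

  // Split one cluster in two with a few rounds of plain 2-means.
  static List<List<double[]>> bisect(List<double[]> ps) {
    double[] c1 = ps.get(0);
    double[] c2 = ps.get(ps.size() - 1);
    List<double[]> a = new ArrayList<>();
    List<double[]> b = new ArrayList<>();
    for (int round = 0; round < 10; round++) {
      a.clear();
      b.clear();
      for (double[] p : ps) {
        if (dist2(p, c1) <= dist2(p, c2)) a.add(p); else b.add(p);
      }
      if (!a.isEmpty() && !b.isEmpty()) {  // avoid degenerate empty side
        c1 = mean(a);
        c2 = mean(b);
      }
    }
    List<List<double[]>> out = new ArrayList<>();
    out.add(a);
    out.add(b);
    return out;
  }

  // Divisive clustering: keep bisecting the largest leaf until k leaves exist.
  static List<List<double[]>> cluster(List<double[]> ps, int k) {
    List<List<double[]>> leaves = new ArrayList<>();
    leaves.add(ps);
    while (leaves.size() < k) {
      List<double[]> largest =
          leaves.stream().max(Comparator.comparingInt(List::size)).get();
      if (largest.size() < 2) break;  // nothing left to divide
      leaves.remove(largest);
      leaves.addAll(bisect(largest));
    }
    return leaves;
  }

  static double dist2(double[] p, double[] q) {
    double dx = p[0] - q[0], dy = p[1] - q[1];
    return dx * dx + dy * dy;
  }

  static double[] mean(List<double[]> ps) {
    double sx = 0, sy = 0;
    for (double[] p : ps) { sx += p[0]; sy += p[1]; }
    return new double[] { sx / ps.size(), sy / ps.size() };
  }
}
```

Because each split only revisits the points of the cluster being divided, the divisive approach can be much cheaper than rerunning full k-means, which is where bisecting k-means gets its speed advantage.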

The bisecting k-means algorithm is a kind of divisive algorithm.
The implementation in MLlib has the following parameters:

* *k*: the desired number of leaf clusters (default: 4). The actual number could be smaller if there are no divisible leaf clusters.
* *maxIterations*: the max number of k-means iterations to split clusters (default: 20)
* *minDivisibleClusterSize*: the minimum number of points (if >= 1.0) or the minimum proportion of points (if < 1.0) of a divisible cluster (default: 1)
* *seed*: a random seed (default: hash value of the class name)
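
In code, these parameters map onto the `BisectingKMeans` setters. The fragment below is a configuration sketch only — it assumes the spark.mllib API of this release and needs a live `JavaRDD` of vectors before `run` can actually be called:

```java
BisectingKMeans bkm = new BisectingKMeans()
  .setK(4)                          // desired number of leaf clusters
  .setMaxIterations(20)             // k-means iterations per split
  .setMinDivisibleClusterSize(1.0)  // count if >= 1.0, proportion if < 1.0
  .setSeed(42L);                    // fixed seed for reproducible splits
```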

**Examples**

<div class="codetabs">
<div data-lang="scala" markdown="1">
Refer to the [`BisectingKMeans` Scala docs](api/scala/index.html#org.apache.spark.mllib.clustering.BisectingKMeans) and [`BisectingKMeansModel` Scala docs](api/scala/index.html#org.apache.spark.mllib.clustering.BisectingKMeansModel) for details on the API.

{% include_example scala/org/apache/spark/examples/mllib/BisectingKMeansExample.scala %}
</div>

<div data-lang="java" markdown="1">
Refer to the [`BisectingKMeans` Java docs](api/java/org/apache/spark/mllib/clustering/BisectingKMeans.html) and [`BisectingKMeansModel` Java docs](api/java/org/apache/spark/mllib/clustering/BisectingKMeansModel.html) for details on the API.

{% include_example java/org/apache/spark/examples/mllib/JavaBisectingKMeansExample.java %}
</div>
</div>

## Streaming k-means

When data arrive in a stream, we may want to estimate clusters dynamically,
1 change: 1 addition & 0 deletions docs/mllib-guide.md
@@ -49,6 +49,7 @@ We list major functionality from both below, with links to detailed guides.
* [Gaussian mixture](mllib-clustering.html#gaussian-mixture)
* [power iteration clustering (PIC)](mllib-clustering.html#power-iteration-clustering-pic)
* [latent Dirichlet allocation (LDA)](mllib-clustering.html#latent-dirichlet-allocation-lda)
* [bisecting k-means](mllib-clustering.html#bisecting-k-means)
* [streaming k-means](mllib-clustering.html#streaming-k-means)
* [Dimensionality reduction](mllib-dimensionality-reduction.html)
* [singular value decomposition (SVD)](mllib-dimensionality-reduction.html#singular-value-decomposition-svd)
@@ -0,0 +1,69 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.examples.mllib;

import java.util.ArrayList;

// $example on$
import com.google.common.collect.Lists;
// $example off$
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
// $example on$
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.mllib.clustering.BisectingKMeans;
import org.apache.spark.mllib.clustering.BisectingKMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;
// $example off$

/**
 * Java example for bisecting k-means clustering.
 */
public class JavaBisectingKMeansExample {
  public static void main(String[] args) {
    SparkConf sparkConf = new SparkConf().setAppName("JavaBisectingKMeansExample");
    JavaSparkContext sc = new JavaSparkContext(sparkConf);

    // $example on$
    ArrayList<Vector> localData = Lists.newArrayList(
      Vectors.dense(0.1, 0.1), Vectors.dense(0.3, 0.3),
      Vectors.dense(10.1, 10.1), Vectors.dense(10.3, 10.3),
      Vectors.dense(20.1, 20.1), Vectors.dense(20.3, 20.3),
      Vectors.dense(30.1, 30.1), Vectors.dense(30.3, 30.3)
    );
    JavaRDD<Vector> data = sc.parallelize(localData, 2);

    BisectingKMeans bkm = new BisectingKMeans()
      .setK(4);
    BisectingKMeansModel model = bkm.run(data);

    System.out.println("Compute Cost: " + model.computeCost(data));

    Vector[] clusterCenters = model.clusterCenters();
    for (int i = 0; i < clusterCenters.length; i++) {
      System.out.println("Cluster Center " + i + ": " + clusterCenters[i]);
    }
    // $example off$

    sc.stop();
  }
}
@@ -0,0 +1,60 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.examples.mllib

// scalastyle:off println
// $example on$
import org.apache.spark.mllib.clustering.BisectingKMeans
import org.apache.spark.mllib.linalg.{Vector, Vectors}
// $example off$
import org.apache.spark.{SparkConf, SparkContext}

/**
 * An example demonstrating bisecting k-means clustering in spark.mllib.
 *
 * Run with
 * {{{
 * bin/run-example mllib.BisectingKMeansExample
 * }}}
 */
object BisectingKMeansExample {

  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("mllib.BisectingKMeansExample")
    val sc = new SparkContext(sparkConf)

    // $example on$
    // Load and parse the data
    def parse(line: String): Vector = Vectors.dense(line.split(" ").map(_.toDouble))
    val data = sc.textFile("data/mllib/kmeans_data.txt").map(parse).cache()

    // Cluster the data into 6 clusters using BisectingKMeans
    val bkm = new BisectingKMeans().setK(6)
    val model = bkm.run(data)

    // Show the compute cost and the cluster centers
    println(s"Compute Cost: ${model.computeCost(data)}")
    model.clusterCenters.zipWithIndex.foreach { case (center, idx) =>
      println(s"Cluster Center $idx: $center")
    }
    // $example off$

    sc.stop()
  }
}
// scalastyle:on println