From 6df0dcb461bab81e52313de25b3652cb8bfed79b Mon Sep 17 00:00:00 2001
From: Liquan Pei
Date: Sun, 17 Aug 2014 15:01:37 -0700
Subject: [PATCH 1/3] add Word2Vec documentation

---
 docs/mllib-feature-extraction.md | 54 ++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/docs/mllib-feature-extraction.md b/docs/mllib-feature-extraction.md
index 21453cb9cd8c..8847a4592865 100644
--- a/docs/mllib-feature-extraction.md
+++ b/docs/mllib-feature-extraction.md
@@ -9,4 +9,58 @@ displayTitle: MLlib - Feature Extraction
 
 ## Word2Vec
 
+Wor2Vec computes distributed vector representations of words. The main advantage of the distributed representations is that similar words are close in the vector space, which makes generalization to novel patterns easier and model estimation more robust. Distributed vector representations have been shown to be useful in many natural language processing applications such as named entity recognition, disambiguation, parsing, tagging and machine translation.
+
+### Model
+In our implementation of Word2Vec, we use the skip-gram model. The training objective of skip-gram is to learn word vector representations that are good at predicting a word's context in the same sentence. Mathematically, given a sequence of training words `$w_1, w_2, \dots, w_T$`, the objective of the skip-gram model is to maximize the average log-likelihood
+`\[
+\frac{1}{T} \sum_{t = 1}^{T}\sum_{j=-k}^{j=k} \log p(w_{t+j} | w_t)
+\]`
+where $k$ is the size of the training window.
+
+In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$ which are vector representations of $w$ as word and context, respectively. The probability of correctly predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
+`\[
+p(w_i | w_j ) = \frac{\exp(u_{w_i}^{\top}v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top}v_{w_j})}
+\]`
+where $V$ is the vocabulary size.
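+
+For intuition, the sketch below (this is not MLlib code, just an illustration) computes this
+probability naively; `u` is assumed to hold one word vector per vocabulary entry and `v` stands
+for the context vector $v_{w_j}$:
+
+{% highlight scala %}
+// Naive softmax: exp(u_{w_i}^T v_{w_j}) normalized over every word in the vocabulary.
+def dot(a: Array[Double], b: Array[Double]): Double =
+  a.zip(b).map { case (x, y) => x * y }.sum
+
+def softmaxProb(u: IndexedSeq[Array[Double]], v: Array[Double], i: Int): Double = {
+  // One dot product and one exp per vocabulary word, so the cost grows linearly with V.
+  val scores = u.map(uw => math.exp(dot(uw, v)))
+  scores(i) / scores.sum
+}
+{% endhighlight %}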
+
+The skip-gram model with softmax is expensive because the cost of computing $\log p(w_i | w_j)$
+is proportional to $V$, which can easily be in the order of millions. To speed up Word2Vec training, we use hierarchical softmax, which reduces the complexity of computing $\log p(w_i | w_j)$ to
+$O(\log(V))$.
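+
+As a rough sketch of why the cost drops (again an illustration, not the actual implementation),
+hierarchical softmax arranges the vocabulary in a binary tree and expresses the probability of a
+word as a product of sigmoids along the path from the root to that word, so only about
+$\log_2(V)$ inner products are needed per prediction (reusing `dot` from the sketch above):
+
+{% highlight scala %}
+def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))
+
+// Each step of the path carries an internal-node vector and a branch direction (+1 or -1).
+def pathProbability(context: Array[Double], path: Seq[(Array[Double], Int)]): Double =
+  path.map { case (node, dir) => sigmoid(dir * dot(node, context)) }.product
+
+// For V around one million the path has roughly log2(V) ~ 20 nodes,
+// versus a sum over all V words in the exact softmax.
+{% endhighlight %}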
+
+### Example
+
+The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]` and then construct a `Word2Vec` instance with the specified parameters. Then we fit a Word2Vec model with the input data. Finally, we display the 40 words most similar to the specified word.
+
+<div class="codetabs">
+<div data-lang="scala">
+{% highlight scala %}
+import org.apache.spark._
+import org.apache.spark.rdd._
+import org.apache.spark.SparkContext._
+import org.apache.spark.mllib.feature.Word2Vec
+
+val input = sc.textFile().map(line => line.split(" ").toSeq)
+val size = 100
+val startingAlpha = 0.025
+val numPartitions = 1
+val numIterations = 1
+
+val word2vec = new Word2Vec()
+  .setVectorSize(size)
+  .setSeed(42L)
+  .setNumPartitions(numPartitions)
+  .setNumIterations(numIterations)
+
+val model = word2vec.fit(input)
+
+val vec = model.findSynonyms("china", 40)
+
+for((word, cosineSimilarity) <- vec) {
+  println(word + " " + cosineSimilarity.toString)
+}
+{% endhighlight %}
+</div>
+</div>
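+
+As a small sketch of further usage, the fitted model can also map an individual word to its
+learned vector via `transform` (the call below assumes the queried word appears in the training
+vocabulary; otherwise an exception is thrown):
+
+{% highlight scala %}
+// Look up the learned representation of a single word.
+val vector = model.transform("china")
+// Its length equals the configured vector size.
+println(vector.size)
+{% endhighlight %}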
+
 ## TFIDF

From 8d7458fabd5794d7e6fe8c59ea0ed8ee1ce81f2d Mon Sep 17 00:00:00 2001
From: Liquan Pei
Date: Sun, 17 Aug 2014 22:13:11 -0700
Subject: [PATCH 2/3] code reformat

---
 docs/mllib-feature-extraction.md | 43 +++++++++++++++++++------------
 1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/docs/mllib-feature-extraction.md b/docs/mllib-feature-extraction.md
index 8847a4592865..a9688432f2a2 100644
--- a/docs/mllib-feature-extraction.md
+++ b/docs/mllib-feature-extraction.md
@@ -9,28 +9,43 @@ displayTitle: MLlib - Feature Extraction
 
 ## Word2Vec
 
-Wor2Vec computes distributed vector representations of words. The main advantage of the distributed representations is that similar words are close in the vector space, which makes generalization to novel patterns easier and model estimation more robust. Distributed vector representations have been shown to be useful in many natural language processing applications such as named entity recognition, disambiguation, parsing, tagging and machine translation.
+Wor2Vec computes distributed vector representations of words. The main advantage of the distributed
+representations is that similar words are close in the vector space, which makes generalization to
+novel patterns easier and model estimation more robust. Distributed vector representations have
+been shown to be useful in many natural language processing applications such as named entity
+recognition, disambiguation, parsing, tagging and machine translation.
 
 ### Model
+
-In our implementation of Word2Vec, we use the skip-gram model. The training objective of skip-gram is to learn word vector representations that are good at predicting a word's context in the same sentence. Mathematically, given a sequence of training words `$w_1, w_2, \dots, w_T$`, the objective of the skip-gram model is to maximize the average log-likelihood
+In our implementation of Word2Vec, we use the skip-gram model. The training objective of skip-gram
+is to learn word vector representations that are good at predicting a word's context in the same
+sentence. Mathematically, given a sequence of training words `$w_1, w_2, \dots, w_T$`, the
+objective of the skip-gram model is to maximize the average log-likelihood
 `\[
 \frac{1}{T} \sum_{t = 1}^{T}\sum_{j=-k}^{j=k} \log p(w_{t+j} | w_t)
 \]`
 where $k$ is the size of the training window.
 
-In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$ which are vector representations of $w$ as word and context, respectively. The probability of correctly predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
+In the skip-gram model, every word $w$ is associated with two vectors $u_w$ and $v_w$ which are
+vector representations of $w$ as word and context, respectively. The probability of correctly
+predicting word $w_i$ given word $w_j$ is determined by the softmax model, which is
 `\[
 p(w_i | w_j ) = \frac{\exp(u_{w_i}^{\top}v_{w_j})}{\sum_{l=1}^{V} \exp(u_l^{\top}v_{w_j})}
 \]`
 where $V$ is the vocabulary size.
 
 The skip-gram model with softmax is expensive because the cost of computing $\log p(w_i | w_j)$
-is proportional to $V$, which can easily be in the order of millions. To speed up Word2Vec training, we use hierarchical softmax, which reduces the complexity of computing $\log p(w_i | w_j)$ to
+is proportional to $V$, which can easily be in the order of millions. To speed up training of Word2Vec,
+we use hierarchical softmax, which reduces the complexity of computing $\log p(w_i | w_j)$ to
 $O(\log(V))$.
 
 ### Example
 
-The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]` and then construct a `Word2Vec` instance with the specified parameters. Then we fit a Word2Vec model with the input data. Finally, we display the 40 words most similar to the specified word.
+The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]`,
+construct a `Word2Vec` instance, and then fit a `Word2VecModel` with the input data. Finally,
+we display the top 40 synonyms of the specified word. To run the example, first download
+the [text8](http://mattmahoney.net/dc/text8.zip) data and extract it to your preferred directory.
+Here we assume the extracted file is `text8` and is in the same directory from which you run
+the Spark shell.
 
 <div class="codetabs">
 <div data-lang="scala">
@@ -40,27 +55,19 @@ import org.apache.spark.rdd._
 import org.apache.spark.SparkContext._
 import org.apache.spark.mllib.feature.Word2Vec
 
-val input = sc.textFile().map(line => line.split(" ").toSeq)
-val size = 100
-val startingAlpha = 0.025
-val numPartitions = 1
-val numIterations = 1
+val input = sc.textFile("text8").map(line => line.split(" ").toSeq)
 
 val word2vec = new Word2Vec()
-  .setVectorSize(size)
-  .setSeed(42L)
-  .setNumPartitions(numPartitions)
-  .setNumIterations(numIterations)
 
 val model = word2vec.fit(input)
 
-val vec = model.findSynonyms("china", 40)
+val synonyms = model.findSynonyms("china", 40)
 
-for((word, cosineSimilarity) <- vec) {
-  println(word + " " + cosineSimilarity.toString)
+for((synonym, cosineSimilarity) <- synonyms) {
+  println(synonym + " " + cosineSimilarity.toString)
 }
 {% endhighlight %}
 </div>
 </div>
 
-## TFIDF
+## TFIDF
\ No newline at end of file

From 4ff11d48ba8872cec14dd1fbc79b090e85f289b1 Mon Sep 17 00:00:00 2001
From: Liquan Pei
Date: Sun, 17 Aug 2014 22:57:09 -0700
Subject: [PATCH 3/3] minor fix

---
 docs/mllib-feature-extraction.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/mllib-feature-extraction.md b/docs/mllib-feature-extraction.md
index a9688432f2a2..4b3cb715c58c 100644
--- a/docs/mllib-feature-extraction.md
+++ b/docs/mllib-feature-extraction.md
@@ -9,7 +9,7 @@ displayTitle: MLlib - Feature Extraction
 
 ## Word2Vec
 
-Wor2Vec computes distributed vector representations of words. The main advantage of the distributed
+Word2Vec computes distributed vector representations of words. The main advantage of the distributed
 representations is that similar words are close in the vector space, which makes generalization to
 novel patterns easier and model estimation more robust. Distributed vector representations have
 been shown to be useful in many natural language processing applications such as named entity
 recognition, disambiguation, parsing, tagging and machine translation.
@@ -64,7 +64,7 @@ val model = word2vec.fit(input)
 
 val synonyms = model.findSynonyms("china", 40)
 
 for((synonym, cosineSimilarity) <- synonyms) {
-  println(synonym + " " + cosineSimilarity.toString)
+  println(s"$synonym $cosineSimilarity")
 }
 {% endhighlight %}
 </div>