This repository was archived by the owner on Jan 9, 2020. It is now read-only.
Merged
Update versioning
foxish committed Mar 8, 2017
commit 37239f36cc3e6e4f39ee86f796375661b81fb3a0
3 changes: 0 additions & 3 deletions docs/running-on-kubernetes-cloud.md
@@ -22,6 +22,3 @@ A Kubernetes cluster may be brought up on different cloud providers or on premise

Known issues:
* If you face OAuth token expiry errors when you run spark-submit, it is likely because the token needs to be refreshed. The easiest way to fix this is to run any `kubectl` command, say, `kubectl version` and then retry your submission.
-
-
-
12 changes: 6 additions & 6 deletions docs/running-on-kubernetes.md
@@ -24,11 +24,11 @@ If you wish to use pre-built docker images, you may use the images published in
<tr><th>Component</th><th>Image</th></tr>
<tr>
<td>Spark Driver Image</td>
-    <td><code>kubespark/spark-driver:0.1.0-alpha.1</code></td>
+    <td><code>kubespark/spark-driver:v2.1.0-k8s-support-0.1.0-alpha.1</code></td>
</tr>
<tr>
<td>Spark Executor Image</td>
-    <td><code>kubespark/spark-executor:0.1.0-alpha.1</code></td>
+    <td><code>kubespark/spark-executor:v2.1.0-k8s-support-0.1.0-alpha.1</code></td>
Review comment: these don't match the tags I see at https://hub.docker.com/r/kubespark/spark-executor/tags/

Reply (foxish, Member Author): I pushed the tag an hour ago and it seems to be in there. Will rebuild and update after the rebase also.

</tr>
</table>
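The new tags appear to combine the upstream Spark version with the fork's own release label. A small sketch of that pattern (the composition rule is an inference from the tags in this change, not something the docs state):

```python
# Assumed tag pattern, inferred from this change:
#   v<spark-version>-k8s-support-<fork-release>
spark_version = "2.1.0"
fork_release = "0.1.0-alpha.1"
tag = f"v{spark_version}-k8s-support-{fork_release}"

# Full image coordinates for both components
driver_image = f"kubespark/spark-driver:{tag}"
executor_image = f"kubespark/spark-executor:{tag}"
print(driver_image)
print(executor_image)
```

Encoding the Spark version in the tag makes it unambiguous which upstream release a given image was built against, at the cost of longer image names.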

@@ -57,8 +57,8 @@ are set up as described above:
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:0.1.0-alpha.1 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:0.1.0-alpha.1 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-k8s-support-0.1.0-alpha.1 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-k8s-support-0.1.0-alpha.1 \
examples/jars/spark_examples_2.11-2.2.0.jar
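Given the review question above about whether these tags actually exist on Docker Hub, a quick pre-flight check before submitting can save a failed pod launch. This loop (illustrative only, not part of the docs being changed) prints the image coordinates the submission would use, so each can be verified with `docker pull` first:

```shell
#!/bin/sh
# Print the driver and executor image coordinates used in the example above,
# so they can be checked with `docker pull` before running spark-submit.
TAG="v2.1.0-k8s-support-0.1.0-alpha.1"
for component in spark-driver spark-executor; do
  echo "kubespark/${component}:${TAG}"
done
```

A missing or mistyped tag would otherwise only surface as an `ErrImagePull` on the driver pod after submission.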

The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting
@@ -108,8 +108,8 @@ If our local proxy were listening on port 8001, we would have our submission look
--kubernetes-namespace default \
--conf spark.executor.instances=5 \
--conf spark.app.name=spark-pi \
-      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:0.1.0-alpha.1 \
-      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:0.1.0-alpha.1 \
+      --conf spark.kubernetes.driver.docker.image=kubespark/spark-driver:v2.1.0-k8s-support-0.1.0-alpha.1 \
+      --conf spark.kubernetes.executor.docker.image=kubespark/spark-executor:v2.1.0-k8s-support-0.1.0-alpha.1 \
examples/jars/spark_examples_2.11-2.2.0.jar

Communication between Spark and Kubernetes clusters is performed using the fabric8 kubernetes-client library.