review comments
foxish committed Dec 20, 2017
commit 9594462e20123f061cb88e7cf813ade5e77d534e
7 changes: 3 additions & 4 deletions docs/running-on-kubernetes.md
@@ -5,7 +5,7 @@ title: Running Spark on Kubernetes
* This will become a table of contents (this text will be scraped).
{:toc}

-Spark can run on clusters managed by [Kubernetes](https://kubernetes.io). This feature makes use of the new experimental native
+Spark can run on clusters managed by [Kubernetes](https://kubernetes.io). This feature makes use of native
 Kubernetes scheduler that has been added to Spark.
Contributor:
Remove "experimental"? I think there are other references (a grep should catch them all) and we can remove them all now.

Contributor Author:

Done. The other references have been removed. Couldn't find any others.


# Prerequisites
@@ -71,15 +71,14 @@ To launch Spark Pi in cluster mode,

{% highlight bash %}
 $ bin/spark-submit \
+  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
   --deploy-mode cluster \
   --class org.apache.spark.examples.SparkPi \
-  --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
-  --conf spark.kubernetes.namespace=default \
   --conf spark.executor.instances=5 \
Contributor:

`--num-executors` instead?

Contributor Author:

That is currently specified as a YARN-only option in spark-submit. Does it make sense to move it out so that it suits both K8s and YARN?

Contributor:

You are right, I did not realize Mesos did not honour it and it was a YARN-only option!
It might make sense for K8s to support it too (since it does not right now, my proposed doc change would be incorrect). What do you think @foxish?
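
For context, a sketch of the two forms under discussion. The equivalence shown is an assumption drawn from this thread: `--num-executors` was YARN-only at the time, while `spark.executor.instances` is the cluster-manager-agnostic configuration property the K8s example uses.

```shell
# YARN accepts the dedicated flag (YARN-only at the time of this thread):
bin/spark-submit --master yarn --num-executors 5 ...

# Other cluster managers, including K8s in this example, take the
# equivalent configuration property instead:
bin/spark-submit --master k8s://https://<k8s-apiserver-host>:<k8s-apiserver-port> \
  --conf spark.executor.instances=5 ...
```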

   --conf spark.app.name=spark-pi \
   --conf spark.kubernetes.driver.docker.image=<driver-image> \
   --conf spark.kubernetes.executor.docker.image=<executor-image> \
-  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar
+  local:///path/to/examples.jar
{% endhighlight %}
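
An aside on the placeholders above: the apiserver host and port can usually be read off the running cluster. A minimal sketch, assuming `kubectl` is installed and configured for the target cluster:

```shell
# Prints the address of the Kubernetes control plane, e.g.
# "Kubernetes master is running at https://192.168.99.100:8443";
# that URL, prefixed with k8s://, is what --master expects.
kubectl cluster-info
```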

The Spark master, specified either via passing the `--master` command line argument to `spark-submit` or by setting