Commit db55adb

Author: Suzanne Scala

Merge pull request apache#93 from mesosphere/ss/typos-and-formatting

little fixes

2 parents 190d00a + ac75d1b

File tree: 2 files changed (+74 -94 lines)


docs/index.md

Lines changed: 2 additions & 2 deletions
@@ -60,5 +60,5 @@ dispatcher and the history server
 [4]: https://docs.mesosphere.com/1.8/usage/service-guides/hdfs/
 [5]: https://docs.mesosphere.com/1.8/usage/service-guides/kafka/
 [6]: https://zeppelin.incubator.apache.org/
-[17]: https://github.com/mesopshere/spark
-[18]: https://github.com/mesopshere/spark-build
+[17]: https://github.com/mesosphere/spark
+[18]: https://github.com/mesosphere/spark-build

docs/security.md

Lines changed: 72 additions & 92 deletions
@@ -23,102 +23,89 @@ enterprise: 'no'
 
 ## Authentication
 
-When running in
-[DC/OS strict security mode](https://docs.mesosphere.com/latest/administration/id-and-access-mgt/),
-Both the dispatcher and jobs must authenticate to Mesos using a [DC/OS
-Service Account](https://docs.mesosphere.com/1.8/administration/id-and-access-mgt/service-auth/).
+When running in [DC/OS strict security mode](https://docs.mesosphere.com/latest/administration/id-and-access-mgt/), both the dispatcher and jobs must authenticate to Mesos using a [DC/OS Service Account](https://docs.mesosphere.com/1.8/administration/id-and-access-mgt/service-auth/).
+
 Follow these instructions to authenticate in strict mode:
 
 1. Create a Service Account
 
-Instructions
-[here](https://docs.mesosphere.com/1.8/administration/id-and-access-mgt/service-auth/universe-service-auth/).
-
-2. Assign Permissions
-
-First, allow Spark to run tasks as root:
-
-```
-$ curl -k -L -X PUT \
--H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
-"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:root" \
--d '{"description":"Allows root to execute tasks"}' \
--H 'Content-Type: application/json'
-
-$ curl -k -L -X PUT \
--H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
-"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:root/users/${SERVICE_ACCOUNT_NAME}/create"
-```
-
-Now you must allow Spark to register under the desired role. This is
-the value used for `service.role` when installing Spark (default:
-`*`):
-
-```
-$ export ROLE=<service.role value>
-$ curl -k -L -X PUT \
--H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
-"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:framework:role:${ROLE}" \
--d '{"description":"Allows ${ROLE} to register as a framework with the Mesos master"}' \
--H 'Content-Type: application/json'
-
-$ curl -k -L -X PUT \
--H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
-"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:framework:role:${ROLE}/users/${SERVICE_ACCOUNT_NAME}/create"
-```
-
-3. Install Spark
-
-```
-$ dcos package install spark --options=config.json
-```
-
-Where `config.json` contains the following JSON. Replace
-`<principal>` with the name of your service account, and
-`<secret_name>` with the name of the DC/OS secret containing your
-service account's private key. These values were created in Step #1
-above.
-
-```
-{
-"service": {
-"principal": "<principal>",
-"user": "nobody"
-},
-"security": {
-"mesos": {
-"authentication": {
-"secret_name": "<secret_name>"
+Instructions [here](https://docs.mesosphere.com/1.8/administration/id-and-access-mgt/service-auth/universe-service-auth/).
+
+1. Assign Permissions
+
+First, allow Spark to run tasks as root:
+
+```
+$ curl -k -L -X PUT \
+-H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
+"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:root" \
+-d '{"description":"Allows root to execute tasks"}' \
+-H 'Content-Type: application/json'
+
+$ curl -k -L -X PUT \
+-H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
+"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:task:user:root/users/${SERVICE_ACCOUNT_NAME}/create"
+```
+
+Now you must allow Spark to register under the desired role. This is the value used for `service.role` when installing Spark (default: `*`):
+
+```
+$ export ROLE=<service.role value>
+$ curl -k -L -X PUT \
+-H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
+"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:framework:role:${ROLE}" \
+-d '{"description":"Allows ${ROLE} to register as a framework with the Mesos master"}' \
+-H 'Content-Type: application/json'
+
+$ curl -k -L -X PUT \
+-H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
+"$(dcos config show core.dcos_url)/acs/api/v1/acls/dcos:mesos:master:framework:role:${ROLE}/users/${SERVICE_ACCOUNT_NAME}/create"
+```
+
+1. Install Spark
+
+```
+$ dcos package install spark --options=config.json
+```
+
+Where `config.json` contains the following JSON. Replace `<principal>` with the name of your service account, and `<secret_name>` with the name of the DC/OS secret containing your service account's private key. These values were created in Step #1 above.
+
+```
+{
+"service": {
+"principal": "<principal>",
+"user": "nobody"
+},
+"security": {
+"mesos": {
+"authentication": {
+"secret_name": "<secret_name>"
+}
 }
 }
 }
-}
-```
-
-4. Submit a Job
+```
 
-We've now installed the Spark Dispatcher, which is authenticating
-itself to the Mesos master. Spark jobs are also frameworks which must
-authenticate. The dispatcher will pass the secret along to the jobs,
-so all that's left to do is configure our jobs to use DC/OS authentication:
+1. Submit a Job
 
-```
-$ PROPS="-Dspark.mesos.driverEnv.MESOS_MODULES=file:///opt/mesosphere/etc/mesos-scheduler-modules/dcos_authenticatee_module.json "
-$ PROPS+="-Dspark.mesos.driverEnv.MESOS_AUTHENTICATEE=com_mesosphere_dcos_ClassicRPCAuthenticatee "
-$ PROPS+="-Dspark.mesos.principal=<principal>"
-$ dcos spark run --submit-args="${PROPS} ..."
-```
+We've now installed the Spark Dispatcher, which is authenticating itself to the Mesos master. Spark jobs are also frameworks which must authenticate. The dispatcher will pass the secret along to the jobs, so all that's left to do is configure our jobs to use DC/OS authentication:
+
+```
+$ PROPS="-Dspark.mesos.driverEnv.MESOS_MODULES=file:///opt/mesosphere/etc/mesos-scheduler-modules/dcos_authenticatee_module.json "
+$ PROPS+="-Dspark.mesos.driverEnv.MESOS_AUTHENTICATEE=com_mesosphere_dcos_ClassicRPCAuthenticatee "
+$ PROPS+="-Dspark.mesos.principal=<principal>"
+$ dcos spark run --submit-args="${PROPS} ..."
+```
 
 # Spark SSL
 
 SSL support in DC/OS Spark encrypts the following channels:
 
-* From the [DC/OS admin router][11] to the dispatcher
-* From the dispatcher to the drivers
-* From the drivers to their executors
+* From the [DC/OS admin router][11] to the dispatcher.
+* From the dispatcher to the drivers.
+* From the drivers to their executors.
 
-There are a number of configuration variables relevant to SSL setup.
-List them with the following command:
+There are a number of configuration variables relevant to SSL setup. List them with the following command:
 
     $ dcos package describe spark --config
 
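The ACL grants in the "Assign Permissions" step of this diff can be sketched as a small POSIX-shell helper. Everything cluster-specific here is an assumption: the `spark-principal` account name, the `dcos.example.com` URL default, and the dry-run `grant` wrapper, which only prints each request instead of issuing the authenticated `curl` PUT against a live DC/OS cluster.

```shell
#!/bin/sh
# Dry-run sketch of the two permission grants from the docs: run tasks
# as root, and register as a framework under a role. Placeholder values.
SERVICE_ACCOUNT_NAME="${SERVICE_ACCOUNT_NAME:-spark-principal}"  # hypothetical account
ROLE="${ROLE:-*}"                                                # default Spark role
DCOS_URL="${DCOS_URL:-https://dcos.example.com}"                 # placeholder cluster URL

# Build the ACS URL for one ACL resource ID.
acl() {
  printf '%s/acs/api/v1/acls/%s' "$DCOS_URL" "$1"
}

# A real run would issue:
#   curl -k -L -X PUT -H "Authorization: token=$(dcos config show core.dcos_acs_token)" "$1"
grant() {
  echo "PUT $1"
}

grant "$(acl "dcos:mesos:master:task:user:root")"
grant "$(acl "dcos:mesos:master:task:user:root/users/${SERVICE_ACCOUNT_NAME}/create")"
grant "$(acl "dcos:mesos:master:framework:role:${ROLE}")"
grant "$(acl "dcos:mesos:master:framework:role:${ROLE}/users/${SERVICE_ACCOUNT_NAME}/create")"
```

Keeping the resource-ID construction in one place makes it easy to see that each permission needs two PUTs: one to create the ACL, one to grant the service account `create` on it.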
@@ -156,20 +143,15 @@ There are only two required variables:
 </tr>
 </table>
 
-The Java keystore (and, optionally, truststore) are created using the
-[Java keytool][12]. The keystore must contain one private key and its
-signed public key. The truststore is optional and might contain a
-self-signed root-ca certificate that is explicitly trusted by Java.
+The Java keystore (and, optionally, truststore) are created using the [Java keytool][12]. The keystore must contain one private key and its signed public key. The truststore is optional and might contain a self-signed root-ca certificate that is explicitly trusted by Java.
 
 Both stores must be base64 encoded, e.g. by:
 
     $ cat keystore | base64 /u3+7QAAAAIAAAACAAAAAgA...
 
-**Note:** The base64 string of the keystore will probably be much
-longer than the snippet above, spanning 50 lines or so.
+**Note:** The base64 string of the keystore will probably be much longer than the snippet above, spanning 50 lines or so.
 
-With this and the password `secret` for the keystore and the private
-key, your JSON options file will look like this:
+With this and the password `secret` for the keystore and the private key, your JSON options file will look like this:
 
     {
     "security": {
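The base64 step described in this hunk can be sanity-checked locally with a round-trip. The dummy bytes below are an assumption standing in for a real keytool-generated keystore; only the encode/decode mechanics are being demonstrated.

```shell
# Round-trip check for the base64 encoding step. "keystore" is a dummy
# file standing in for a real Java keystore produced by keytool.
printf 'dummy-keystore-bytes' > keystore
base64 < keystore > keystore.b64           # this encoded text goes into the JSON options file
base64 -d < keystore.b64 > keystore.check  # decode to confirm nothing was mangled
cmp -s keystore keystore.check && echo "round-trip ok"
```

A mangled or truncated paste of the (multi-line) base64 string is a common source of SSL setup failures, so decoding and comparing before installing is cheap insurance.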
@@ -186,9 +168,7 @@ Install Spark with your custom configuration:
 
 $ dcos package install --options=options.json spark
 
-In addition to the described configuration, make sure to connect the
-DC/OS cluster only using an SSL connection, i.e. by using an
-`https://<dcos-url>`. Use the following command to set your DC/OS URL:
+In addition to the described configuration, make sure to connect the DC/OS cluster only using an SSL connection, i.e. by using an `https://<dcos-url>`. Use the following command to set your DC/OS URL:
 
 $ dcos config set core.dcos_url https://<dcos-url>
 
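The last hunk insists on an `https://` DC/OS URL. A small guard like the following can catch a plain-http `core.dcos_url` before installing; the URL value below is a placeholder, and in practice it would come from `dcos config show core.dcos_url`.

```shell
# Placeholder URL; a real script would read it with:
#   DCOS_URL="$(dcos config show core.dcos_url)"
DCOS_URL="https://dcos.example.com"

# Refuse to proceed unless the cluster URL uses SSL.
case "$DCOS_URL" in
  https://*) echo "ok: core.dcos_url uses SSL" ;;
  *)         echo "warning: core.dcos_url is not https" >&2; exit 1 ;;
esac
```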
