Commit 3749047 (parent 173cb46): format issues

admin-training/containers/01-docker-swarm-introduction.md

Lines changed: 3 additions & 3 deletions
@@ -24,15 +24,15 @@ This guide assumes that:
2. The sNow! server will also provide access to shared file systems via NFS (/home and /sNow). Check the [sNow! documentation](https://hpcnow.github.io/snow-documentation) in order to integrate other cluster file systems such as BeeGFS, Lustre or IBM Spectrum Scale.

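As an illustration, a client node would mount those two exports roughly as follows; this is a sketch, and the server hostname `snow01` is an assumption, not something fixed by this guide:

```
# Mount the shared file systems exported by the sNow! server
# (hostname snow01 is illustrative; adjust to your deployment)
mount -t nfs snow01:/home /home
mount -t nfs snow01:/sNow /sNow
```
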
## Installation

Docker Swarm manager nodes implement the **Raft Consensus Algorithm** to manage the global cluster state.
This is key for managing and scheduling tasks in the cluster, and also for storing the same consistent state across all the manager nodes.

Raft **tolerates** up to **(N-1)/2 failures** and requires a majority, or quorum, of (N/2)+1 members to agree on values proposed to the cluster. This means that the cluster should have at least 3 managers to tolerate the failure of one node, or 5 managers to tolerate the failure of two nodes.

This hands-on assumes that you have already deployed three VMs (domains) dedicated to the Docker Swarm cluster, or three compute nodes (production solution).
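
A minimal sketch of forming the cluster on those three nodes might look like this; the IP address 192.168.7.11 and the placeholder <MANAGER-TOKEN> are illustrative, and `docker swarm join-token` prints the exact command to run:

```
# On the first node: initialise the swarm; this node becomes the first manager
docker swarm init --advertise-addr 192.168.7.11

# Print the join command (with token) for adding more managers
docker swarm join-token manager

# On each of the other two nodes: join as a manager using the printed token
docker swarm join --token <MANAGER-TOKEN> 192.168.7.11:2377

# From any manager: verify that all three nodes are listed as managers
docker node ls
```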

By default, manager nodes also act as worker nodes. For small systems or non-critical services, this is relatively low-risk.
However, because manager nodes use the Raft consensus algorithm to replicate data in a consistent way, they are sensitive to resource starvation. In a sNow! environment you can **isolate the managers in VMs without running other services** and deploy a few bare-metal nodes as Docker Swarm workers. In order to do so, you can drain the manager nodes to make them unavailable as worker nodes:
```
docker node update --availability drain <NODEID>
```
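
You can confirm the change, and revert it later if needed, with:

```
# The AVAILABILITY column should now show "Drain" for the manager
docker node ls

# Revert: make the manager schedulable again
docker node update --availability active <NODEID>
```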
