2 changes: 1 addition & 1 deletion content/en/_index.md
@@ -44,7 +44,7 @@ Use application lifecycle to create your application and deliver hybrid apps acr
{{% /blocks/feature %}}

{{% blocks/feature icon="fa-cog" title="Configure, Secure, and Manage Your Resources" url="docs/getting-started/integration/policy-controllers" %}}
Policy and configuration management uses labels to help you deploy policies and control consistently across your resources. Keep your resources secure by using access control and manage for your quota and cost.
Policy and configuration management uses labels to help you deploy policies and control consistently across your resources. Keep your resources secure by using access control and manage your quota and cost.
{{% /blocks/feature %}}
{{% /blocks/section %}}

4 changes: 2 additions & 2 deletions content/en/blog/multiplehubs/index.md
@@ -6,7 +6,7 @@ toc_hide: true
---


The `MultipleHubs` is a new feature in Open Cluster Management (OCM) that allows you to configure a list of bootstrapkubeconfigs of multiple hubs. This feature is designed to provide a high availability (HA) solution of hub clusters. In this blog, we will introduce the MultipleHubs feature and how to use it.
The `MultipleHubs` is a new feature in Open Cluster Management (OCM) that allows you to configure a list of bootstrap kubeconfigs of multiple hubs. This feature is designed to provide a high availability (HA) solution of hub clusters. In this blog, we will introduce the MultipleHubs feature and how to use it.

The high availability of hub clusters means that if one hub cluster is down, the managed clusters can still communicate with other hub clusters. Users can also specify the hub cluster that the managed cluster should connect to by configuring the `ManagedCluster` resource.

@@ -57,7 +57,7 @@ The `hubConnectionTimeoutSeconds` is the timeout for the managed cluster to conn

Currently, the `MultipleHubs` feature only supports the `LocalSecrets` type of `bootstrapKubeConfigs`.
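
For context, a minimal sketch of what such a configuration might look like on the `Klusterlet` resource. The field layout under `bootstrapKubeConfigs` follows the MultipleHubs design and is an assumption here; verify it against the Klusterlet API for your release:

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: Klusterlet
metadata:
  name: klusterlet
spec:
  registrationConfiguration:
    bootstrapKubeConfigs:
      type: "LocalSecrets"
      # Assumed field layout per the MultipleHubs design.
      localSecretsConfig:
        # Secrets in the agent namespace, each holding a bootstrap
        # kubeconfig for one candidate hub (names are illustrative).
        kubeConfigSecrets:
          - name: "hub1-bootstrap"
          - name: "hub2-bootstrap"
        # Give up on the current hub and try the next one after this timeout.
        hubConnectionTimeoutSeconds: 600
```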

As we mentioned before, you can also specify the hub's connectivities in the `ManagedCluster` resource from the hub side. We using the `hubAcceptsClient` field in the `ManagedCluster` resource to specify whether the hub cluster accepts the managed cluster. The following is an example of the `ManagedCluster` resource:
As we mentioned before, you can also specify the hub's connectivity in the `ManagedCluster` resource from the hub side. We use the `hubAcceptsClient` field in the `ManagedCluster` resource to specify whether the hub cluster accepts the managed cluster. The following is an example of the `ManagedCluster` resource:

```yaml
apiVersion: cluster.open-cluster-management.io/v1
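# Reconstructed continuation of the clipped example; values are illustrative.
kind: ManagedCluster
metadata:
  name: cluster1
spec:
  # Set to false to tell the hub to stop accepting this managed cluster.
  hubAcceptsClient: false
```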
8 changes: 4 additions & 4 deletions content/en/docs/concepts/add-on-extensibility/addon.md
@@ -15,15 +15,15 @@ to help developers to develop an extension based on the foundation components
for the purpose of working with multiple clusters in custom cases. A typical
addon should consist of two kinds of components:

- __Addon Agent__: A kubernetes controller *in the managed cluster* that manages
- __Addon Agent__: A Kubernetes controller *in the managed cluster* that manages
the managed cluster for the hub admins. A typical addon agent is expected to
be working by subscribing the prescriptions (e.g. in forms of CustomResources)
from the hub cluster and then consistently reconcile the state of the managed
cluster like an ordinary kubernetes operator does.
cluster like an ordinary Kubernetes operator does.

- __Addon Manager__: A kubernetes controller *in the hub cluster* that applies
- __Addon Manager__: A Kubernetes controller *in the hub cluster* that applies
manifests to the managed clusters via the [ManifestWork]({{< ref "docs/concepts/work-distribution/manifestwork" >}})
api. In addition to resource dispatching, the manager can optionally manage
API. In addition to resource dispatching, the manager can optionally manage
the lifecycle of CSRs for the addon agents or even the RBAC permission bond
to the CSRs' requesting identity.

20 changes: 10 additions & 10 deletions content/en/docs/concepts/architecture.md
@@ -10,8 +10,8 @@ This page is an overview of open cluster management.
## Overview

__Open Cluster Management__ (OCM) is a powerful, modular, extensible platform
for Kubernetes multi-cluster orchestration. Learning from the past failing
lesson of building Kubernetes federation systems in the Kubernetes community,
for Kubernetes multi-cluster orchestration. Learning from the lessons of past
failed attempts at building Kubernetes federation systems in the Kubernetes community,
in OCM we will be jumping out of the legacy centric, imperative architecture of
[Kubefed v2](https://github.com/kubernetes-sigs/kubefed) and embracing the
"hub-agent" architecture which is identical to the original pattern of
@@ -25,10 +25,10 @@ the two models we will be frequently using throughout the world of OCM:
plane of OCM. Generally the hub cluster is supposed to be a light-weight
Kubernetes cluster hosting merely a few fundamental controllers and services.

* __Klusterlet__: Indicating the clusters that being managed by the hub
* __Klusterlet__: Indicating the clusters that are being managed by the hub
cluster. Klusterlet might also be called "managed cluster" or "spoke cluster". The
klusterlet is supposed to actively __pulling__ the latest prescriptions from
the hub cluster and consistently reconciles the physical Kubernetes cluster
klusterlet is supposed to actively __pull__ the latest prescriptions from
the hub cluster and consistently reconcile the physical Kubernetes cluster
to the expected state.

### "Hub-spoke" architecture
@@ -46,12 +46,12 @@ the managed clusters or be buried in sending requests against the clusters.
Imagine in a world where there's no kubelet in Kubernetes and its control plane
is directly operating the container daemons, it will be extremely hard for a
centric controller to manage a cluster of 5k+ nodes. Likewise, that's how OCM
trying to breach the bottleneck of scalability, by dividing and offloading the
tries to breach the bottleneck of scalability, by dividing and offloading the
execution into separated agents. So it's always feasible for a hub cluster to
accept and manage thousand-ish clusters.
accept and manage thousands of clusters.

Each klusterlet will be working independently and autonomously, so they have
a weak dependency to the availability of the hub cluster. If the hub goes down
a weak dependency on the availability of the hub cluster. If the hub goes down
(e.g. during maintenance or network partition) the klusterlet or other OCM
agents working in the managed cluster are supposed to keep actively managing
the hosting cluster until it re-connects. Additionally if the hub cluster and
@@ -112,7 +112,7 @@ out a registered cluster by denying the rotation of hub cluster's certificate,
on the other hand from the perspective of a managed cluster's admin, he can
either brutally deleting the agent instances or revoking the granted RBAC
permissions for the agents. Note that the hub controller will be automatically
preparing environment for the newly registered cluster and cleaning up neatly
preparing the environment for the newly registered cluster and cleaning up neatly
upon kicking a managed cluster.

<div style="text-align: center; padding: 20px;">
@@ -188,7 +188,7 @@ clusters via the labels or the cluster-claims. The placement module is
completely decoupled from the execution, the output from placement will
be merely a list of names of the matched clusters in the `PlacementDecision`
API, so the consumer controller of the decision output can reactively
discovery the topology or availability change from the managed clusters by
discover the topology or availability changes from the managed clusters by
simply list-watching the decision API.
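
As a sketch of what such a consumer list-watches (names are illustrative), a `PlacementDecision` carries only the matched cluster names in its status:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: PlacementDecision
metadata:
  name: placement-example-decision-1
  namespace: default
  labels:
    # Links the decision back to its owning Placement.
    cluster.open-cluster-management.io/placement: placement-example
status:
  decisions:
  # Consumers react to clusters appearing in or dropping out of this list.
  - clusterName: cluster1
    reason: ""
  - clusterName: cluster2
    reason: ""
```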


2 changes: 1 addition & 1 deletion content/en/docs/concepts/cluster-inventory/clusterclaim.md
@@ -16,7 +16,7 @@ the status of the corresponding `ManagedCluster` object on the hub.

## Usage

`ClusterCaim` is used to specify additional properties of the managed cluster like
`ClusterClaim` is used to specify additional properties of the managed cluster like
the clusterID, version, vendor and cloud provider. We defined some reserved `ClusterClaims`
like `id.k8s.io` which is a unique identifier for the managed cluster.
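
A minimal sketch of one such claim as it appears on a managed cluster (the value is illustrative):

```yaml
apiVersion: cluster.open-cluster-management.io/v1alpha1
kind: ClusterClaim
metadata:
  # One of the reserved claim names described above.
  name: id.k8s.io
spec:
  # A unique identifier for this managed cluster (illustrative value).
  value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c
```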

6 changes: 3 additions & 3 deletions content/en/docs/concepts/cluster-inventory/managedcluster.md
@@ -43,8 +43,8 @@ will be:
- `CertificateSigningRequest`'s "get", "list", "watch", "create", "update".
- `ManagedCluster`'s "get", "list", "create", "update"

Note that ideally the bootstrap kubeconfig is supposed to live shortly
(hour-ish) after signed by the hub cluster so that it won't be abused by
Note that ideally the bootstrap kubeconfig is supposed to live for a short time
(hour-ish) after being signed by the hub cluster so that it won't be abused by
unwelcome clients.

Last but not least, you can always live an easier life by leveraging OCM's
@@ -56,7 +56,7 @@ When we're registering a new cluster into OCM, the registration agent will be
starting by creating an unaccepted `ManagedCluster` into the hub cluster along
with a temporary [CertificateSigningRequest (CSR)](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/)
resource. The cluster will be accepted by the hub control plane, if the
following requirements is meet:
following requirements are met:

- The CSR is approved and signed by any certificate provider setting filling
`.status.certificate` with legit X.509 certificates.
@@ -48,7 +48,7 @@ The following picture shows the hierarchies of how the cluster set works:
<img src="/clusterset-explain.png" alt="Clusterset" style="margin: 0 auto; width: 90%">
</div>

## Operates ManagedClusterSet using clusteradm
## Operating ManagedClusterSet using clusteradm

### Creating a ManagedClusterSet

@@ -123,8 +123,8 @@ $ clusteradm get clustersets
└── <Status> 1 ManagedClusters selected
```

So far we successfully created a new cluster set containing 1 cluster and bind
it a "workspace namespace".
So far we successfully created a new cluster set containing 1 cluster and bound
it to a "workspace namespace".

## A glance at the "ManagedClusterSet" API

6 changes: 3 additions & 3 deletions content/en/docs/concepts/content-placement/placement.md
@@ -329,7 +329,7 @@ The following example shows how to select clusters with prioritizers.
weight 3 and addon score cpuratio with weight 1. Go into the [Extensible scheduling](#extensible-scheduling) section
to learn more about addon score.
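
Since the diff clips the example itself, here is a rough sketch of such a Placement; the built-in prioritizer name (`ResourceAllocatableMemory`) and the addon score's resource name are assumptions, not the file's exact content:

```yaml
apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: placement-prioritizers
  namespace: default
spec:
  numberOfClusters: 2
  prioritizerPolicy:
    mode: Exact
    configurations:
    # Built-in prioritizer with weight 3 (assumed to be the one referenced).
    - scoreCoordinate:
        builtIn: ResourceAllocatableMemory
      weight: 3
    # Addon score "cpuratio" with weight 1, read from an AddOnPlacementScore.
    - scoreCoordinate:
        type: AddOn
        addOn:
          resourceName: default   # assumed AddOnPlacementScore resource name
          scoreName: cpuratio
      weight: 1
```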

In `prioritizerPolicy` section, it includes the following fields:
The `prioritizerPolicy` section includes the following fields:
- `mode` is either `Exact`, `Additive` or `""`, where `""` is `Additive` by default.
- In `Additive` mode, any prioritizer not explicitly enumerated is enabled
in its default `Configurations`, in which `Steady` and `Balance` prioritizers have
@@ -615,10 +615,10 @@ Events:
Normal ScoreUpdate 3s placementController cluster1:200 cluster2:145 cluster3:189 cluster4:200
```
The placement controller will give a score to each filtered `ManagedCluster` and generate an event for it. When the cluster score
changes, a new event will generate. You can check the score of each cluster in the `Placment` events, to know why some clusters with lower score are not selected.
changes, a new event will be generated. You can check the score of each cluster in the `Placement` events, to know why some clusters with lower score are not selected.

### Debug
If you want to know more defails of how clusters are selected in each step, can following below step to access the debug endpoint.
If you want to know more details of how clusters are selected in each step, you can follow the steps below to access the debug endpoint.

Create clusterrole "debugger" to access debug path and bind this to anonymous user.

4 changes: 2 additions & 2 deletions content/en/docs/concepts/work-distribution/manifestwork.md
@@ -588,7 +588,7 @@ The second `ManifestWork` only defines `replicas` in the manifest, so it takes t
first `ManifestWork` is updated to add `replicas` field with different value, it will get conflict condition and
manifest will not be updated by it.

Instead of create the second `ManifestWork`, user can also set HPA for this deployment. HPA will also take the ownership
Instead of creating the second `ManifestWork`, user can also set HPA for this deployment. HPA will also take the ownership
of `replicas`, and the update of `replicas` field in the first `ManifestWork` will return conflict condition.
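
For reference, a sketch of a `ManifestWork` like the second one described, owning only `replicas` via Server Side Apply (resource names are illustrative):

```yaml
apiVersion: work.open-cluster-management.io/v1
kind: ManifestWork
metadata:
  namespace: cluster1
  name: replicas-owner
spec:
  workload:
    manifests:
    # Only the fields this ManifestWork should take ownership of are listed.
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        namespace: default
        name: nginx
      spec:
        replicas: 3
  manifestConfigs:
  - resourceIdentifier:
      group: apps
      resource: deployments
      namespace: default
      name: nginx
    updateStrategy:
      type: ServerSideApply
```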

## Ignore fields in Server Side Apply
@@ -660,7 +660,7 @@ there are several options:
- Option 3: Create role/clusterRole roleBinding/clusterRoleBinding for the `klusterlet-work-sa` service account;
(Deprecated since OCM version >= v0.12.0, use the Option 1 instead)

Below is an example use ManifestWork to give the work-agent permission for resource `machines.cluster.x-k8s.io`
Below is an example using ManifestWork to give the work-agent permission for resource `machines.cluster.x-k8s.io`

- Option 1: Use label `"open-cluster-management.io/aggregate-to-work": "true"` to aggregate the permission; Recommended
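
A sketch of Option 1 for the `machines.cluster.x-k8s.io` example above (the role name and verb list are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: open-cluster-management:aggregate-work-machines
  labels:
    # This label aggregates the rules below into the work agent's permissions.
    open-cluster-management.io/aggregate-to-work: "true"
rules:
- apiGroups: ["cluster.x-k8s.io"]
  resources: ["machines"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```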

@@ -92,7 +92,7 @@ spec:
- date; echo Hello from the Kubernetes cluster
```
The **PlacementRefs** uses the Rollout Strategy [API](https://github.com/open-cluster-management-io/api/blob/main/cluster/v1alpha1/types_rolloutstrategy.go) to apply the manifestWork to the selected clusters.
In the example above; the placementRefs refers to three placements; placement-rollout-all, placement-rollout-progressive and placement-rollout-progressive-per-group. For more info regards the rollout strategies check the Rollout Strategy section at the [placement](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/blob/main/content/en/concepts/placement.md) document.
In the example above, the placementRefs refer to three placements: placement-rollout-all, placement-rollout-progressive and placement-rollout-progressive-per-group. For more info regarding the rollout strategies, check the Rollout Strategy section in the [placement](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/blob/main/content/en/concepts/placement.md) document.
**Note:** The placement reference must be in the same namespace as the manifestWorkReplicaSet.
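
A sketch of how those three placementRefs might be declared, each with its own rollout strategy (the `maxConcurrency` value and the empty workload are assumptions for brevity):

```yaml
apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
metadata:
  name: mwrset-sample
  namespace: ocm-ns
spec:
  placementRefs:
  - name: placement-rollout-all
    rolloutStrategy:
      type: All
  - name: placement-rollout-progressive
    rolloutStrategy:
      type: Progressive
      progressive:
        maxConcurrency: 2   # assumed value
  - name: placement-rollout-progressive-per-group
    rolloutStrategy:
      type: ProgressivePerGroup
  manifestWorkTemplate:
    workload:
      manifests: []   # manifest payload omitted for brevity
```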

## Status tracking
@@ -105,7 +105,7 @@ The manifestWorkReplicaSet has three status conditions;
1. **PlacementRolledOut** verify the rollout strategy status; progressing or complete.
1. **ManifestWorkApplied** verify the created manifestWork status; applied, progressing, degraded or available.

The manifestWorkReplicaSet determine the ManifestWorkApplied condition status based on the resource state (applied or available) of each manifestWork.
The manifestWorkReplicaSet determines the ManifestWorkApplied condition status based on the resource state (applied or available) of each manifestWork.

Here is an example.

@@ -194,7 +194,7 @@ spec:
- feature: ManifestWorkReplicaSet
mode: Enable
```
In order to assure the ManifestWorkReplicaSet has been enabled successfully check the cluster-manager using the command below
In order to ensure the ManifestWorkReplicaSet has been enabled successfully, check the cluster-manager using the command below

```shell
$ oc get ClusterManager cluster-manager -o yaml
4 changes: 2 additions & 2 deletions content/en/docs/developer-guides/addon.md
@@ -332,7 +332,7 @@ We support 3 kinds of health prober types to monitor the healthiness of add-on a
3. **DeploymentAvailability**

`DeploymentAvailability` health prober indicates the healthiness of the add-on is connected to the availability of
the corresponding agent deployment resources on the managed cluster. It's applicable to those add-ons that running
the corresponding agent deployment resources on the managed cluster. It's applicable to those add-ons that run
`Deployment` type workload on the managed cluster. The add-on manager will check if the `readyReplicas` of the
add-on agent deployment is more than 1 to set the addon Status.

@@ -348,7 +348,7 @@

`WorkloadAvailability` health prober indicates the healthiness of the add-on is connected to the availability of
the corresponding agent workload resources(only `Deployment` and `DaemonSet` are supported for now) on the managed
cluster. It's applicable to those add-ons that running `Deployment` and/or `DeamonSet` workloads on the managed
cluster. It's applicable to those add-ons that run `Deployment` and/or `DaemonSet` workloads on the managed
cluster. The add-on manager will check if `readyReplicas > 1` for each `Deployment` and
`NumberReady == DesiredNumberScheduled` for each `DaemonSet` of the add-on agent to set the addon Status.

@@ -27,7 +27,7 @@ Feature gates follow a standard lifecycle:
|--------------|---------|-------|-------------|
| `DefaultClusterSet` | `true` | Alpha | When it is enabled, it will make registration hub controller to maintain a default clusterset and a global clusterset. Adds clusters without cluster set labels to the default cluster set. All clusters will be included to the global clusterset.|
| `V1beta1CSRAPICompatibility` | `false` | Alpha | When it is enabled, it will make the spoke registration agent to issue CSR requests via V1beta1 api.|
| `ManagedClusterAutoApproval` | `false` | Alpha | When it is enabled, it will approve a managed cluster registraion request automatically. |
| `ManagedClusterAutoApproval` | `false` | Alpha | When it is enabled, it will approve a managed cluster registration request automatically. |
| `ResourceCleanup` | `true` | Beta | When it is enabled, it will start gc controller to clean up resources in cluster ns after cluster is deleted. |
| `ClusterProfile` | `false` | Alpha | When it is enabled, it will start new controller in the Hub that can be used to sync ManagedCluster to ClusterProfile.|
| `ClusterImporter` | `false` | Alpha | When it is enabled, it will enable the auto import of managed cluster for certain cluster providers, e.g. cluster-api.|
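
These registration gates are toggled on the hub's `ClusterManager` resource; a minimal sketch (the gate chosen here is illustrative, and the `featureGates` stanza mirrors the one shown later in this diff):

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  name: cluster-manager
spec:
  registrationConfiguration:
    featureGates:
    - feature: ManagedClusterAutoApproval
      mode: Enable
```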
@@ -56,7 +56,7 @@ Feature gates follow a standard lifecycle:

| Feature Gate | Default | Stage | Description |
|--------------|---------|-------|-------------|
| `ExecutorValidatingCaches` | `false` | Alpha | When it is enabled, it will start a new controller in the wokrk agent to cache subject access review validating results for executors.|
| `ExecutorValidatingCaches` | `false` | Alpha | When it is enabled, it will start a new controller in the work agent to cache subject access review validating results for executors.|
| `RawFeedbackJsonString` | `false` | Alpha | When it is enabled, it will make the work agent to return the feedback result as a json string if the result is not a scalar value.|

### Addon Management Features
4 changes: 2 additions & 2 deletions content/en/docs/getting-started/administration/monitoring.md
@@ -9,7 +9,7 @@ In this page, we provide a way to monitor your OCM environment using Prometheus-

## Before you get started

You must have a OCM environment setuped. You can also follow our recommended [quick start guide]({{< ref "docs/getting-started/quick-start" >}}) to set up a playgroud OCM environment.
You must have an OCM environment set up. You can also follow our recommended [quick start guide]({{< ref "docs/getting-started/quick-start" >}}) to set up a playground OCM environment.

And then please [install the Prometheus-Operator](https://prometheus-operator.dev/docs/prologue/quick-start/) in your hub cluster. You can also run the following commands copied from the official doc:

@@ -50,7 +50,7 @@ rate(apiserver_request_total{resource=~"managedclusters|managedclusteraddons|man

## Visualized with Grafana

We provide a intial grafana dashboard for you to visualize the metrics. But you can also customize your own dashboard.
We provide an initial grafana dashboard for you to visualize the metrics. But you can also customize your own dashboard.

First, use the following command to proxy grafana service:

4 changes: 2 additions & 2 deletions content/en/docs/getting-started/administration/upgrading.md
@@ -116,7 +116,7 @@ exactly the OCM's overall release version.

#### Upgrading the core components

After the upgrading of registration-operator is done, it's about time to surge
After the upgrading of registration-operator is done, it's time to upgrade
the working modules of OCM. Go on and edit the `clustermanager` custom resource
to prescribe the registration-operator to perform the automated upgrading:
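
In practice that edit typically points the component image pull specs at the target release; a sketch with an assumed version tag:

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: ClusterManager
metadata:
  name: cluster-manager
spec:
  # Bump each component image to the release you are upgrading to.
  registrationImagePullSpec: quay.io/open-cluster-management/registration:v0.13.0
  workImagePullSpec: quay.io/open-cluster-management/work:v0.13.0
  placementImagePullSpec: quay.io/open-cluster-management/placement:v0.13.0
```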

@@ -196,4 +196,4 @@ availability in case any of the managed clusters are running into failure:
$ kubectl get managedclusters
```

And the upgrading is all set if all the steps above is succeeded.
And the upgrading is all set if all the steps above have succeeded.