diff --git a/content/en/_index.md b/content/en/_index.md
index e16550860..009fc0ad2 100644
--- a/content/en/_index.md
+++ b/content/en/_index.md
@@ -44,7 +44,7 @@ Use application lifecycle to create your application and deliver hybrid apps acr
{{% /blocks/feature %}}
{{% blocks/feature icon="fa-cog" title="Configure, Secure, and Manage Your Resources" url="docs/getting-started/integration/policy-controllers" %}}
-Policy and configuration management uses labels to help you deploy policies and control consistently across your resources. Keep your resources secure by using access control and manage for your quota and cost.
+Policy and configuration management uses labels to help you deploy policies and controls consistently across your resources. Keep your resources secure by using access control, and manage your quota and cost.
{{% /blocks/feature %}}
{{% /blocks/section %}}
diff --git a/content/en/blog/multiplehubs/index.md b/content/en/blog/multiplehubs/index.md
index f2bd35376..1b8851134 100644
--- a/content/en/blog/multiplehubs/index.md
+++ b/content/en/blog/multiplehubs/index.md
@@ -6,7 +6,7 @@ toc_hide: true
---
-The `MultipleHubs` is a new feature in Open Cluster Management (OCM) that allows you to configure a list of bootstrapkubeconfigs of multiple hubs. This feature is designed to provide a high availability (HA) solution of hub clusters. In this blog, we will introduce the MultipleHubs feature and how to use it.
+`MultipleHubs` is a new feature in Open Cluster Management (OCM) that allows you to configure a list of bootstrap kubeconfigs for multiple hubs. This feature is designed to provide a high availability (HA) solution for hub clusters. In this blog, we will introduce the MultipleHubs feature and how to use it.
The high availability of hub clusters means that if one hub cluster is down, the managed clusters can still communicate with other hub clusters. Users can also specify the hub cluster that the managed cluster should connect to by configuring the `ManagedCluster` resource.
@@ -57,7 +57,7 @@ The `hubConnectionTimeoutSeconds` is the timeout for the managed cluster to conn
Currently, the `MultipleHubs` feature only supports the `LocalSecrets` type of `bootstrapKubeConfigs`.
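+
+For illustration, a minimal sketch of a `Klusterlet` carrying such a configuration might look like the following (the secret names and timeout value are placeholders, and the exact field names should be verified against the Klusterlet API reference for your OCM version):
+
+```yaml
+apiVersion: operator.open-cluster-management.io/v1
+kind: Klusterlet
+metadata:
+  name: klusterlet
+spec:
+  registrationConfiguration:
+    featureGates:
+      - feature: MultipleHubs
+        mode: Enable
+    bootstrapKubeConfigs:
+      type: "LocalSecrets"
+      localSecretsConfig:
+        # placeholder timeout, see hubConnectionTimeoutSeconds above
+        hubConnectionTimeoutSeconds: 600
+        kubeConfigSecrets:
+          - name: "hub1-bootstrap"
+          - name: "hub2-bootstrap"
+```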
-As we mentioned before, you can also specify the hub's connectivities in the `ManagedCluster` resource from the hub side. We using the `hubAcceptsClient` field in the `ManagedCluster` resource to specify whether the hub cluster accepts the managed cluster. The following is an example of the `ManagedCluster` resource:
+As we mentioned before, you can also specify the hub's connectivity in the `ManagedCluster` resource from the hub side. We use the `hubAcceptsClient` field in the `ManagedCluster` resource to specify whether the hub cluster accepts the managed cluster. The following is an example of the `ManagedCluster` resource:
```yaml
apiVersion: cluster.open-cluster-management.io/v1
diff --git a/content/en/docs/concepts/add-on-extensibility/addon.md b/content/en/docs/concepts/add-on-extensibility/addon.md
index 56bfa9e50..9f0dda3ab 100644
--- a/content/en/docs/concepts/add-on-extensibility/addon.md
+++ b/content/en/docs/concepts/add-on-extensibility/addon.md
@@ -15,15 +15,15 @@ to help developers to develop an extension based on the foundation components
for the purpose of working with multiple clusters in custom cases. A typical
addon should consist of two kinds of components:
-- __Addon Agent__: A kubernetes controller *in the managed cluster* that manages
+- __Addon Agent__: A Kubernetes controller *in the managed cluster* that manages
the managed cluster for the hub admins. A typical addon agent is expected to
be working by subscribing the prescriptions (e.g. in forms of CustomResources)
from the hub cluster and then consistently reconcile the state of the managed
- cluster like an ordinary kubernetes operator does.
+ cluster like an ordinary Kubernetes operator does.
-- __Addon Manager__: A kubernetes controller *in the hub cluster* that applies
+- __Addon Manager__: A Kubernetes controller *in the hub cluster* that applies
manifests to the managed clusters via the [ManifestWork]({{< ref "docs/concepts/work-distribution/manifestwork" >}})
- api. In addition to resource dispatching, the manager can optionally manage
+ API. In addition to resource dispatching, the manager can optionally manage
the lifecycle of CSRs for the addon agents or even the RBAC permission bond
to the CSRs' requesting identity.
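+
+As a concrete touchpoint, enabling an addon for a given managed cluster is done by creating a `ManagedClusterAddOn` in that cluster's namespace on the hub. A minimal sketch (the addon and cluster names are placeholders) could look like:
+
+```yaml
+apiVersion: addon.open-cluster-management.io/v1alpha1
+kind: ManagedClusterAddOn
+metadata:
+  # placeholder addon name
+  name: helloworld
+  # cluster namespace of the managed cluster the addon is enabled for
+  namespace: cluster1
+spec:
+  installNamespace: open-cluster-management-agent-addon
+```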
diff --git a/content/en/docs/concepts/architecture.md b/content/en/docs/concepts/architecture.md
index eb7acfe0e..5d21de4dc 100644
--- a/content/en/docs/concepts/architecture.md
+++ b/content/en/docs/concepts/architecture.md
@@ -10,8 +10,8 @@ This page is an overview of open cluster management.
## Overview
__Open Cluster Management__ (OCM) is a powerful, modular, extensible platform
-for Kubernetes multi-cluster orchestration. Learning from the past failing
-lesson of building Kubernetes federation systems in the Kubernetes community,
+for Kubernetes multi-cluster orchestration. Learning from the lessons of past
+failed attempts at building Kubernetes federation systems in the Kubernetes community,
in OCM we will be jumping out of the legacy centric, imperative architecture of
[Kubefed v2](https://github.com/kubernetes-sigs/kubefed) and embracing the
"hub-agent" architecture which is identical to the original pattern of
@@ -25,10 +25,10 @@ the two models we will be frequently using throughout the world of OCM:
plane of OCM. Generally the hub cluster is supposed to be a light-weight
Kubernetes cluster hosting merely a few fundamental controllers and services.
-* __Klusterlet__: Indicating the clusters that being managed by the hub
+* __Klusterlet__: Indicating the clusters that are being managed by the hub
cluster. Klusterlet might also be called "managed cluster" or "spoke cluster". The
- klusterlet is supposed to actively __pulling__ the latest prescriptions from
- the hub cluster and consistently reconciles the physical Kubernetes cluster
+ klusterlet is supposed to actively __pull__ the latest prescriptions from
+ the hub cluster and consistently reconcile the physical Kubernetes cluster
to the expected state.
### "Hub-spoke" architecture
@@ -46,12 +46,12 @@ the managed clusters or be buried in sending requests against the clusters.
Imagine in a world where there's no kubelet in Kubernetes and its control plane
is directly operating the container daemons, it will be extremely hard for a
centric controller to manage a cluster of 5k+ nodes. Likewise, that's how OCM
-trying to breach the bottleneck of scalability, by dividing and offloading the
+tries to break through the scalability bottleneck by dividing and offloading the
execution into separated agents. So it's always feasible for a hub cluster to
-accept and manage thousand-ish clusters.
+accept and manage thousands of clusters.
Each klusterlet will be working independently and autonomously, so they have
-a weak dependency to the availability of the hub cluster. If the hub goes down
+a weak dependency on the availability of the hub cluster. If the hub goes down
(e.g. during maintenance or network partition) the klusterlet or other OCM
agents working in the managed cluster are supposed to keep actively managing
the hosting cluster until it re-connects. Additionally if the hub cluster and
@@ -112,7 +112,7 @@ out a registered cluster by denying the rotation of hub cluster's certificate,
on the other hand from the perspective of a managed cluster's admin, he can
either brutally deleting the agent instances or revoking the granted RBAC
permissions for the agents. Note that the hub controller will be automatically
-preparing environment for the newly registered cluster and cleaning up neatly
+preparing the environment for the newly registered cluster and cleaning up neatly
upon kicking a managed cluster.
@@ -188,7 +188,7 @@ clusters via the labels or the cluster-claims. The placement module is
completely decoupled from the execution, the output from placement will
be merely a list of names of the matched clusters in the `PlacementDecision`
API, so the consumer controller of the decision output can reactively
-discovery the topology or availability change from the managed clusters by
+discover the topology or availability changes from the managed clusters by
simply list-watching the decision API.
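+
+For example, a consumer can list the decisions that belong to a given placement by the placement label on the `PlacementDecision` objects (the namespace and placement name below are placeholders):
+
+```shell
+kubectl -n default get placementdecisions \
+  -l cluster.open-cluster-management.io/placement=placement1 -o yaml
+```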
diff --git a/content/en/docs/concepts/cluster-inventory/clusterclaim.md b/content/en/docs/concepts/cluster-inventory/clusterclaim.md
index 8a5f842e5..847ba7604 100644
--- a/content/en/docs/concepts/cluster-inventory/clusterclaim.md
+++ b/content/en/docs/concepts/cluster-inventory/clusterclaim.md
@@ -16,7 +16,7 @@ the status of the corresponding `ManagedCluster` object on the hub.
## Usage
-`ClusterCaim` is used to specify additional properties of the managed cluster like
+`ClusterClaim` is used to specify additional properties of the managed cluster like
the clusterID, version, vendor and cloud provider. We defined some reserved `ClusterClaims`
like `id.k8s.io` which is a unique identifier for the managed cluster.
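+
+For instance, a cluster reporting its identifier through the reserved `id.k8s.io` claim would carry a cluster-scoped resource like the following on the managed cluster (the value is a placeholder):
+
+```yaml
+apiVersion: cluster.open-cluster-management.io/v1alpha1
+kind: ClusterClaim
+metadata:
+  name: id.k8s.io
+spec:
+  # placeholder unique identifier for the managed cluster
+  value: 95f91f25-d7a2-4fc3-9237-2ef633d8451c
+```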
diff --git a/content/en/docs/concepts/cluster-inventory/managedcluster.md b/content/en/docs/concepts/cluster-inventory/managedcluster.md
index 35e98060f..ecd7e0378 100644
--- a/content/en/docs/concepts/cluster-inventory/managedcluster.md
+++ b/content/en/docs/concepts/cluster-inventory/managedcluster.md
@@ -43,8 +43,8 @@ will be:
- `CertificateSigningRequest`'s "get", "list", "watch", "create", "update".
- `ManagedCluster`'s "get", "list", "create", "update"
-Note that ideally the bootstrap kubeconfig is supposed to live shortly
-(hour-ish) after signed by the hub cluster so that it won't be abused by
+Note that ideally the bootstrap kubeconfig is supposed to live for a short time
+(hour-ish) after being signed by the hub cluster so that it won't be abused by
unwelcome clients.
Last but not least, you can always live an easier life by leveraging OCM's
@@ -56,7 +56,7 @@ When we're registering a new cluster into OCM, the registration agent will be
starting by creating an unaccepted `ManagedCluster` into the hub cluster along
with a temporary [CertificateSigningRequest (CSR)](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/)
resource. The cluster will be accepted by the hub control plane, if the
-following requirements is meet:
+following requirements are met:
- The CSR is approved and signed by any certificate provider setting filling
`.status.certificate` with legit X.509 certificates.
diff --git a/content/en/docs/concepts/cluster-inventory/managedclusterset.md b/content/en/docs/concepts/cluster-inventory/managedclusterset.md
index dd97b6731..13415d22d 100644
--- a/content/en/docs/concepts/cluster-inventory/managedclusterset.md
+++ b/content/en/docs/concepts/cluster-inventory/managedclusterset.md
@@ -48,7 +48,7 @@ The following picture shows the hierarchies of how the cluster set works:
-## Operates ManagedClusterSet using clusteradm
+## Operating ManagedClusterSet using clusteradm
### Creating a ManagedClusterSet
@@ -123,8 +123,8 @@ $ clusteradm get clustersets
└── 1 ManagedClusters selected
```
-So far we successfully created a new cluster set containing 1 cluster and bind
-it a "workspace namespace".
+So far we have successfully created a new cluster set containing 1 cluster and bound
+it to a "workspace namespace".
## A glance at the "ManagedClusterSet" API
diff --git a/content/en/docs/concepts/content-placement/placement.md b/content/en/docs/concepts/content-placement/placement.md
index 53cb04478..b962d2e33 100644
--- a/content/en/docs/concepts/content-placement/placement.md
+++ b/content/en/docs/concepts/content-placement/placement.md
@@ -329,7 +329,7 @@ The following example shows how to select clusters with prioritizers.
weight 3 and addon score cpuratio with weight 1. Go into the [Extensible scheduling](#extensible-scheduling) section
to learn more about addon score.
-In `prioritizerPolicy` section, it includes the following fields:
+The `prioritizerPolicy` section includes the following fields:
- `mode` is either `Exact`, `Additive` or `""`, where `""` is `Additive` by default.
- In `Additive` mode, any prioritizer not explicitly enumerated is enabled
in its default `Configurations`, in which `Steady` and `Balance` prioritizers have
@@ -615,10 +615,10 @@ Events:
Normal ScoreUpdate 3s placementController cluster1:200 cluster2:145 cluster3:189 cluster4:200
```
The placement controller will give a score to each filtered `ManagedCluster` and generate an event for it. When the cluster score
-changes, a new event will generate. You can check the score of each cluster in the `Placment` events, to know why some clusters with lower score are not selected.
+changes, a new event will be generated. You can check the score of each cluster in the `Placement` events to understand why some clusters with lower scores are not selected.
### Debug
-If you want to know more defails of how clusters are selected in each step, can following below step to access the debug endpoint.
+If you want to know more details about how clusters are selected in each step, you can follow the steps below to access the debug endpoint.
Create clusterrole "debugger" to access debug path and bind this to anonymous user.
diff --git a/content/en/docs/concepts/work-distribution/manifestwork.md b/content/en/docs/concepts/work-distribution/manifestwork.md
index 3de3eab79..74725ff02 100644
--- a/content/en/docs/concepts/work-distribution/manifestwork.md
+++ b/content/en/docs/concepts/work-distribution/manifestwork.md
@@ -588,7 +588,7 @@ The second `ManifestWork` only defines `replicas` in the manifest, so it takes t
first `ManifestWork` is updated to add `replicas` field with different value, it will get conflict condition and
manifest will not be updated by it.
-Instead of create the second `ManifestWork`, user can also set HPA for this deployment. HPA will also take the ownership
+Instead of creating the second `ManifestWork`, the user can also set an HPA for this deployment. The HPA will also take ownership
of `replicas`, and the update of `replicas` field in the first `ManifestWork` will return conflict condition.
## Ignore fields in Server Side Apply
@@ -660,7 +660,7 @@ there are several options:
- Option 3: Create role/clusterRole roleBinding/clusterRoleBinding for the `klusterlet-work-sa` service account;
(Deprecated since OCM version >= v0.12.0, use the Option 1 instead)
-Below is an example use ManifestWork to give the work-agent permission for resource `machines.cluster.x-k8s.io`
+Below is an example using ManifestWork to give the work-agent permission for resource `machines.cluster.x-k8s.io`
- Option 1: Use label `"open-cluster-management.io/aggregate-to-work": "true"` to aggregate the permission; Recommended
diff --git a/content/en/docs/concepts/work-distribution/manifestworkreplicaset.md b/content/en/docs/concepts/work-distribution/manifestworkreplicaset.md
index bcd80be40..326f5fa05 100644
--- a/content/en/docs/concepts/work-distribution/manifestworkreplicaset.md
+++ b/content/en/docs/concepts/work-distribution/manifestworkreplicaset.md
@@ -92,7 +92,7 @@ spec:
- date; echo Hello from the Kubernetes cluster
```
The **PlacementRefs** uses the Rollout Strategy [API](https://github.com/open-cluster-management-io/api/blob/main/cluster/v1alpha1/types_rolloutstrategy.go) to apply the manifestWork to the selected clusters.
-In the example above; the placementRefs refers to three placements; placement-rollout-all, placement-rollout-progressive and placement-rollout-progressive-per-group. For more info regards the rollout strategies check the Rollout Strategy section at the [placement](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/blob/main/content/en/concepts/placement.md) document.
+In the example above, the placementRefs refer to three placements: placement-rollout-all, placement-rollout-progressive and placement-rollout-progressive-per-group. For more info regarding the rollout strategies, check the Rollout Strategy section in the [placement](https://github.com/open-cluster-management-io/open-cluster-management-io.github.io/blob/main/content/en/concepts/placement.md) document.
**Note:** The placement reference must be in the same namespace as the manifestWorkReplicaSet.
## Status tracking
@@ -105,7 +105,7 @@ The manifestWorkReplicaSet has three status conditions;
1. **PlacementRolledOut** verify the rollout strategy status; progressing or complete.
1. **ManifestWorkApplied** verify the created manifestWork status; applied, progressing, degraded or available.
-The manifestWorkReplicaSet determine the ManifestWorkApplied condition status based on the resource state (applied or available) of each manifestWork.
+The manifestWorkReplicaSet determines the ManifestWorkApplied condition status based on the resource state (applied or available) of each manifestWork.
Here is an example.
@@ -194,7 +194,7 @@ spec:
- feature: ManifestWorkReplicaSet
mode: Enable
```
-In order to assure the ManifestWorkReplicaSet has been enabled successfully check the cluster-manager using the command below
+To ensure the ManifestWorkReplicaSet feature has been enabled successfully, check the cluster-manager using the command below
```shell
$ oc get ClusterManager cluster-manager -o yml
diff --git a/content/en/docs/developer-guides/addon.md b/content/en/docs/developer-guides/addon.md
index fb04d40fe..685a7c312 100644
--- a/content/en/docs/developer-guides/addon.md
+++ b/content/en/docs/developer-guides/addon.md
@@ -332,7 +332,7 @@ We support 3 kinds of health prober types to monitor the healthiness of add-on a
3. **DeploymentAvailability**
`DeploymentAvailability` health prober indicates the healthiness of the add-on is connected to the availability of
- the corresponding agent deployment resources on the managed cluster. It's applicable to those add-ons that running
+ the corresponding agent deployment resources on the managed cluster. It's applicable to those add-ons that run
`Deployment` type workload on the managed cluster. The add-on manager will check if the `readyReplicas` of the
add-on agent deployment is more than 1 to set the addon Status.
@@ -348,7 +348,7 @@ We support 3 kinds of health prober types to monitor the healthiness of add-on a
`WorkloadAvailability` health prober indicates the healthiness of the add-on is connected to the availability of
the corresponding agent workload resources(only `Deployment` and `DaemonSet` are supported for now) on the managed
- cluster. It's applicable to those add-ons that running `Deployment` and/or `DeamonSet` workloads on the managed
+ cluster. It's applicable to those add-ons that run `Deployment` and/or `DaemonSet` workloads on the managed
cluster. The add-on manager will check if `readyReplicas > 1` for each `Deployment` and
`NumberReady == DesiredNumberScheduled` for each `DaemonSet` of the add-on agent to set the addon Status.
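+
+For addon developers using the addon-framework, the prober type described above is wired in when building the agent addon. A rough sketch, assuming a template-based addon built from an embedded `manifests` directory (the addon name is a placeholder):
+
+```go
+package main
+
+import (
+    "embed"
+
+    "open-cluster-management.io/addon-framework/pkg/addonfactory"
+    "open-cluster-management.io/addon-framework/pkg/agent"
+)
+
+//go:embed manifests
+var manifestsFS embed.FS
+
+func buildAgentAddon() (agent.AgentAddon, error) {
+    // Report the addon's health from the availability of its agent
+    // Deployment on the managed cluster (DeploymentAvailability prober).
+    return addonfactory.NewAgentAddonFactory("helloworld", manifestsFS, "manifests").
+        WithAgentHealthProber(&agent.HealthProber{
+            Type: agent.HealthProberTypeDeploymentAvailability,
+        }).
+        BuildTemplateAgentAddon()
+}
+```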
diff --git a/content/en/docs/getting-started/administration/featuregates.md b/content/en/docs/getting-started/administration/featuregates.md
index cf4dd5532..417fe4fe0 100644
--- a/content/en/docs/getting-started/administration/featuregates.md
+++ b/content/en/docs/getting-started/administration/featuregates.md
@@ -27,7 +27,7 @@ Feature gates follow a standard lifecycle:
|--------------|---------|-------|-------------|
| `DefaultClusterSet` | `true` | Alpha | When it is enabled, it will make registration hub controller to maintain a default clusterset and a global clusterset. Adds clusters without cluster set labels to the default cluster set. All clusters will be included to the global clusterset.|
| `V1beta1CSRAPICompatibility` | `false` | Alpha | When it is enabled, it will make the spoke registration agent to issue CSR requests via V1beta1 api.|
-| `ManagedClusterAutoApproval` | `false` | Alpha | When it is enabled, it will approve a managed cluster registraion request automatically. |
+| `ManagedClusterAutoApproval` | `false` | Alpha | When it is enabled, it will approve a managed cluster registration request automatically. |
| `ResourceCleanup` | `true` | Beta | When it is enabled, it will start gc controller to clean up resources in cluster ns after cluster is deleted. |
| `ClusterProfile` | `false` | Alpha | When it is enabled, it will start new controller in the Hub that can be used to sync ManagedCluster to ClusterProfile.|
| `ClusterImporter` | `false` | Alpha | When it is enabled, it will enable the auto import of managed cluster for certain cluster providers, e.g. cluster-api.|
@@ -56,7 +56,7 @@ Feature gates follow a standard lifecycle:
| Feature Gate | Default | Stage | Description |
|--------------|---------|-------|-------------|
-| `ExecutorValidatingCaches` | `false` | Alpha | When it is enabled, it will start a new controller in the wokrk agent to cache subject access review validating results for executors.|
+| `ExecutorValidatingCaches` | `false` | Alpha | When it is enabled, it will start a new controller in the work agent to cache subject access review validating results for executors.|
| `RawFeedbackJsonString` | `false` | Alpha | When it is enabled, it will make the work agent to return the feedback result as a json string if the result is not a scalar value.|
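+
+As an illustration, work agent feature gates such as the ones above are toggled through the `Klusterlet` resource; a minimal sketch (assuming the default klusterlet name) could look like:
+
+```yaml
+apiVersion: operator.open-cluster-management.io/v1
+kind: Klusterlet
+metadata:
+  name: klusterlet
+spec:
+  workConfiguration:
+    featureGates:
+      - feature: RawFeedbackJsonString
+        mode: Enable
+```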
### Addon Management Features
diff --git a/content/en/docs/getting-started/administration/monitoring.md b/content/en/docs/getting-started/administration/monitoring.md
index 1c06fdefe..59782b9dd 100644
--- a/content/en/docs/getting-started/administration/monitoring.md
+++ b/content/en/docs/getting-started/administration/monitoring.md
@@ -9,7 +9,7 @@ In this page, we provide a way to monitor your OCM environment using Prometheus-
## Before you get started
-You must have a OCM environment setuped. You can also follow our recommended [quick start guide]({{< ref "docs/getting-started/quick-start" >}}) to set up a playgroud OCM environment.
+You must have an OCM environment set up. You can also follow our recommended [quick start guide]({{< ref "docs/getting-started/quick-start" >}}) to set up a playground OCM environment.
And then please [install the Prometheus-Operator](https://prometheus-operator.dev/docs/prologue/quick-start/) in your hub cluster. You can also run the following commands copied from the official doc:
@@ -50,7 +50,7 @@ rate(apiserver_request_total{resource=~"managedclusters|managedclusteraddons|man
## Visualized with Grafana
-We provide a intial grafana dashboard for you to visualize the metrics. But you can also customize your own dashboard.
+We provide an initial Grafana dashboard for you to visualize the metrics. You can also customize your own dashboard.
First, use the following command to proxy grafana service:
diff --git a/content/en/docs/getting-started/administration/upgrading.md b/content/en/docs/getting-started/administration/upgrading.md
index 55548fdaf..27e915742 100644
--- a/content/en/docs/getting-started/administration/upgrading.md
+++ b/content/en/docs/getting-started/administration/upgrading.md
@@ -116,7 +116,7 @@ exactly the OCM's overall release version.
#### Upgrading the core components
-After the upgrading of registration-operator is done, it's about time to surge
+After the upgrade of the registration-operator is done, it's time to upgrade
the working modules of OCM. Go on and edit the `clustermanager` custom resource
to prescribe the registration-operator to perform the automated upgrading:
@@ -196,4 +196,4 @@ availability in case any of the managed clusters are running into failure:
$ kubectl get managedclusters
```
-And the upgrading is all set if all the steps above is succeeded.
+And the upgrading is all set if all the steps above have succeeded.
diff --git a/content/en/docs/getting-started/installation/register-a-cluster.md b/content/en/docs/getting-started/installation/register-a-cluster.md
index 49f989fc4..8bc9f29b9 100644
--- a/content/en/docs/getting-started/installation/register-a-cluster.md
+++ b/content/en/docs/getting-started/installation/register-a-cluster.md
@@ -7,7 +7,7 @@ After the cluster manager is installed on the hub cluster, you need to install t
-## Prerequisite
+## Prerequisites
- The managed clusters should be `v1.11+`.
- Ensure [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) and [kustomize](https://kubectl.docs.kubernetes.io/installation/kustomize/) are installed.
@@ -123,16 +123,16 @@ Available resource configuration flags:
- `--resource-limits`: Specifies resource limits as key-value pairs (e.g., `cpu=800m,memory=800Mi`)
- `--resource-requests`: Specifies resource requests as key-value pairs (e.g., `cpu=500m,memory=500Mi`)
-### Bootstrap a klusterlet in hosted mode(Optional)
+### Bootstrap a klusterlet in hosted mode (Optional)
Using the above command, the klusterlet components(registration-agent and work-agent) will be deployed on the managed
cluster, it is mandatory to expose the hub cluster to the managed cluster. We provide an option for running the
klusterlet components outside the managed cluster, for example, on the hub cluster(hosted mode).
-The hosted mode deploying is till in experimental stage, consider to use it only when:
+The hosted mode deployment is still in an experimental stage; consider using it only when:
-- want to reduce the footprints of the managed cluster.
-- do not want to expose the hub cluster to the managed cluster directly
+- you want to reduce the footprint of the managed cluster.
+- you do not want to expose the hub cluster to the managed cluster directly.
In hosted mode, the cluster where the klusterlet is running is called the hosting cluster. Running the following command
to the hosting cluster to register the managed cluster to the hub.
@@ -453,7 +453,7 @@ A manifestWork that applies a CRD and operator should be deleted after a manifes
The `ResourceCleanup` featureGate for cluster registration on the Hub cluster enables automatic cleanup of managedClusterAddons and manifestWorks within the cluster namespace after cluster unjoining.
**Version Compatibility:**
-- The `ResourceCleanup` featureGate was introdueced in OCM v0.13.0, and was **disabled by default** in OCM v0.16.0 and earlier versions. To activate it, need to modify the clusterManager CR configuration:
+- The `ResourceCleanup` featureGate was introduced in OCM v0.13.0, and was **disabled by default** in OCM v0.16.0 and earlier versions. To activate it, you need to modify the clusterManager CR configuration:
```yaml
registrationConfiguration:
featureGates:
diff --git a/content/en/docs/getting-started/installation/start-the-control-plane.md b/content/en/docs/getting-started/installation/start-the-control-plane.md
index c41a2fc1b..64c780a05 100644
--- a/content/en/docs/getting-started/installation/start-the-control-plane.md
+++ b/content/en/docs/getting-started/installation/start-the-control-plane.md
@@ -5,7 +5,7 @@ weight: 1
-## Prerequisite
+## Prerequisites
- The hub cluster should be `v1.19+`.
(To run on hub cluster version between \[`v1.16`, `v1.18`\],
diff --git a/content/en/docs/getting-started/integration/app-lifecycle.md b/content/en/docs/getting-started/integration/app-lifecycle.md
index 6014a0f34..bdc479875 100644
--- a/content/en/docs/getting-started/integration/app-lifecycle.md
+++ b/content/en/docs/getting-started/integration/app-lifecycle.md
@@ -28,11 +28,11 @@ where managed clusters pull and apply application configurations.
For more details, visit the
[Argo CD Pull Integration GitHub page](https://github.com/open-cluster-management-io/argocd-pull-integration).
-## Prerequisite
+## Prerequisites
You must meet the following prerequisites to install the application lifecycle management add-on:
-- Ensure [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) are installed.
+- Ensure [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) is installed.
- Ensure the OCM _cluster manager_ is installed. See [Start the control plane]({{< ref "docs/getting-started/installation/start-the-control-plane" >}}) for more information.
diff --git a/content/en/docs/getting-started/quick-start/_index.md b/content/en/docs/getting-started/quick-start/_index.md
index 9f8a9008b..8f3e8a529 100644
--- a/content/en/docs/getting-started/quick-start/_index.md
+++ b/content/en/docs/getting-started/quick-start/_index.md
@@ -22,13 +22,13 @@ curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/
## Setup hub and managed cluster
-Run the following command to quickly setup a hub cluster and 2 managed clusters by kind.
+Run the following command to quickly set up a hub cluster and 2 managed clusters using kind.
```shell
curl -L https://raw.githubusercontent.com/open-cluster-management-io/OCM/main/solutions/setup-dev-environment/local-up.sh | bash
```
-If you want to setup OCM in a production environment or on a different kubernetes distribution, please refer to the [Start the control plane]({{< ref "docs/getting-started/installation/start-the-control-plane" >}}) and [Register a cluster]({{< ref "docs/getting-started/installation/register-a-cluster" >}}) guides.
+If you want to set up OCM in a production environment or on a different Kubernetes distribution, please refer to the [Start the control plane]({{< ref "docs/getting-started/installation/start-the-control-plane" >}}) and [Register a cluster]({{< ref "docs/getting-started/installation/register-a-cluster" >}}) guides.
Alternatively, you can [deploy OCM declaratively using the FleetConfig Controller]({{< ref "docs/getting-started/integration/fleetconfig-controller" >}}).
diff --git a/content/en/docs/release/_index.md b/content/en/docs/release/_index.md
index 28b76628f..f3f9018cf 100644
--- a/content/en/docs/release/_index.md
+++ b/content/en/docs/release/_index.md
@@ -82,7 +82,7 @@ Stay connected and happy managing clusters!
---
## `0.16.0`, 16 March 2025
-The Open Cluster Management team is exicted to announce the release of OCM v0.16.0 with many new
+The Open Cluster Management team is excited to announce the release of OCM v0.16.0 with many new
features:
Breaking Changes:
diff --git a/content/en/docs/scenarios/deploy-kubernetes-resources.md b/content/en/docs/scenarios/deploy-kubernetes-resources.md
index cc5ce6ef4..8b90cc5e7 100644
--- a/content/en/docs/scenarios/deploy-kubernetes-resources.md
+++ b/content/en/docs/scenarios/deploy-kubernetes-resources.md
@@ -9,13 +9,13 @@ your managed clusters with OCM's `ManifestWork` API.
## Prerequisites
-Before we get start with the following tutorial, let's clarify a few terms
+Before we get started with the following tutorial, let's clarify a few terms
we're going to use in the context.
- __Cluster namespace__: After a managed cluster is successfully registered
- into the hub. The hub registration controller will be automatically
+ into the hub, the hub registration controller will be automatically
provisioning a `cluster namespace` dedicated for the cluster of which the
- name will be same as the managed cluster. The `cluster namespace` is used
+ name will be the same as the managed cluster. The `cluster namespace` is used
for storing any custom resources/configurations that effectively belongs
to the managed cluster.
@@ -288,7 +288,7 @@ is also protected by another finalizer named:
- "cluster.open-cluster-management.io/applied-manifest-work-cleanup"
This finalizer is supposed to be detached after the deployed local resources
-are *completely* removed from the manged cluster. With that being said, if any
+are *completely* removed from the managed cluster. With that being said, if any
deployed local resources are holding at the "Terminating" due to graceful
deletion. Both of its `ManifestWork` and `AppliedManifestWork` should stay
undeleted.
diff --git a/content/en/docs/scenarios/distribute-workload-with-placement.md b/content/en/docs/scenarios/distribute-workload-with-placement.md
index cb7d14498..20e1598ce 100644
--- a/content/en/docs/scenarios/distribute-workload-with-placement.md
+++ b/content/en/docs/scenarios/distribute-workload-with-placement.md
@@ -31,7 +31,7 @@ And in this article, we want to show you how to use `clusteradm` to deploy
## Prerequisites
-Before starting with the following steps, suggest you understand the content below.
+Before starting with the following steps, we suggest you understand the content below.
- [__Placement__]({{< ref "docs/concepts/content-placement/placement" >}}):
The `Placement` API is used to dynamically select a set of [`ManagedCluster`]({{< ref "docs/concepts/cluster-inventory/managedcluster" >}})
diff --git a/content/en/docs/scenarios/manage-cluster-with-multiple-hubs.md b/content/en/docs/scenarios/manage-cluster-with-multiple-hubs.md
index 25bf7d9cd..580919c47 100644
--- a/content/en/docs/scenarios/manage-cluster-with-multiple-hubs.md
+++ b/content/en/docs/scenarios/manage-cluster-with-multiple-hubs.md
@@ -26,13 +26,13 @@ With this architecture, the managed cluster needs more resources, including CPUs
An example built with [kind](https://kind.sigs.k8s.io) and [clusteradm](https://github.com/open-cluster-management-io/clusteradm/releases) can be found in [Manage a cluster with multiple hubs](https://github.com/open-cluster-management-io/OCM/tree/main/solutions/multiple-hubs).
### Run the agents in the hosted mode on the hosting clusters
-By leveraging the hosted deployment mode, it's possible to run OCM agent outside of the managed cluster on a hosing cluster. The hosting cluster could be a managed cluster of the same hub.
+By leveraging the hosted deployment mode, it's possible to run OCM agent outside of the managed cluster on a hosting cluster. The hosting cluster could be a managed cluster of the same hub.
-At most one agent can runs in the default mode on the managed cluster in this solution.
+At most one agent can run in the default mode on the managed cluster in this solution.