diff --git a/docs/assets/images/gcs-service-account.svg b/docs/assets/images/gcs-service-account.svg
new file mode 100644
index 00000000..d5b2bb7d
--- /dev/null
+++ b/docs/assets/images/gcs-service-account.svg
@@ -0,0 +1,823 @@
+[823 lines of SVG markup omitted: screenshot of the Google Cloud console "Keys" tab for my-service-account, showing the ADD KEY button and the DETAILS, PERMISSIONS, KEYS, METRICS, and LOGS tabs]
diff --git a/docs/backups-storage.md b/docs/backups-storage.md
index e956d55d..27d4c9ce 100644
--- a/docs/backups-storage.md
+++ b/docs/backups-storage.md
@@ -155,6 +155,77 @@ You can use either make and use the *IAM instance profile*, or configure *IAM ro
 
 If IRSA-related credentials are defined, they have the priority over any IAM instance profile. S3 credentials in a secret, if present, override any IRSA/IAM instance profile related credentials and are used for authentication instead.
 
+## Google Cloud storage
+
+To use [Google Cloud Storage (GCS) :octicons-link-external-16:](https://cloud.google.com/storage) as an object store for backups, you need the following information:
+
+* a GCS bucket name. Refer to the [GCS bucket naming guidelines :octicons-link-external-16:](https://cloud.google.com/storage/docs/buckets#naming) for bucket name requirements.
+* authentication keys for your service account in JSON format.
+
+!!! note
+
+    You can still use the S3-compatible implementation of GCS with HMAC keys. Refer to the [Amazon S3 storage setup](#amazon-s3-or-s3-compatible-storage) section for the steps.
+
+**Configuration steps**
+{.power-number}
+
+1. [Create a service account :octicons-link-external-16:](https://cloud.google.com/iam/docs/service-accounts-create#iam-service-accounts-create-console), if you don't have one already.
+
+2. Add [JSON service keys for the service account :octicons-link-external-16:](https://cloud.google.com/iam/docs/creating-managing-service-account-keys). As a result, a service account key file in JSON format with the private key and related information is automatically downloaded to your machine.
+
+3. Encode the credentials in base64 format. You need to encode both the service account email and the private key. You can find these values in the service account key file you downloaded in the previous step.
+
+    The following command shows how to encode the private key; encode the service account email the same way. Replace the placeholder with your private key:
+
+    ```bash
+    echo -n "-----BEGIN PRIVATE KEY-----\nPRIVATE_KEY\n-----END PRIVATE KEY-----\n" | base64
+    ```
+
+4. Create the Kubernetes Secret configuration file and specify the encoded GCS credentials within:
+
+    ```yaml title="gcp-cs-secret.yaml"
+    apiVersion: v1
+    kind: Secret
+    metadata:
+      name: gcp-cs-secret-key
+    type: Opaque
+    data:
+      GCS_CLIENT_EMAIL: base_64_encoded_email
+      GCS_PRIVATE_KEY: base_64_encoded_key
+    ```
+
+5. Create the Kubernetes Secrets object. Replace the `<namespace>` placeholder with your value:
+
+    ``` {.bash data-prompt="$" }
+    $ kubectl apply -f gcp-cs-secret.yaml -n <namespace>
+    ```
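+
+    Optionally, verify that the Secret stores the values you expect before referencing it in the Custom Resource. The following check is a sketch; it assumes the `gcp-cs-secret-key` name and the `<namespace>` placeholder used above:
+
+    ``` {.bash data-prompt="$" }
+    $ kubectl get secret gcp-cs-secret-key -n <namespace> \
+        -o jsonpath='{.data.GCS_CLIENT_EMAIL}' | base64 --decode
+    ```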
+
+6. Configure the GCS storage in the `deploy/cr.yaml` Custom Resource. Specify the following information:
+
+    * Set `storages.<NAME>.type` to `gcs` (substitute the `<NAME>` part
+      with some arbitrary name you will later use to refer to this storage when
+      making backups and restores).
+
+    * Specify the bucket name for the `storages.<NAME>.gcs.bucket` option.
+
+    * Specify the Secrets object name you created for the `storages.<NAME>.gcs.credentialsSecret` option.
+
+    ```yaml
+    backup:
+      storages:
+        gcp-cs:
+          type: gcs
+          gcs:
+            bucket: <GCS-BACKUP-BUCKET-NAME-HERE>
+            credentialsSecret: gcp-cs-secret-key
+    ```
+
+7. Apply the configuration:
+
+    ``` {.bash data-prompt="$" }
+    $ kubectl apply -f deploy/cr.yaml -n <namespace>
+    ```
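+
+After applying the configuration, you can optionally confirm from your workstation that the service account can reach the bucket. This is a sketch outside the Operator workflow; it assumes the gcloud CLI is installed and reuses the JSON key file from step 2 and the example bucket name:
+
+``` {.bash data-prompt="$" }
+$ gcloud auth activate-service-account --key-file=<path-to-service-account-key>.json
+$ gcloud storage ls gs://<GCS-BACKUP-BUCKET-NAME-HERE>
+```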
+
 ## Microsoft Azure Blob storage
 
 1. To store backups on the Azure Blob storage, you need to create a
@@ -206,7 +277,7 @@ You can use either make and use the *IAM instance profile*, or configure *IAM ro
 2. Put the data needed to access the Azure Blob storage into the
    `backup.storages` subsection of the Custom Resource.
 
-    * `storages.<NAME>.type should be set to `azure` (substitute the `<NAME>` part
+    * `storages.<NAME>.type` should be set to `azure` (substitute the `<NAME>` part
       with some arbitrary name you will later use to refer this storage when
      making backups and restores).
 
diff --git a/docs/operator.md b/docs/operator.md
index 739bc681..69a034e6 100644
--- a/docs/operator.md
+++ b/docs/operator.md
@@ -480,7 +480,7 @@ Optional custom tags which can be added to the replset members to make their ide
 
 ### `replsets.externalNodes.host`
 
-The URL or IP address of the [external replset instance](replication-main.md).
+The URL or IP address of the [external replica set instance](replication-main.md).
 
 | Value type | Example |
 | ----------- | ---------- |
@@ -2527,7 +2527,7 @@ Marks the storage as main. All other storages you define are added as profiles.
 
 ### `backup.storages.STORAGE-NAME.type`
 
-The cloud storage type used for backups. Only `s3`, `azure`, and `filesystem` types are supported.
+The cloud storage type used for backups. Only `s3`, `gcs`, `minio`, `azure`, and `filesystem` types are supported.
 
 | Value type | Example |
 | ----------- | ---------- |
@@ -2623,7 +2623,7 @@ The [AWS region :octicons-link-external-16:](https://docs.aws.amazon.com/genera
 
 ### `backup.storages.STORAGE-NAME.s3.endpointUrl`
 
-The URL of the S3-compatible storage to be used (not needed for the original Amazon S3 cloud).
+The URL of the S3-compatible storage to be used. It is required for MinIO storage and not needed for the original Amazon S3 cloud.
 
 | Value type | Example |
 | ----------- | ---------- |
@@ -2661,6 +2661,63 @@ The locally-stored base64-encoded custom encryption key used by the Operator for
 | ----------- | ---------- |
 | :material-code-string: string | `""` |
 
+### `backup.storages.STORAGE-NAME.gcs.bucket`
+
+The name of the storage bucket. See the [GCS bucket naming guidelines :octicons-link-external-16:](https://cloud.google.com/storage/docs/naming-buckets#requirements) for bucket name requirements.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-string: string | `""` |
+
+### `backup.storages.STORAGE-NAME.gcs.prefix`
+
+The path to the data directory in the bucket. If undefined, backups are stored in the bucket's root directory.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-string: string | `""` |
+
+### `backup.storages.STORAGE-NAME.gcs.credentialsSecret`
+
+The [Kubernetes secret :octicons-link-external-16:](https://kubernetes.io/docs/concepts/configuration/secret/) for backups. It contains the GCS credentials: either the service account email and JSON private key, or HMAC keys.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-string: string | `"my-cluster-name-backup-gcs"` |
+
+### `backup.storages.STORAGE-NAME.gcs.chunkSize`
+
+The size of data chunks, in bytes, uploaded to the GCS bucket in a single request. Larger data is split over multiple requests. The default chunk size is 10 MB.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-int: int | `10485760` |
+
+### `backup.storages.STORAGE-NAME.gcs.retryer.backoffInitial`
+
+The time to wait before the initial retry, in seconds. The default value is 1 second.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-int: int | `1` |
+
+### `backup.storages.STORAGE-NAME.gcs.retryer.backoffMax`
+
+The maximum amount of time between retries, in seconds. The default value is 30 seconds.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-int: int | `30` |
+
+### `backup.storages.STORAGE-NAME.gcs.retryer.backoffMultiplier`
+
+The multiplier by which the wait time increases after each retry. For example, with the default multiplier of 2 and an initial wait time of 1 second, the next wait is 2 seconds, then 4 seconds, and so on, until it reaches the maximum.
+
+| Value type | Example |
+| ----------- | ---------- |
+| :material-code-int: int | `2` |
+
 ### `backup.storages.STORAGE-NAME.azure.credentialsSecret`
 
 The [Kubernetes secret :octicons-link-external-16:](https://kubernetes.io/docs/concepts/configuration/secret/) for backups. It should contain `AZURE_STORAGE_ACCOUNT_NAME` and `AZURE_STORAGE_ACCOUNT_KEY`
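+
+For illustration, such a Secret can be created from literal values (a sketch; the `my-cluster-azure-secret` name and the placeholder values are hypothetical):
+
+``` {.bash data-prompt="$" }
+$ kubectl create secret generic my-cluster-azure-secret \
+    --from-literal=AZURE_STORAGE_ACCOUNT_NAME=<storage-account-name> \
+    --from-literal=AZURE_STORAGE_ACCOUNT_KEY=<storage-account-key>
+```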