
Conversation

nastena1606

Added a subsection explaining restores with prefixes defined for a bucket
* set `spec.clusterName` key to the name of the target cluster to restore the backup on,
* set `spec.backupName` key to the name of your backup,
* set `spec.clusterName` key to the name of your cluster. When restoring to the same cluster where the backup was created, the cluster name will be identical in both the Backup and Restore objects.
* set `spec.backupName` key to the name of your backup (see the sketch below).
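A minimal sketch of such a `restore.yaml` manifest; the names `restore1`, `my-cluster-name`, and `backup1` are placeholders:

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  # target cluster to restore the backup on
  clusterName: my-cluster-name
  # name of the backup object to restore from
  backupName: backup1
```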

We need to convey the message that this value comes from the output of `psmdb-backup` or is the value used with the `PerconaServerMongoDBBackup` CR.
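For instance, the backup name can be taken from the output of the following command (a sketch; `psmdb-backup` is the short name of the `PerconaServerMongoDBBackup` resource, and the exact output columns may vary between versions):

``` {.bash data-prompt="$" }
$ kubectl get psmdb-backup
```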

```yaml
credentialsSecret: my-cluster-name-backup-s3
region: us-east-1
bucket: chetan-testing-percona
prefix: my-prefix
```
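In the cluster custom resource, these keys would typically sit under the storage definition, roughly as in this sketch (the storage name `s3-us-east` is an assumption):

```yaml
backup:
  enabled: true
  storages:
    s3-us-east:
      type: s3
      s3:
        bucket: chetan-testing-percona
        prefix: my-prefix
        region: us-east-1
        credentialsSecret: my-cluster-name-backup-s3
```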

@hors I remember we had a discussion about the tool automatically picking the prefix from the destination. I am not sure if we discussed it further. Did it change in the latest version by any chance?

* `latest` - recover to the latest possible transaction
* `date` key is used with `type=date` option and contains value in datetime format
* `date` - specify the target datetime when `type` is set to `date`

The resulting `restore.yaml` file may look as follows:
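A minimal sketch, with placeholder names (`restore1`, `my-cluster-name`, `backup1`) and an example timestamp:

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: my-cluster-name
  backupName: backup1
  pitr:
    # either `date` or `latest`
    type: date
    # target datetime in YYYY-MM-DD hh:mm:ss format
    date: "2024-07-12 15:04:05"
```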

The date value is of the format `YYYY-MM-DD hh:mm:ss`. If there are other formats supported, please share, @hors.

``` {.bash data-prompt="$" }
$ kubectl get psmdb
```
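Relatedly, the restore progress itself can be checked in a similar way (a sketch; `psmdb-restore` is the short name of the `PerconaServerMongoDBRestore` resource):

``` {.bash data-prompt="$" }
$ kubectl get psmdb-restore
```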


We need to mention that Pods will be deleted and recreated in the process of the restore. This might cause downtime:
Logical restore => unsharded cluster, no downtime
Logical restore => sharded cluster, downtime for the duration needed to restore the data and to refresh the sharding metadata on mongos
Physical restore => downtime for the duration when the data is being restored and the sharding metadata is refreshed on mongos

Maybe @hors @igroene can correct or add more context if needed


@nastena1606, this might be more relevant; please rephrase it if needed.

Note that during the restore, the Operator may delete and recreate Pods depending on the type of restore. This may cause downtime.
As per the PBM considerations, **while the restore is running, prevent clients from accessing the database**.

Assuming the mentioned considerations are strictly followed:

Logical restore in an unsharded cluster => causes downtime for the duration of the data restore. No Pods are deleted and recreated.
Logical restore in a sharded cluster => causes downtime for the duration of the data restore and the time needed to refresh the sharding metadata on mongos. Only the mongos Pods are deleted and recreated.
Physical restore => causes downtime for the duration required to restore the data and refresh the sharding metadata on mongos. All the replica set, config server (if present), and mongos Pods are deleted and recreated.

@cshiv left a comment

Minor changes are required

@cshiv left a comment

Kindly add the above note about restore downtime and Pod deletion/recreation, and we are all good.

