K8SPSMDB-1386 Improved restore doc #246
Conversation
Added a subsection explaining restores with prefixes defined for a bucket
Commits f6c1f49 to 5dadc93
docs/backups-restore.md
Outdated
* set `spec.clusterName` key to the name of the target cluster to restore the backup on,
* set `spec.backupName` key to the name of your backup,
* set `spec.clusterName` key to the name of your cluster. When restoring to the same cluster where the backup was created, the cluster name will be identical in both the Backup and Restore objects.
* set `spec.backupName` key to the name of your backup.
We need to convey the message that this value comes from the output of `psmdb-backup`, or is the value used with the `PerconaServerMongoDBBackup` CR.
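For the doc, a minimal sketch of how the two values tie together might help, assuming the usual manifest layout (the names `restore1`, `backup1`, and `my-cluster-name` are placeholders I made up, not taken from this PR):

``` {.bash data-prompt="$" }
$ kubectl get psmdb-backup
```

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  # name of the target cluster; matches the Backup object when restoring in place
  clusterName: my-cluster-name
  # backup name as listed by `kubectl get psmdb-backup`
  # (or the name used in the PerconaServerMongoDBBackup CR)
  backupName: backup1
```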
credentialsSecret: my-cluster-name-backup-s3
region: us-east-1
bucket: chetan-testing-percona
prefix: my-prefix
@hors I remember we had a discussion about the tool automatically picking the prefix from the destination. I am not sure if we discussed it further. Did it change in the latest version by any chance?
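In case it helps the subsection, a sketch of a restore pointing at a bucket with a prefix, assuming the prefix still has to be spelled out explicitly rather than being picked up automatically (the destination path and the object names are placeholders):

```yaml
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDBRestore
metadata:
  name: restore1
spec:
  clusterName: my-cluster-name
  backupSource:
    # hypothetical full path to the backup inside the prefixed bucket
    destination: s3://chetan-testing-percona/my-prefix/<backup-folder>
    s3:
      credentialsSecret: my-cluster-name-backup-s3
      region: us-east-1
      bucket: chetan-testing-percona
      prefix: my-prefix
```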
docs/backups-restore.md
Outdated
* `latest` - recover to the latest possible transaction
* `date` key is used with `type=date` option and contains value in datetime format
The resulting `restore.yaml` file may look as follows:
* `date` - specify the target datetime when `type` is set to `date`
in the format `YYYY-MM-DD hh:mm:ss`
If there are other formats supported, please share, @hors
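A hedged sketch of the PITR part of the restore manifest, assuming the `YYYY-MM-DD hh:mm:ss` format above (the timestamp is a made-up placeholder):

```yaml
spec:
  clusterName: my-cluster-name
  backupName: backup1
  pitr:
    # either `date` or `latest`
    type: date
    # target time in YYYY-MM-DD hh:mm:ss format; placeholder value
    date: "2024-01-02 15:04:05"
```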
``` {.bash data-prompt="$" }
$ kubectl get psmdb
```
We need to mention that Pods will be deleted and recreated in the process of the restore. This might cause downtime:
Logical restore => unsharded cluster, no downtime
Logical restore => sharded cluster, downtime for the duration of the data restore and of refreshing the sharding metadata on mongos
Physical restore => downtime for the duration of the data restore and of refreshing the sharding metadata on mongos
Maybe @hors @igroene can correct or add more context if needed
@nastena1606, this might be more relevant, please rephrase it if needed.
Note that during the restore, the Operator may delete and recreate Pods depending on the type of restore. This may cause downtime.
As per the considerations of PBM, **while the restore is running, prevent clients from accessing the database.**
Assuming the mentioned considerations are strictly followed:
Logical restore in an unsharded cluster => causes downtime for the duration of the data restore. No Pods are deleted and recreated.
Logical restore in a sharded cluster => causes downtime for the duration of the data restore and the time needed to refresh the sharding metadata on mongos. Only mongos Pods are deleted and recreated.
Physical restore => causes downtime for the duration required to restore the data and refresh the sharding metadata on mongos. All replica set, config server (if present), and mongos Pods will be deleted and recreated.
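If it is useful for the doc, a small sketch for checking when the restore has finished and the Pods are back, assuming the `psmdb-restore` short name and the `.status.state` field behave as in current operator versions (the restore name `restore1` is a placeholder):

``` {.bash data-prompt="$" }
$ kubectl get psmdb-restore restore1 -o jsonpath='{.status.state}'
$ kubectl get pods --watch
$ kubectl get psmdb
```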
Minor changes are required
Kindly add the note above for the restore and we are all good.