# Backup Repository Cache Volume Design

## Glossary & Abbreviation

**Backup Storage**: The storage where the backup data is stored. Check the [Unified Repository design][1] for details.
**Backup Repository**: The backup repository is layered between BR data movers and Backup Storage to provide BR-related features, as introduced in the [Unified Repository design][1].
**Velero Generic Data Path (VGDP)**: VGDP is the collective of modules introduced in the [Unified Repository design][1]. Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository.
**Data Mover Pods**: Intermediate pods which hold VGDP and complete the data transfer. See [VGDP Micro Service for Volume Snapshot Data Movement][2] and [VGDP Micro Service For fs-backup][3] for details.
**Repository Maintenance Pods**: Pods for [Repository Maintenance Jobs][4], which hold VGDP to run repository maintenance.

## Background

According to the [Unified Repository design][1], Velero uses selectable backup repositories for various backup/restore methods, i.e., fs-backup, volume snapshot data movement, etc. Some backup repositories may need to cache data on the client side for various repository operations, so as to accelerate the execution.
In the existing [Backup Repository Configuration][5], we allow users to configure the cache data size (`cacheLimitMB`). However, the cache data is still stored in the root file system of the data mover pods/repository maintenance pods, and so in the root file system of the node. This is not good enough, for the following reasons:
- In many distributions, the node's system disk size is predefined, non-configurable and limited, e.g., the system disk size may be 20GB or less
- Velero supports concurrent data movements in each node. The cache in each of the concurrent data mover pods could quickly run out of the system disk and cause problems like pod eviction, failure of pod creation, degradation of Kubernetes QoS, etc.

We need to allow users to prepare a dedicated location, e.g., a dedicated volume, for the cache.
Not all backup repositories or backup repository operations require a cache, so we need to define the details of when and how the cache is used.

## Goals

- Create a mechanism for users to configure cache volumes for various pods running VGDP
- Design the workflow to assign the cache volume pod path to backup repositories
- Describe when and how the cache volume is used

## Non-Goals

- The solution is based on the [Unified Repository design][1], [VGDP Micro Service for Volume Snapshot Data Movement][2] and [VGDP Micro Service For fs-backup][3]; legacy data paths are not supported. E.g., when a pod volume restore (PVR) runs with the legacy Restic path, if any data is cached, the cache still resides in the root file system.

## Solution

### Cache Data

Depending on the backup repository, cache data may include payload data or repository metadata, e.g., indexes to the payload data chunks.

Payload data is highly related to the backup data, and normally takes the majority of the repository data as well as the cache data.

Repository metadata is related to the backup repository's chunking algorithm, data chunk mapping method, etc., so its size is not proportional to the backup data size.
On the other hand, for some backup repositories, in extreme cases, the repository metadata may be significantly large. E.g., Kopia's indexes are per chunk; if there is a huge number of small files in the repository, Kopia's index data may be on the same level as, or even larger than, the payload data.
However, in the cases where repository metadata becomes the majority, other bottlenecks may emerge and the concurrency of data movers may be significantly constrained, so the requirement for cache volumes may go away.

Therefore, for now we only consider the cache volume requirement for payload data, and leave the consideration for metadata as a future enhancement.

### Scenarios

The backup repository cache varies with the backup repository and the backup repository operations during VGDP runs. Below are the scenarios when VGDP runs:
- Data Upload for Backup: this is the process to upload/write the backup data into the backup repository, e.g., DataUpload or PodVolumeBackup. The pieces of data are written almost directly to the repository, with a small group sometimes staying briefly in the local place. That is to say, there should not be large-scale data cached for this scenario, so we don't prepare a dedicated cache for this scenario.
- Repository Maintenance: Repository maintenance most often visits the backup repository's metadata and sometimes it needs to visit the file system directories from the backed up data. On the other hand, it is not practical to run concurrent maintenance jobs in one node. So the cache data is neither large nor affects the root file system too much. Therefore, we don't need to prepare a dedicated cache for this scenario.
- Data Download for Restore: this is the process to download/read the backup data from the backup repository during restore, e.g., DataDownload or PodVolumeRestore. For backup repositories whose data is stored in remote backup storages (e.g., the Kopia repository stores data in remote object stores), a large amount of data is cached locally to accelerate the restore. Therefore, we need dedicated cache volumes for this scenario.
- Backup Deletion: During this scenario, the backup repository is connected and metadata is enumerated to find the repository snapshot representing the backup data. That is to say, only metadata is cached, if any. Therefore, dedicated cache volumes are not required in this scenario.

The above analyses are based on the common behavior of backup repositories and do not consider the case where backup repository metadata takes a major or significant proportion of the cache data.
As a conclusion of the analyses, we will create dedicated cache volumes for the restore scenario.
For other scenarios, we can add cache volumes according to future changes/requirements. The mechanism to expose and connect the cache volumes should work for all scenarios. E.g., if we need to consider the backup repository metadata case, we may need cache volumes for backup and repository maintenance as well; then we can just reuse the same cache volume provision and connection mechanism for the backup and repository maintenance scenarios.

### Cache Data and Lifecycle

If available, one cache volume is exclusively assigned to one data mover pod. That is, the cached data is destroyed when the data mover pod completes, at which point the backup repository instance also closes.
Cache data is fully managed by the specific backup repository, so the backup repository may also have its own way to GC the cache data.
That is to say, cache data GC may be launched by the backup repository instance while the data mover pod is running; the remaining data is then automatically destroyed when the data mover pod and the cache PVC are destroyed. So no special logic is needed for cache data GC.

### Data Size

Cache volumes take storage space and cluster resources (PVC, PV); therefore, cache volumes should be created only when necessary and the volumes should have a reasonable size based on the cache data size:
- It is not a good bargain to have cache volumes for small backups; small backups will use the resident cache location (the cache location in the root file system)
- The cache data size has a limit, and the existing `cacheLimitMB` is used for this purpose. E.g., it could be set to 1024 for a 1TB backup, which means 1GB of data is cached and old cache data exceeding this size will be cleared. Therefore, it is meaningless to set the cache volume size much larger than `cacheLimitMB`

### Cache Volume Size

The cache volume size is calculated from the below factors (for the restore scenario):
- **Limit**: The limit of the cache data, represented by `cacheLimitMB`; the default value is 5GB
- **BackupSize**: The size of the backup, used as a reference to evaluate whether to create a cache volume. It doesn't mean the backup data really decides the cache data; it is just a reference to evaluate the scale of the backup, and small-scale backups may need only a small cache
- **ResidentThreshold**: The minimum backup size for which a cache volume is created
- **InflationPercentage**: Considering the overhead of the file system and the possible delay of the cache cleanup, there should be an inflation of the final volume size vs. the logical size; otherwise, the cache volume may be overrun. This inflation percentage is hardcoded, e.g., 20%

A formula is as below:
```
cacheVolumeSize = (backupSize > residentThreshold ? limit : 0) * (1 + inflationPercentage)
```
Finally, the `cacheVolumeSize` will be rounded up to whole GiB considering UX friendliness, storage friendliness and management friendliness.
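
For illustration, below is a minimal Go sketch of this calculation. The helper name `calculateCacheVolumeSize`, the hardcoded 20% inflation constant and the GiB round-up are assumptions for the sketch, not the actual implementation.

```go
package cache

const gib = int64(1 << 30)

// inflationPercentage is the hardcoded inflation applied on top of the cache limit
// to absorb file system overhead and delayed cache cleanup (assumed to be 20%).
const inflationPercentage = 0.2

// calculateCacheVolumeSize returns the cache volume size in bytes, or 0 when no
// dedicated cache volume should be created because the backup is too small.
func calculateCacheVolumeSize(backupSize, residentThreshold, limit int64) int64 {
	if backupSize <= residentThreshold {
		// Small backups keep using the resident cache location in the root file system.
		return 0
	}

	inflated := int64(float64(limit) * (1 + inflationPercentage))

	// Round up to whole GiB for UX, storage and management friendliness.
	return ((inflated + gib - 1) / gib) * gib
}
```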

### PVC/PV

The PVC for a cache volume is created in the Velero namespace and a storage class is required for the cache PVC. The PVC's accessMode is `ReadWriteOnce` and volumeMode is `Filesystem`, so the storage class provided should support this specification. Otherwise, if the storage class doesn't support either of the specifications, the data mover pod may hang in the `Pending` state until a timeout defined for the data movement (e.g., `prepareTimeout`) expires, and the data movement will finally fail.
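
As a sketch of what such a cache PVC could look like when built with client-go types: the function name `newCachePVC` is hypothetical, and the `Resources` field type assumes a recent `k8s.io/api` where it is `VolumeResourceRequirements`.

```go
package cache

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newCachePVC builds a cache PVC in the Velero namespace with the access mode and
// volume mode required by this design; the caller submits it and deletes it together
// with the data mover pod.
func newCachePVC(name, veleroNamespace, storageClass string, sizeBytes int64) *corev1.PersistentVolumeClaim {
	fsMode := corev1.PersistentVolumeFilesystem

	return &corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{
			Name:      name,
			Namespace: veleroNamespace,
		},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			VolumeMode:       &fsMode,
			StorageClassName: &storageClass,
			Resources: corev1.VolumeResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceStorage: *resource.NewQuantity(sizeBytes, resource.BinarySI),
				},
			},
		},
	}
}
```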

### Cache Volume Configurations

The below configurations are introduced:
- **residentThresholdMB**: the minimum data size (in MB) to be processed (if available) for which a cache volume is created
- **cacheStorageClass**: the name of the storage class used to provision the cache PVC

Unlike `cacheLimitMB`, which is set on and affects the backup repository, the above two configurations are actually data mover configurations describing how to create cache volumes for data mover pods; and the two configurations don't need to be per backup repository. So we add them to the node-agent configuration.
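
As a hedged sketch of how the node-agent configuration could model this new section, the Go types below are illustrative; only the JSON keys come from the samples that follow.

```go
package nodeagent

// CacheVolumeConfigs models the "cacheVolume" section of the node-agent configMap.
type CacheVolumeConfigs struct {
	// StorageClass is the storage class used to provision cache PVCs.
	StorageClass string `json:"storageClass,omitempty"`

	// ResidentThresholdMB is the minimum data size (in MB) for which a cache volume is created.
	ResidentThresholdMB int64 `json:"residentThresholdMB,omitempty"`
}

// Configs stands in for the top-level node-agent configuration; only the new section is shown.
type Configs struct {
	CacheVolume *CacheVolumeConfigs `json:"cacheVolume,omitempty"`
}
```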

### Sample

Below are some examples of the node-agent configMap with the configurations:

Sample-1:
```json
{
    "cacheVolume": {
        "storageClass": "sc-1",
        "residentThresholdMB": 1024
    }
}
```

Sample-2:
```json
{
    "cacheVolume": {
        "storageClass": "sc-1"
    }
}
```

Sample-3:
```json
{
    "cacheVolume": {
        "residentThresholdMB": 1024
    }
}
```

**Sample-1**: This is a valid configuration. Restores with a backup data size larger than 1GB will be assigned a cache volume using storage class `sc-1`.
**Sample-2**: This is a valid configuration. Data mover pods are always assigned a cache volume using storage class `sc-1`.
**Sample-3**: This is not a valid configuration because the storage class is absent. Velero gives up creating a cache volume.

To create the configMap, users need to save something like the above sample to a json file and then run the below command:
```
kubectl create cm <ConfigMap name> -n velero --from-file=<json file name>
```

The cache volume configurations will be read by the node-agent server, so users also need to specify the `--node-agent-configmap` parameter to `velero node-agent`.

## Detailed Design

### Backup and Restore

The restore needs to know the backup size so as to calculate the cache volume size, so some new fields are added to the DataUpload, DataDownload, PodVolumeBackup and PodVolumeRestore CRDs.

`snapshotSize` field is added to DataUpload and PodVolumeBackup's `status`:
```yaml
  status:
    snapshotID:
      description: SnapshotID is the identifier for the snapshot in the
        backup repository.
      type: string
    snapshotSize:
      description: SnapshotSize is the logical size of the snapshot.
      format: int64
      type: integer
```

`snapshotSize` field is also added to DataDownload and PodVolumeRestore's `spec`:
```yaml
  spec:
    snapshotID:
      description: SnapshotID is the ID of the Velero backup snapshot to
        be restored from.
      type: string
    snapshotSize:
      description: SnapshotSize is the logical size of the snapshot.
      format: int64
      type: integer
```

`snapshotSize` represents the total size of the backup; during restore, the value is transferred from DataUpload/PodVolumeBackup to DataDownload/PodVolumeRestore.
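
The sketch below illustrates that transfer with simplified stand-in structs for the CRDs; the real DataUpload/DataDownload types carry many more fields, and the helper name is hypothetical.

```go
package restore

// Simplified stand-ins for the relevant parts of the CRDs.
type DataUploadStatus struct {
	SnapshotID   string
	SnapshotSize int64
}

type DataDownloadSpec struct {
	SnapshotID   string
	SnapshotSize int64
}

// newDataDownloadSpec copies the snapshot identity and size from the backup-side
// status into the restore-side spec so the exposer can size the cache volume.
func newDataDownloadSpec(duStatus DataUploadStatus) DataDownloadSpec {
	return DataDownloadSpec{
		SnapshotID:   duStatus.SnapshotID,
		SnapshotSize: duStatus.SnapshotSize,
	}
}
```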

### Exposer

Cache volume configurations are retrieved by node-agent and passed through DataDownload/PodVolumeRestore to the GenericRestore exposer/PodVolume exposer.
The exposers are responsible for calculating the cache volume size, creating the cache PVCs and mounting them to the restorePods.
If the calculated cache volume size is 0, or any of the critical parameters is missing (e.g., the cache volume storage class), the exposers ignore the cache volume configuration and continue with creating restorePods without cache volumes, so there is no impact on the result of the restore.

Exposers mount the cache volume to a predefined directory and pass the directory to the data mover pods through the `cache-volume-path` parameter.

The below data structures are added to the exposers' expose parameters:

```go
type GenericRestoreExposeParam struct {
	// RestoreSize specifies the data size for the volume to be restored
	RestoreSize int64

	// CacheVolume specifies the info for cache volumes
	CacheVolume *repocache.CacheConfigs
}

type PodVolumeExposeParam struct {
	// RestoreSize specifies the data size for the volume to be restored
	RestoreSize int64

	// CacheVolume specifies the info for cache volumes
	CacheVolume *repocache.CacheConfigs
}

type CacheConfigs struct {
	// StorageClass specifies the storage class for cache volumes
	StorageClass string

	// Limit specifies the maximum size of the cache data
	Limit int64

	// ResidentThreshold specifies the minimum size of the data to be processed for which a cache volume is created
	ResidentThreshold int64
}
```

### Data Mover Pods

Data mover pods retrieve the cache volume directory from the `cache-volume-path` parameter and pass it to the Unified Repository.
If the directory is empty, the Unified Repository uses the resident location for the data cache, that is, the root file system.
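
Below is a hedged, standalone illustration of how a data mover pod could consume this parameter; the real data movers wire their CLI differently, and the standard-library flag handling here is only for demonstration.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// The exposer passes the mount point of the cache volume via this parameter;
	// an empty value means no dedicated cache volume was provisioned.
	cacheVolumePath := flag.String("cache-volume-path", "", "directory of the mounted cache volume")
	flag.Parse()

	if *cacheVolumePath == "" {
		fmt.Println("no cache volume; the repository falls back to the resident cache location")
		return
	}

	// In the real data mover, this path is handed to the Unified Repository as its cache directory.
	fmt.Println("using cache directory:", *cacheVolumePath)
}
```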

### Kopia Repository

Kopia repository supports cache directory configuration for both metadata and data. The existing `SetupConnectOptions` is modified to customize the `CacheDirectory`:

```go
func SetupConnectOptions(ctx context.Context, repoOptions udmrepo.RepoOptions) repo.ConnectOptions {
	...
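
	// cacheDir is assumed to be resolved from repoOptions (e.g., the cache volume
	// path passed by the data mover pod); an empty value keeps Kopia's resident
	// cache location in the root file system.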
	return repo.ConnectOptions{
		CachingOptions: content.CachingOptions{
			CacheDirectory: cacheDir,
			...
		},
		...
	}
}
```

[1]: Implemented/unified-repo-and-kopia-integration/unified-repo-and-kopia-integration.md
[2]: Implemented/vgdp-micro-service/vgdp-micro-service.md
[3]: Implemented/vgdp-micro-service-for-fs-backup/vgdp-micro-service-for-fs-backup.md
[4]: Implemented/repo_maintenance_job_config.md
[5]: Implemented/backup-repo-config.md