# Velero data-only restore design discussion
[TOC]
This document discusses how to implement [Velero’s data-only restore feature](https://github.com/vmware-tanzu/velero/issues/7345). A [pull request](https://github.com/vmware-tanzu/velero/pull/7481) has already been opened as the design document.
The discussion mainly focuses on how to implement the data-only restore. Two approaches are under consideration.
## Proposals
### In-place data-only restore
The first approach is called “in-place”: the data is restored into the existing volumes.
The pros:
* It’s possible to restore data without recreating the workload.
* It’s possible to support granular restore (restoring only filtered files) in the future.
* It’s possible to support restoration with no downtime.
The cons:
* There is no easy way to support snapshot-based backups, because most storage providers can’t read data directly from a snapshot.
* There is no easy way to support block-mode volumes.
* It’s hard to avoid data conflicts with the pod’s running processes during restoration.
### Replace data-only restore
The second approach is called “replace”: the data is restored into a brand-new volume.
The pros:
* It supports snapshot-based backups as well as filesystem-based backups.
* It’s easy to support both rolling back to the existing data and restoring the backed-up data, because the original volume is left intact.
The cons:
* Because data is restored into new volumes, mounting the new volumes into existing workloads is a tough task: either the user must handle it manually, or Velero must modify the workload to mount the volumes as new PVCs or new PVs.
* It cannot cover granular restore (restoring only filtered files).
## Consensus
The community already agrees that both the “in-place” and “replace” approaches are valuable and useful for different user scenarios. Ideally, Velero should support both of them.
Because of the team’s bandwidth, we can only implement one of the two options in the short term, so we need to prioritize between them and decide which should be implemented first.
## Continuing discussion
### In-place or replace
The first thing that needs to be settled for this feature is which approach should be implemented in its first phase: the “in-place” way or the “replace” way.
At first, the consensus was that the “in-place” data-only restore was more desirable, because:
* It can easily support k8s resources governed by external tools, such as GitOps.
* It supports granular restore (restoring only filtered files).
But the “replace” way was preferred later, on the grounds that it can support more scenarios.
As the [use case section](#Use-Cases-for-data-only-restore) shows, however, the `in-place` data-only restore can address more scenarios. Please continue to add scenarios to it; we will prioritize based on which approach solves more of them.
### Volume not mounted by pod scenario
Restoring data into volumes that are not mounted by any pod is a scenario that needs consideration for both the in-place and the replace data-only restore.
For the in-place way, Velero’s current filesystem-based volume restore needs to know the volume’s mount path on a node, and unmounted volumes don’t have one, so Velero needs to create a dummy pod to mount the volume first.
For the replace way, because most PVCs are created with the `WaitForFirstConsumer` binding mode, a PVC’s volume is not provisioned until a pod consumes it, so Velero also needs to consider using a dummy pod here, as sketched below.
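For illustration, here is a minimal client-go sketch of such a dummy pod. The name prefix, image, mount path, and the `demo`/`app1-data` names are assumptions made for this example, not Velero’s actual choices:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newDummyPod returns a minimal pod that mounts the given PVC, so that the
// volume gets provisioned/attached and a filesystem path appears on a node.
func newDummyPod(namespace, pvcName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "data-restore-helper-", // hypothetical name prefix
			Namespace:    namespace,
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.9", // any long-sleeping image works
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "target",
					MountPath: "/restore-target",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "target",
				VolumeSource: corev1.VolumeSource{
					PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
						ClaimName: pvcName,
					},
				},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Creating the pod forces a WaitForFirstConsumer PVC to be bound and the
	// volume to be mounted on the scheduled node.
	_, err = client.CoreV1().Pods("demo").Create(context.TODO(),
		newDummyPod("demo", "app1-data"), metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

Creating the pod is enough to bind a `WaitForFirstConsumer` PVC; the restore then runs against the mount path inside this pod, and the pod is deleted afterwards.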
### What is expected for in-place
This section describes what is expected for the in-place data-only restore if we choose it.
The idea of the in-place approach is to reuse the existing PodVolume mechanism, which uses Kopia to download data into the volumes. For volumes backed up by PodVolumeBackup, the PodVolumeRestore is generated from the PodVolumeBackup; for CSI-data-moved volumes, Velero generates the PodVolumeRestore from the DataUpload information.
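As a rough sketch of the PodVolumeBackup case, assuming the field names in Velero’s v1 API (`PodVolumeBackup`, `PodVolumeRestore`), the generation could look like the following; this illustrates the mapping and is not the actual implementation:

```go
package restore

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	velerov1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v1"
)

// buildPodVolumeRestore derives a PodVolumeRestore from a completed
// PodVolumeBackup, pointing it at the pod that currently mounts the target
// volume. The exact field mapping is a sketch, not the implementation.
func buildPodVolumeRestore(pvb *velerov1.PodVolumeBackup, targetPod corev1.ObjectReference) *velerov1.PodVolumeRestore {
	return &velerov1.PodVolumeRestore{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: pvb.Name + "-",
			Namespace:    pvb.Namespace,
		},
		Spec: velerov1.PodVolumeRestoreSpec{
			Pod:                   targetPod,       // pod whose volume receives the data
			Volume:                pvb.Spec.Volume, // same volume name as at backup time
			BackupStorageLocation: pvb.Spec.BackupStorageLocation,
			RepoIdentifier:        pvb.Spec.RepoIdentifier,
			UploaderType:          pvb.Spec.UploaderType, // e.g. "kopia"
			SnapshotID:            pvb.Status.SnapshotID, // snapshot the backup recorded
			SourceNamespace:       pvb.Spec.Pod.Namespace,
		},
	}
}
```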
At first, injecting an InitContainer into the pod that mounts the data-only restore volume was preferred. With an injected InitContainer, the pod restarts and the workload containers do not start until the PodVolumeRestore process finishes, which avoids conflicts between the PodVolumeRestore and the workload containers’ reads and writes.
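To make the idea concrete, here is a minimal sketch of such an injection. The sentinel-file convention is modeled loosely on Velero’s existing restore helper, and the container name, image, and paths are placeholders:

```go
package restore

import (
	corev1 "k8s.io/api/core/v1"
)

// injectRestoreWaitInitContainer prepends an init container that blocks the
// workload containers until the restore writes a sentinel file into the
// volume. The image, sentinel path, and container name are placeholders;
// Velero's existing restore-helper image works along similar lines.
func injectRestoreWaitInitContainer(pod *corev1.Pod, volumeName string) {
	wait := corev1.Container{
		Name:  "restore-wait", // hypothetical name
		Image: "busybox:1.36", // placeholder image
		Command: []string{"sh", "-c",
			"until [ -f /restores/" + volumeName + "/.velero/done ]; do sleep 1; done"},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      volumeName,
			MountPath: "/restores/" + volumeName,
		}},
	}
	// Init containers run to completion before any workload container starts,
	// so the pod's own processes cannot touch the volume mid-restore.
	pod.Spec.InitContainers = append([]corev1.Container{wait}, pod.Spec.InitContainers...)
}
```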
But this requires modifying the pod’s spec, which means the current pod must be deleted and recreated with the modification. If there are controllers governing the pod, for example a Deployment or StatefulSet, there would be a conflict: both the controller and the data-only restore operation would try to delete and recreate the pods.
Even if the conflict is resolved, the pod downtime during restoration is still unwanted.
As a result, the proposal is that **the user must prepare to avoid possible data conflicts before running the data-only restore**. This is the most feasible way to achieve the no-downtime goal without modifying the workload.
A possible future enhancement is introducing a pre-hook for the Velero Restore. Users could run commands through the pre-hook to prepare for the data restore, but the pre-hook would be a new mechanism. We need to consider whether there is a broader need for it, or whether it would only be useful for the in-place data-only restore.
### What is expected for replace
This section describes what is expected for the “replace” data-only restore if we choose it.
The “replace” restore utilizes the existing Generic Data Path to create an intermediate pod that writes the data into the target PV’s volume. After the data restore completes, the target PV is unbound from the intermediate pod and then mounted to the target pod.
For the CSI-data-moved volumes, the DataDownload is generated from the DataUpload created at backup time. For the other cases, the DataDownload is generated according to the volume’s backup method.
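A rough sketch of the CSI-data-moved case, assuming the field names in Velero’s v2alpha1 API (`DataUpload`, `DataDownload`); again, this illustrates the mapping rather than the implementation:

```go
package restore

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	velerov2alpha1 "github.com/vmware-tanzu/velero/pkg/apis/velero/v2alpha1"
)

// buildDataDownload derives a DataDownload from the DataUpload produced at
// backup time, targeting the newly created PVC. Field names follow Velero's
// v2alpha1 API; the mapping is a sketch under that assumption.
func buildDataDownload(du *velerov2alpha1.DataUpload, targetPVC, targetNamespace string) *velerov2alpha1.DataDownload {
	return &velerov2alpha1.DataDownload{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: du.Name + "-",
			Namespace:    du.Namespace,
		},
		Spec: velerov2alpha1.DataDownloadSpec{
			TargetVolume: velerov2alpha1.TargetVolumeSpec{
				PVC:       targetPVC, // the brand-new PVC that receives the data
				Namespace: targetNamespace,
			},
			BackupStorageLocation: du.Spec.BackupStorageLocation,
			DataMover:             du.Spec.DataMover,
			SnapshotID:            du.Status.SnapshotID, // recorded when the DataUpload completed
			SourceNamespace:       du.Spec.SourceNamespace,
		},
	}
}
```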
Because the pod’s spec, the PVC’s spec, and the PV’s spec are immutable, the target pod must be deleted and recreated to mount the new volume created by the data-only restore, and the workload must be modified in a way that avoids conflicting with the target pod’s controller. There are many CRDs that can create pods; to simplify the problem, the proposal focuses on the built-in k8s controllers, namely Deployment, StatefulSet, and DaemonSet. CRDs are not supported.
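For the Deployment case, one possible shape of that modification is to re-point the pod template at the restored PVC and let the controller itself recreate the pods. The function and parameter names below are hypothetical:

```go
package restore

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// repointDeploymentVolume updates the Deployment's pod template so that the
// named volume uses the newly restored PVC. Changing the template lets the
// Deployment controller itself recreate the pods, instead of the restore
// operation fighting the controller over pod deletion. Names are illustrative.
func repointDeploymentVolume(ctx context.Context, client kubernetes.Interface,
	namespace, name, volumeName, newPVC string) error {
	deploy, err := client.AppsV1().Deployments(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for i := range deploy.Spec.Template.Spec.Volumes {
		v := &deploy.Spec.Template.Spec.Volumes[i]
		if v.Name == volumeName && v.PersistentVolumeClaim != nil {
			v.PersistentVolumeClaim.ClaimName = newPVC // swap in the restored PVC
		}
	}
	_, err = client.AppsV1().Deployments(namespace).Update(ctx, deploy, metav1.UpdateOptions{})
	return err
}
```

Note that a rolling update may deadlock on an RWO volume while the old pod still holds it, so scaling the Deployment down first (or using the `Recreate` strategy) would likely be necessary.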
## Use Cases for data-only restore
### Scenario 1
I have a cluster with Stateful App1 deployed. This App has CRDs (Stateless) and PVC (Stateful). I lost the cluster (hardware failure). I have provisioned a new cluster and deployed App1 again, and I want to restore the data from the cluster where App1 was previously deployed.
* In this scenario, the application is freshly deployed on the second cluster and the user wants to restore only the data, so **in-place data-only restore** is suited to this scenario.
### Scenario 2
I have a cluster with Stateful App1 deployed. This App has CRDs (Stateless) and PVC (Stateful). I want to restore the data to the previous day’s state.
* In this scenario, the application is working and the user wants to restore only the data for business reasons, so **replace data-only restore** is suited to this scenario: if the restore job fails, it does not corrupt the latest data. As a workaround, the user may take another backup just before restoring if the replace data-only restore option is not available.
### Scenario 3
I have a cluster with Stateful App1 deployed. This App has CRDs (Stateless) and PVC (Stateful). I want to restore the data to the previous day’s state on a new PVC in a different namespace.
* In this scenario, the application is working and the user wants to restore the data to validate software upgrades/patches, so **in-place data-only restore** is suited to this scenario.
### Scenario 4
I have a cluster with Stateful App1 deployed. This App has CRDs (Stateless) and PVC (Stateful). I want to restore specific data to the previous day’s state.
* In this scenario, the application is working and the user wants to restore specific data within a directory of the PVC for business reasons, so **in-place data-only restore** is suited to this scenario, as only it supports granular restore. To guard against a failed restore corrupting the latest data, the user may take another backup just before restoring.
### Scenario 5
Data migration: users may want to migrate data across storage providers, e.g. copy the data from an existing on-prem PVC into a cloud-storage-based PVC.
* Velero should provide options for both approaches here. Based on the user scenarios, both the in-place and the replace data-only restore options should be available in Velero.
### Scenario 6
I have a cluster with Stateful App1 deployed. This App has CRDs (Stateless) and PVC (Stateful). The PVC is created as a block-mode device. I lost the cluster (hardware failure). I have provisioned a new cluster and deployed App1 again, and I want to restore the data from the cluster where App1 was previously deployed.
* In this scenario, the application is freshly deployed on the second cluster and the user wants to restore only the data, but because the PVC is in block mode, **replace data-only restore** is suited to this scenario.