# v1beta1 CRD Deprecation sync conclusions
Definitions:
* **Control Cluster**: the cluster that runs the controller and serves the mig CRD API.
* **Remote Cluster**: a cluster without a controller that can participate in
migrations, typically as a source cluster. In practice this is just Velero running on the cluster.
Decisions made yesterday:
### We do NOT want to support two streams of MTC
We feel that supporting two "streams" of MTC (a "legacy" and a "current") would
significantly increase the complexity and maintenance burden, to the point
that we should endeavor to avoid it if at all possible.
Specific issues: **<fill in>**
* If we supported two active streams, we would have to build and maintain two sets of
images, which is a significant release-engineering burden.
If we MUST support 3.x control clusters, that would necessitate the dual-stream
solution. For that reason, we'd like to drop support for 3.x control clusters.
We believe the number of customers this may affect is minimal, and for those
impacted we can provide explicit network requirements so that they can use MTC
deployed on a target 4.x cluster.
> NOTE: We also believe there may be some kind of "hole punch" solution with a
> reverse egress proxy such that a source could TELL a target cluster exactly
> where to find it.
### We should move to automated remote cluster setup
Today MTC is installed on 3.x manually: the user exports a Kubernetes manifest off of the
operator image and runs `oc create -f` against it, which installs the operator and
a v1beta1 CRD (MigrationController) that the cluster-local operator reconciles to
configure and manage the installation.
We'd like to move away from this model to an alternative where the user no longer
manually installs MTC on their remote clusters. Instead, when a user creates
a MigCluster object on the control cluster, our controller (or another entity)
should have enough information and permissions to set up that remote cluster
dynamically. It would inspect the remote cluster, determine whether a legacy
or a current version of Velero is needed there, and install accordingly
(a sketch of what this could look like follows the list of benefits below).
This buys us a few benefits:
* Manual updates no longer exist; *all* clusters can be updated automatically.
* It encapsulates and centralizes the logic around remote cluster setup and
codifies the steps, giving better control and removing error-prone manual setup.
* It eliminates the non-trivial support burden of the non-OLM installation method.
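
As a rough illustration of that flow, here is a minimal Go sketch, assuming a hypothetical
`installRemoteCluster` helper called from the MigCluster reconcile; the stream names, the
discovery-based check, and the install stub are illustrative assumptions, not the actual
MTC controller code.

```go
package remotesetup

import (
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// veleroStream identifies which Velero flavor to install on a remote cluster.
type veleroStream string

const (
	legacyVelero  veleroStream = "velero-1.6"  // v1beta1 CRDs, older clusters
	currentVelero veleroStream = "velero-1.7+" // v1 CRDs, current clusters
)

// chooseStream inspects the remote cluster (reached with the credentials the
// MigCluster references) and decides which Velero stream it needs. Here the
// check is whether the cluster serves apiextensions.k8s.io/v1; the real
// heuristic could equally be a server-version check.
func chooseStream(cfg *rest.Config) (veleroStream, error) {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return "", err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return "", err
	}
	for _, g := range groups.Groups {
		if g.Name != "apiextensions.k8s.io" {
			continue
		}
		for _, v := range g.Versions {
			if v.Version == "v1" {
				return currentVelero, nil // cluster can serve v1 CRDs
			}
		}
	}
	return legacyVelero, nil // only v1beta1 CRDs available
}

// installRemoteCluster is the hypothetical entry point called from the
// MigCluster reconcile: pick a stream, then apply the matching manifests.
func installRemoteCluster(cfg *rest.Config) error {
	stream, err := chooseStream(cfg)
	if err != nil {
		return err
	}
	return applyVeleroManifests(cfg, stream)
}

// applyVeleroManifests would create or patch the Velero CRDs, Deployment, and
// plugins for the chosen stream; omitted here.
func applyVeleroManifests(cfg *rest.Config, stream veleroStream) error {
	_, _ = cfg, stream
	return nil
}
```

The same kind of check is also what would let the controller pick between the legacy and
current Velero streams discussed in the next section.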
### We are going to accept the burden of maintaining a legacy flavor of Velero
The current expectation is that Velero 1.6.z is the last series of Velero releases
that will work with both v1beta1 CRDs (required for OCP 4.2 and older) and v1
CRDs (required for OCP 4.9 and newer). Starting with Velero 1.7, we expect that
v1beta1 CRDs will not be included. Once we have a version of MTC which supports
Velero 1.7, two things will be needed:
1) We will need two different sets of Velero images (velero, velero-restic-restore-helper,
velero-plugin-for-aws, velero-plugin-for-gcp, velero-plugin-for-microsoft-azure):
one for 1.6.z and its plugins, and one for 1.7+ and its plugins. This
means each MTC release will need two different Velero release branches (release-1.y.z
and release-1.y.z-velero-1.6), and we will need to test both of them with the
release (the two image sets are sketched after this list).
2) The expectation is that Velero 1.6.z will be out of support upstream, which means
that we may need to backport CVE fixes and certain bugfixes (essentially blocker bugs
that affect backup creation), since upstream CVE and bug fixes will only go into the
latest releases.
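
To make the release-engineering cost of point 1 concrete, here is a small sketch of the
per-release dual image sets; the component names mirror the list above, while the registry
path, tags, and stream keys are placeholders rather than real build output.

```go
package images

// veleroImageSet lists the images that have to be built and tested for one
// Velero stream in a given MTC release.
type veleroImageSet struct {
	Branch string   // Velero release branch the images are built from
	Images []string // Velero core, restic restore helper, and cloud plugins
}

// imageSets is the per-release mapping: one set built from the supported
// upstream branch, one from the frozen 1.6 branch kept alive for v1beta1 CRDs.
var imageSets = map[string]veleroImageSet{
	"current": {
		Branch: "release-1.y.z",
		Images: []string{
			"registry.example.com/mtc/velero:1.7",
			"registry.example.com/mtc/velero-restic-restore-helper:1.7",
			"registry.example.com/mtc/velero-plugin-for-aws:1.7",
			"registry.example.com/mtc/velero-plugin-for-gcp:1.7",
			"registry.example.com/mtc/velero-plugin-for-microsoft-azure:1.7",
		},
	},
	"legacy": {
		Branch: "release-1.y.z-velero-1.6",
		Images: []string{
			"registry.example.com/mtc/velero:1.6",
			"registry.example.com/mtc/velero-restic-restore-helper:1.6",
			"registry.example.com/mtc/velero-plugin-for-aws:1.6",
			"registry.example.com/mtc/velero-plugin-for-gcp:1.6",
			"registry.example.com/mtc/velero-plugin-for-microsoft-azure:1.6",
		},
	},
}
```

Every entry in both sets would need to be built, shipped, and tested for each MTC release.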
It should be noted that we are already running Velero on 3.x clusters in an out-of-support
configuration, as Velero 1.6 is only supported on Kubernetes 1.12 and newer. With
this change, however, we will be running a Velero release that is no longer supported at
all on a Kubernetes version that it never officially supported. Essentially, we're on our
own in terms of bugfixes and other issues with it.
#### Alternative to maintaining legacy 1.6.z Velero for OCP 3 only
As an alternative way around the problem of Velero 1.7+ no longer supporting v1beta1 CRDs,
we could modify our Velero fork releases to generate the CRDs that upstream will stop
generating after 1.6. This doesn't reduce our maintenance burden, though; it merely shifts it
around. We would then be on the hook for maintaining v1beta1 CRD generation for Velero
releases that upstream has never generated them for. There is no guarantee that the upstream
releases in question won't make use of v1 CRD features that are incompatible with v1beta1. Beyond
CRD concerns, there's also no guarantee that upstream won't make code changes in other
areas that are no longer compatible with legacy Kubernetes versions. The issue is that
in this scenario our maintenance burden extends to new feature development rather than simply
backporting bugfixes. When we previously discussed this option, the feeling was that this
alternative is actually riskier than the two-streams-of-Velero approach.