# OpenShift ArgoCD / Velero notes

*Moving to https://github.com/weshayutin/gitops-oadp*

## Steps

1. Install the `Red Hat OpenShift GitOps` operator.

2. Apply the ClusterRoleBinding:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8sadmin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: openshift-gitops-argocd-application-controller
    namespace: openshift-gitops
```

3. Get the Argo CLI logged in and working.

Route:

```
argoURL=$(oc get route openshift-gitops-server -n openshift-gitops -o jsonpath='{.spec.host}{"\n"}')
```

Password:

```
argoPass=$(oc get secret/openshift-gitops-cluster -n openshift-gitops -o jsonpath='{.data.admin\.password}' | base64 -d)
```

Login via the CLI:

```
argocd login --insecure --grpc-web $argoURL --username admin --password $argoPass
```

4. Create the AppProject if needed. **Note:** I've had to delete and recreate it for the app sync to be successful.

```
oc get appproject -n openshift-gitops
NAME      AGE
default   3m54s
```

```
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: openshift-gitops
spec:
  sourceRepos:
    - '*'
  destinations:
    - namespace: '*'
      server: '*'
  clusterResourceWhitelist:
    - group: '*'
      kind: '*'
```

5. Update the `ArgoCD` custom resource spec:

```
oc get argocd openshift-gitops -n openshift-gitops
```

Edit the CR (e.g. `oc edit argocd openshift-gitops -n openshift-gitops`) and add the following:

```
spec:
  resourceExclusions: |
    - apiGroups:
        - '*'
      clusters:
        - '*'
      kinds:
        - PersistentVolumeClaim
        - PersistentVolume
```

6. Create a sample app (with manifests in one directory):
https://github.com/weshayutin/oadp-operator/tree/argocd/tests/e2e/sample-applications/argocd/mysql-persistent-aws

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demoapp-mysql
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/weshayutin/oadp-operator.git
    path: tests/e2e/sample-applications/argocd/mysql-persistent-aws/
    targetRevision: argocd
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: mysql-persistent
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    retry:
      limit: 1
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m0s
```

7. If applications will not synchronize, restart the application controller:

```
oc get pods -n openshift-gitops
oc delete pod openshift-gitops-application-controller-0 -n openshift-gitops
```

8. Initial deployment:

```
oc create -f argocd/mysql-persistent-aws/mysql-persistent-twovol-csi.yaml -f argocd/mysql-persistent-aws/aws.yaml
```

9. Wait for the application to come online:

```
[whayutin@thinkdoe sample-applications]$ oc get all -n mysql-persistent
NAME                         READY   STATUS      RESTARTS   AGE
pod/mysql-7bc95589b4-wg95p   1/1     Running     0          8m11s
pod/todolist-1-deploy        0/1     Completed   0          8m11s
pod/todolist-1-hpf4f         1/1     Running     0          8m8s

NAME                               DESIRED   CURRENT   READY   AGE
replicationcontroller/todolist-1   1         1         1       8m11s

NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/mysql      ClusterIP   172.30.236.41    <none>        3306/TCP   8m11s
service/todolist   ClusterIP   172.30.179.175   <none>        8000/TCP   8m11s

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mysql   1/1     1            1           8m11s

NAME                               DESIRED   CURRENT   READY   AGE
replicaset.apps/mysql-7bc95589b4   1         1         1       8m11s

NAME                                          REVISION   DESIRED   CURRENT   TRIGGERED BY
deploymentconfig.apps.openshift.io/todolist   1          1         1         config

NAME                                      HOST/PORT                                                                                 PATH   SERVICES   PORT    TERMINATION   WILDCARD
route.route.openshift.io/todolist-route   todolist-route-mysql-persistent.apps.cluster-wdh05252023a.wdh05252023a.mg.dog8code.com   /      todolist   <all>   None
```

10. Add some data to the todolist (see the snippet below for finding the app's URL).
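To find the app's URL for step 10, the route host can be read back from the cluster. A minimal sketch; the `todolist-route` name and namespace are taken from the `oc get all` output above, and plain `http` is assumed since the route shows no TLS termination:

```
todoRoute=$(oc get route todolist-route -n mysql-persistent -o jsonpath='{.spec.host}')
echo "http://${todoRoute}/"
```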
11. Take a backup:

```
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-persistent-argo-1
  namespace: openshift-adp
spec:
  includedNamespaces:
    - mysql-persistent
  includedResources:
    - pv
    - pvc
  storageLocation: dpa-sample-1
  ttl: 720h0m0s
```

```
oc create -f argocd/mysql-backup.yaml
```

12. Wait for the backup to complete (the Backup's `status.phase` should reach `Completed`).

13. Delete the namespace:

```
oc delete namespace mysql-persistent
```

14. Restore the Velero backup:

```
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: mysql-persistent-argo-restore
  namespace: openshift-adp
spec:
  backupName: mysql-persistent-argo-1
  restorePVs: true
```

```
oc create -f argocd/mysql-restore.yaml
```

15. Sync the Argo app (from the UI, or `argocd app sync demoapp-mysql` from the CLI).

16. Done!

# Tiger's notes

- Got this working by allowing a `Pending` PVC to be considered healthy.
- Giving cluster-admin to the `openshift-gitops-argocd-application-controller` service account (step 2 above).
- <wes> I was able to deploy mysql-persistent w/o changing the health check https://termbin.com/da2a </wes>

ArgoCD resource customization to consider a `Pending` PVC as healthy and unblock the sync:

```
resourceCustomizations: |
  PersistentVolumeClaim:
    health.lua: |
      hs = {}
      if obj.status ~= nil then
        if obj.status.phase ~= nil then
          if obj.status.phase == "Pending" then
            hs.status = "Healthy"
            hs.message = obj.status.phase
            return hs
          end
          if obj.status.phase == "Bound" then
            hs.status = "Healthy"
            hs.message = obj.status.phase
            return hs
          end
        end
      end
      hs.status = "Progressing"
      hs.message = "Waiting for PVC"
      return hs
```

Ideas: ignore PVCs in Argo, restore PVCs with Velero, and let Argo sync the deployments/deploymentconfigs/etc.

A Velero backup spec that works. The namespace may also need to be included if there is ever a UID issue on apps without an explicit SCC:

```
spec:
  csiSnapshotTimeout: 10m0s
  defaultVolumesToFsBackup: false
  includedNamespaces:
    - mysql-persistent
  includedResources:
    - pv
    - pvc
  itemOperationTimeout: 1h0m0s
  storageLocation: velero-sample-1
  ttl: 720h0m0s
```

#### To prevent VolumeSnapshots from being deleted during restore, set `resourceTrackingMethod` to `annotation+label`

https://argo-cd.readthedocs.io/en/stable/user-guide/resource_tracking/#choosing-a-tracking-method
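On OpenShift GitOps the tracking method can be set through the operator's `ArgoCD` custom resource rather than by editing `argocd-cm` directly. A minimal sketch, assuming the `openshift-gitops` CR used in the steps above and that the installed operator version exposes `spec.resourceTrackingMethod`:

```
oc patch argocd openshift-gitops -n openshift-gitops --type=merge \
  -p '{"spec":{"resourceTrackingMethod":"annotation+label"}}'
```

After the operator reconciles, the setting should show up as `application.resourceTrackingMethod: annotation+label` in the `argocd-cm` ConfigMap.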