Full OpenEBS installation with two MinIO applications deployed: one (`minio-1-deployment-745dcdc8bd-mc9qs`) on a CSI volume (CSPC v1 based) and the other (`minio-deployment-77bddd4df4-kttfn`) on a non-CSI volume (SPC based).

```bash=
k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
minio-1-deployment-745dcdc8bd-mc9qs   1/1     Running   0          6m31s
minio-deployment-77bddd4df4-kttfn     1/1     Running   0          43m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
minio-1-pv-claim   Bound    pvc-5c06d85e-7da9-4048-b7e0-6fbdf883e05b   10Gi       RWO            openebs-csi-cstor   6m38s
minio-pv-claim     Bound    pvc-b35aabf1-f1b2-4ace-8f8f-fdbb41a2a1c6   10G        RWO            openebs-cstor-spc   43m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS        REASON   AGE
pvc-5c06d85e-7da9-4048-b7e0-6fbdf883e05b   10Gi       RWO            Delete           Bound    default/minio-1-pv-claim   openebs-csi-cstor            6m40s
pvc-b35aabf1-f1b2-4ace-8f8f-fdbb41a2a1c6   10G        RWO            Delete           Bound    default/minio-pv-claim     openebs-cstor-spc            43m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get spc
NAME        AGE
cstor-spc   47m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get csp
NAME             ALLOCATED   FREE    CAPACITY   STATUS    READONLY   TYPE       AGE
cstor-spc-27up   664K        19.9G   19.9G      Healthy   false      mirrored   47m
cstor-spc-3d9r   586K        19.9G   19.9G      Healthy   false      mirrored   47m
cstor-spc-mb6u   576K        19.9G   19.9G      Healthy   false      mirrored   47m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get cvr.openebs.io -n openebs
NAME                                                      USED    ALLOCATED   STATUS    AGE
pvc-b35aabf1-f1b2-4ace-8f8f-fdbb41a2a1c6-cstor-spc-27up   8.59M   220K        Healthy   43m
pvc-b35aabf1-f1b2-4ace-8f8f-fdbb41a2a1c6-cstor-spc-3d9r   8.59M   220K        Healthy   43m
pvc-b35aabf1-f1b2-4ace-8f8f-fdbb41a2a1c6-cstor-spc-mb6u   8.59M   220K        Healthy   43m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get cspc -n openebs
NAME          HEALTHYINSTANCES   PROVISIONEDINSTANCES   DESIREDINSTANCES   AGE
cspc-mirror   3                  3                      3                  17m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get cspi -n openebs
NAME               HOSTNAME           ALLOCATED   FREE     CAPACITY    READONLY   TYPE     STATUS   AGE
cspc-mirror-2fq2   worker3-ashutosh   314k        19300M   19300314k   false      mirror   ONLINE   17m
cspc-mirror-74zb   worker2-ashutosh   1930k       19300M   19301930k   false      mirror   ONLINE   17m
cspc-mirror-qz9k   worker1-ashutosh   309k        19300M   19300309k   false      mirror   ONLINE   17m

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get cvr.cstor.openebs.io -n openebs
NAME                                                        USED    ALLOCATED   STATUS    AGE
pvc-5c06d85e-7da9-4048-b7e0-6fbdf883e05b-cspc-mirror-2fq2   7.86M   148K        Healthy   7m27s
pvc-5c06d85e-7da9-4048-b7e0-6fbdf883e05b-cspc-mirror-74zb   7.86M   147K        Healthy   7m27s
pvc-5c06d85e-7da9-4048-b7e0-6fbdf883e05b-cspc-mirror-qz9k   7.86M   147K        Healthy   7m27s

k8s@master1-ashutosh:~/cstor-operators/minio$ kubectl get pod -n openebs
NAME                                                              READY   STATUS    RESTARTS   AGE
cspc-mirror-2fq2-6fc877d465-rkqn2                                 3/3     Running   0          18m
cspc-mirror-74zb-b9cf84fb4-k84k4                                  3/3     Running   0          18m
cspc-mirror-qz9k-7467b58679-lccm4                                 3/3     Running   0          18m
cspc-operator-8c47ff9c6-kcghg                                     1/1     Running   0          45m
cstor-spc-27up-77c7cd556-24j4s                                    3/3     Running   0          48m
cstor-spc-3d9r-56477466d9-9zrnl                                   3/3     Running   0          48m
cstor-spc-mb6u-5677d9b8d-4knpm                                    3/3     Running   0          48m
cvc-operator-6ddd6fcdbb-nzsks                                     1/1     Running   0          45m
maya-apiserver-5d5b7cd4c6-8ql2m                                   1/1     Running   0          56m
openebs-admission-server-79d67c69db-99rnr                         1/1     Running   0          56m
openebs-cstor-admission-server-6755dd887c-9tgfg                   1/1     Running   0          45m
openebs-localpv-provisioner-8649f446cf-nh257                      1/1     Running   0          56m
openebs-ndm-h9b7f                                                 1/1     Running   0          56m
openebs-ndm-m4zd2                                                 1/1     Running   0          56m
openebs-ndm-operator-69957b7869-8c52b                             1/1     Running   0          56m
openebs-ndm-srgwq                                                 1/1     Running   0          56m
openebs-provisioner-b8bc47dd-tmj28                                1/1     Running   0          56m
openebs-snapshot-operator-6f4f7455f5-5qbrs                        2/2     Running   0          56m
pvc-5c06d85e-7da9-4048-b7e0-6fbdf883e05b-target-75bb95754dzdslb   3/3     Running   1          7m37s
pvc-b35aabf1-f1b2-4ace-8f8f-fdbb41a2a1c6-target-649dc5c6b-dsh52   3/3     Running   0          44m
```
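The backup in Step 1 below relies on Velero together with the OpenEBS velero-plugin, which snapshots cStor volumes through a `VolumeSnapshotLocation` with provider `openebs.io/cstor-blockstore`. Below is a minimal sketch of the `default` location referenced by the backup command, assuming an S3-compatible object store; the bucket, prefix, region and s3Url values are placeholders, not taken from this cluster.

```bash=
# Sketch of a VolumeSnapshotLocation for the OpenEBS velero-plugin.
# bucket/prefix/region/s3Url below are illustrative placeholders; point them
# at your own object store (see the velero-plugin docs for all config keys).
cat <<EOF | kubectl apply -f -
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: openebs.io/cstor-blockstore
  config:
    bucket: velero
    prefix: cstor
    provider: aws
    region: minio
    s3ForcePathStyle: "true"
    s3Url: http://minio.velero.svc:9000
EOF
```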
Backup/Restore

Step 1: Take a backup using the `velero backup` command:
`velero backup create harmonybackup --include-namespaces=default --snapshot-volumes --volume-snapshot-locations=default`

```bash=
$ velero backup describe harmonybackup
Name:         harmonybackup
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/source-cluster-k8s-gitversion=v1.18.0
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=18

Phase:  Completed

Namespaces:
  Included:  default
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  true

TTL:  720h0m0s

Hooks:  <none>

Backup Format Version:  1

Started:    2020-06-11 19:23:23 +0530 IST
Completed:  2020-06-11 19:23:57 +0530 IST

Expiration:  2020-07-11 19:23:23 +0530 IST

Total items to be backed up:  142
Items backed up:              142

Velero-Native Snapshots:  2 of 2 snapshots completed successfully (specify --details for more information)
```

Step 2: Delete both MinIO applications along with their volumes.

Step 3: Restore using the `velero restore` command:
`velero restore create --from-backup harmonybackup --restore-volumes=true`

```bash=
$ velero restore describe harmonybackup-20200611192544
Name:         harmonybackup-20200611192544
Namespace:    velero
Labels:       <none>
Annotations:  <none>

Phase:  Completed

Warnings:
  Velero:     <none>
  Cluster:    persistentvolumes "pvc-6713ac1c-3419-43df-ae5f-bdf270636d16" not found
              persistentvolumes "pvc-9b7ddf35-b8c2-4673-8918-9fc284152886" not found
  Namespaces:
    default:  could not restore, persistentvolumeclaims "minio-1-pv-claim" already exists. Warning: the in-cluster version is different than the backed-up version.
              could not restore, persistentvolumeclaims "minio-pv-claim" already exists. Warning: the in-cluster version is different than the backed-up version.
              could not restore, services "kubernetes" already exists. Warning: the in-cluster version is different than the backed-up version.

Backup:  harmonybackup

Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto

Namespace mappings:  <none>

Label selector:  <none>

Restore PVs:  true
```

Step 4: Set the target IP (how to do that: follow https://github.com/openebs/velero-plugin#creating-a-restore-for-remote-backup); a sketch of the procedure is shown below.
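Per the velero-plugin README linked above, Step 4 amounts to pointing each restored cStor replica at its new target service. The commands below are a sketch of that procedure, not output from this cluster: the pool pod, pool and dataset names are placeholders to be read from `kubectl get pod -n openebs` and from `zfs list` inside the pool pod.

```bash=
# Sketch only; <POOL_POD_NAME>, <pool>/<volume-dataset> and <TARGET_SVC_IP>
# are placeholders, not values from the cluster in this note.

# 1. Find the ClusterIP of the restored volume's iSCSI target service.
kubectl get svc -n openebs | grep pvc-6d54583d-a222-47f3-a407-7494b27c6ace

# 2. In each cStor pool pod hosting a replica of that volume, point the
#    replica at the target IP via the zfs property cStor uses for this.
kubectl exec -it <POOL_POD_NAME> -n openebs -c cstor-pool -- bash
zfs list                                        # locate <pool>/<volume-dataset>
zfs set io.openebs:targetip=<TARGET_SVC_IP> <pool>/<volume-dataset>
```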
Voila! Both MinIO applications and their volumes came back online.

```bash=
k8s@master1-ashutosh:~$ kubectl get pod
NAME                                  READY   STATUS    RESTARTS   AGE
minio-1-deployment-745dcdc8bd-kp2zr   1/1     Running   0          17m
minio-deployment-77bddd4df4-xtqkj     1/1     Running   0          17m
```

```bash=
k8s@master1-ashutosh:~$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
minio-1-pv-claim   Bound    pvc-e7990c7e-e1b5-41be-9444-9bb3a2f318bf   10Gi       RWO            openebs-csi-cstor   18m
minio-pv-claim     Bound    pvc-6d54583d-a222-47f3-a407-7494b27c6ace   10G        RWO            openebs-cstor-spc   18m

k8s@master1-ashutosh:~$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS        REASON   AGE
pvc-6d54583d-a222-47f3-a407-7494b27c6ace   10G        RWO            Delete           Bound    default/minio-pv-claim     openebs-cstor-spc            18m
pvc-e7990c7e-e1b5-41be-9444-9bb3a2f318bf   10Gi       RWO            Delete           Bound    default/minio-1-pv-claim   openebs-csi-cstor            18m

k8s@master1-ashutosh:~$ kubectl get cvr.openebs.io -n openebs
NAME                                                      USED    ALLOCATED   STATUS    AGE
pvc-6d54583d-a222-47f3-a407-7494b27c6ace-cstor-spc-5bfh   7.89M   188K        Healthy   18m
pvc-6d54583d-a222-47f3-a407-7494b27c6ace-cstor-spc-6dth   7.89M   188K        Healthy   18m
pvc-6d54583d-a222-47f3-a407-7494b27c6ace-cstor-spc-wz25   7.89M   188K        Healthy   18m

k8s@master1-ashutosh:~$ kubectl get cvr.cstor.openebs.io -n openebs
NAME                                                        USED    ALLOCATED   STATUS    AGE
pvc-e7990c7e-e1b5-41be-9444-9bb3a2f318bf-cspc-mirror-6mgc   6.72M   173K        Healthy   18m
pvc-e7990c7e-e1b5-41be-9444-9bb3a2f318bf-cspc-mirror-7xqq   6.83M   174K        Healthy   18m
pvc-e7990c7e-e1b5-41be-9444-9bb3a2f318bf-cspc-mirror-x6dh   7.05M   178K        Healthy   18m
k8s@master1-ashutosh:~$
```
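As an extra sanity check (not part of the original steps), the restored data can be listed from inside the MinIO pods. This assumes the deployments mount their PVCs at `/data`, which is the usual path in MinIO example manifests; adjust if your manifest differs.

```bash=
# Hypothetical data check; /data is an assumed mount path, adjust to your manifest.
kubectl exec deploy/minio-deployment -- ls -lR /data
kubectl exec deploy/minio-1-deployment -- ls -lR /data
```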