# OADP operator
###### tags: `OADP`, `Debug`
https://github.com/openshift/oadp-operator
[E2E Upstream on AWS](https://docs.google.com/document/d/1yHXwCP5yGBcVvIwWzInjBeD5UB194RIxL1CW1s3Av6Y/edit)
[Understanding E2E](https://github.com/openshift/velero/tree/konveyor-dev/test/e2e)
## following the README
```
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
[ec2-user@clientvm 0 ~/git/oadp-operator master ⭑|✔]$ oc version
Client Version: 4.8.0
Server Version: 4.8.0
Kubernetes Version: v1.21.1+f36aa36
```
## add to doc (maybe)
* `sudo yum install make`
* ~~sudo yum install golang~~
* the preinstalled go version was go1.11.5 linux/amd64 (too old)
* Go downloads: https://golang.org/dl/
```
wget https://golang.org/dl/go1.16.8.linux-amd64.tar.gz
sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.16.8.linux-amd64.tar.gz
sudo ln -s /usr/local/go/bin/go /usr/local/bin/
go version
```
### the go version matters
Go 1.16 is required; check the pinned version at https://github.com/openshift/oadp-operator/blob/master/go.mod#L3
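As a quick sanity check, the required version can be parsed straight out of `go.mod`. A minimal sketch (shown against a sample `go` directive; in practice run the `awk` against the repo's real `go.mod` from the repo root):

```shell
# Sketch: parse the Go version directive out of a go.mod file.
# sample_gomod stands in for the repo's real go.mod.
sample_gomod='module github.com/openshift/oadp-operator

go 1.16'
required=$(printf '%s\n' "$sample_gomod" | awk '$1 == "go" {print $2}')
echo "go.mod requires Go $required"   # → go.mod requires Go 1.16
```

Compare that against the `go version` output on the client VM before running any make targets.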
## install go deps
```
go mod tidy
go mod vendor
```
## execute unit tests
```
sudo yum install -y gcc
make test
```
## using an agnosticd 4.x deployment, so remove the installed OADP operator
```
oc get namespaces | grep -i oadp
oc delete namespace/oadp-operator-system
```
## clean up after other attempts
```
make undeploy
```
## README - make
```
make deploy
```
* build the containers from source prior to `make deploy`:
* `make docker-build`
```
sudo yum install -y docker
sudo systemctl start docker
sudo groupadd docker
sudo usermod -aG docker $USER
sudo chmod 666 /var/run/docker.sock
sudo systemctl restart docker
sudo systemctl status docker
```
## check results of deploy
```
oc get all -n oadp-operator-system
OR
oc get all -n openshift-adp
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/oadp-operator-controller-manager-5d6fdff85c-whj2f   2/2     Running   0          45s

NAME                                                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oadp-operator-controller-manager-metrics-service   ClusterIP   172.30.178.27   <none>        8443/TCP   45s

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oadp-operator-controller-manager   1/1     1            1           45s

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/oadp-operator-controller-manager-5d6fdff85c   1         1         1       45s
```
## Creating a credentials secret
Create a secret file with the following content:
```
[default]
aws_access_key_id=
aws_secret_access_key=
```
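A minimal sketch for writing that file (`creds_file` is relative here; on the client VM the README's `oc create secret` command reads it from /home/ec2-user/aws.creds, and the key values are placeholders):

```shell
# Sketch: write the credentials file the secret is created from.
# creds_file and the <...> values are placeholders.
creds_file=aws.creds
cat > "$creds_file" <<'EOF'
[default]
aws_access_key_id=<access_key>
aws_secret_access_key=<secret_key>
EOF
chmod 600 "$creds_file"   # keep credentials private
```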
### create the key file
```
oc create secret generic cloud-credentials --namespace oadp-operator-system --from-file cloud=/home/ec2-user/aws.creds
# or, with the newer namespace:
oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=/home/ec2-user/aws.creds
```
### results from oc secret
```
oc project oadp-operator-system
OR
oc project openshift-adp
oc get secrets | grep cloud-credentials
cloud-credentials   Opaque   1   41s
```
~~### create required resources~~
~~oc create -f deploy/olm-catalog/bundle/manifests~~
## create Velero resources
* check the cloud region in the sample file; it should match your current region
```
oc create -n oadp-operator-system -f config/samples/oadp_v1alpha1_velero.yaml
OR
oc create -n openshift-adp -f config/samples/oadp_v1alpha1_velero.yaml
```
* my sample config
* alt bucket - valero-east-8675309
```
apiVersion: oadp.openshift.io/v1alpha1
kind: Velero
metadata:
  name: velero-sample
  namespace: oadp-operator-system # OR openshift-adp
spec:
  olmManaged: false
  backupStorageLocations:
    - default: true
      backupSyncPeriod: 2m0s
      provider: aws
      objectStorage:
        bucket: valero-8675309
        prefix: valero
      credential:
        name: cloud-credentials
        key: cloud
      config:
        profile: default
        region: us-west-2
  volumeSnapshotLocations:
    - provider: aws
      config:
        region: us-west-2
        profile: "default"
  enableRestic: true
  defaultVeleroPlugins:
    - openshift
    - aws
    - csi
```
```
apiVersion: oadp.openshift.io/v1alpha1
kind: Velero
metadata:
  name: velero-sample
spec:
  # Add fields here
  olmManaged: false
  backupStorageLocations:
    - provider: aws
      default: true
      objectStorage:
        bucket: velero-61snip
        prefix: velero
      config:
        region: us-east-1
        profile: "default"
      credential:
        name: cloud-credentials
        key: cloud
  defaultVeleroPlugins:
    - openshift
    - aws
```

## logs
### kube-rbac-proxy
```
oc logs -f pod/oadp-operator-controller-manager-5d6fdff85c-bnmd8 -c kube-rbac-proxy
I0916 20:09:52.713760 1 main.go:190] Valid token audiences:
I0916 20:09:52.713904 1 main.go:262] Generating self signed cert as no cert is provided
I0916 20:09:53.482340 1 main.go:311] Starting TCP socket on 0.0.0.0:8443
I0916 20:09:53.482952 1 main.go:318] Listening securely on 0.0.0.0:8443
```
### manager
```
oc logs pod/oadp-operator-controller-manager-5d6fdff85c-tw4wb -c manager &> /tmp/manager1.log
```
Full log - https://termbin.com/7uz7
* snip
```
2021-09-17T15:25:41.815Z ERROR controller-runtime.manager.controller.velero Reconciler error {"reconciler group": "oadp.openshift.io", "reconciler kind": "Velero", "name": "velero-sample", "namespace": "oadp-operator-system", "error": "no matches for kind \"BackupStorageLocation\" in version \"velero.io/v1\""}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.5/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.9.5/pkg/internal/controller/controller.go:214
I0917 15:25:47.986837 1 request.go:665] Waited for 1.042095874s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/apiserver.openshift.io/v1?timeout=32s
2021-09-17T15:25:50.054Z DEBUG controller-runtime.manager.events Normal {"object": {"kind":"BackupStorageLocation","namespace":"oadp-operator-system","name":"velero-sample-1","uid":"e3939acd-41c1-4675-ae8f-5539c8096651","apiVersion":"velero.io/v1","resourceVersion":"1356345"}, "reason": "BackupStorageLocationReconciled", "message": "performed created on backupstoragelocation oadp-operator-system/velero-sample-1"}
2021-09-17T15:25:50.094Z DEBUG controller-runtime.manager.events Normal {"object": {"kind":"Deployment","namespace":"oadp-operator-system","name":"oadp-velero-sample-1-aws-registry","uid":"bebb868a-347d-4058-a79c-0565473b74c8","apiVersion":"apps/v1","resourceVersion":"1356347"}, "reason": "RegistryDeploymentReconciled", "message": "performed created on registry deployment oadp-operator-system/oadp-velero-sample-1-aws-registry"}
2021-09-17T15:25:50.116Z DEBUG controller-runtime.manager.events Normal {"object": {"kind":"BackupStorageLocation","namespace":"oadp-operator-system","name":"velero-sample-1","uid":"e3939acd-41c1-4675-ae8f-5539c8096651","apiVersion":"velero.io/v1","resourceVersion":"1356345"}, "reason": "RegistryServicesReconciled", "message": "performed created on service oadp-operator-system/oadp-velero-sample-1-aws-registry-svc"}
```
### follow-up notes
* from Tiger:
  * you have to docker build and push an image, then `IMG=quay.io/... make deploy`, to actually use my branch
  * and you still have to set `olmManaged: false` to utilize the fix
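That flow, sketched as a dry run (the quay.io path is a placeholder for your own registry repo, and the commands are printed rather than executed):

```shell
# Dry-run sketch of "build and push your image, then deploy with IMG".
# Replace the quay.io path with your own repo before running for real.
IMG="quay.io/${QUAY_USER:-<your-user>}/oadp-operator:latest"
for target in docker-build docker-push deploy; do
  echo "make $target IMG=$IMG"
done
```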
### E2E-tests
* need to update doc
#### Install Ginkgo
```
go get -u github.com/onsi/ginkgo/ginkgo
export PATH=$PATH:$HOME/go/bin/
```
#### Setup backup storage configuration
To get started, the test suite expects two files to use as configuration for Velero's backup storage: one containing your credentials, and another containing additional configuration options (for now, just the bucket name).
By default the test suite looks for these files in /var/run/oadp-credentials, but the paths can be overridden with the environment variables OADP_AWS_CRED_FILE and OADP_S3_BUCKET.
Create these two files. OADP_AWS_CRED_FILE:
```
aws_access_key_id=<access_key>
aws_secret_access_key=<secret_key>
```
OADP_S3_BUCKET:
```
{
  "velero-bucket-name": "<bucket>"
}
```
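A sketch that lays both files out (`cred_dir` is a local directory here; on the client VM use /var/run/oadp-credentials, or point OADP_AWS_CRED_FILE / OADP_S3_BUCKET at wherever you put them — the key and bucket values are placeholders):

```shell
# Sketch: create the two files the e2e suite reads.
# cred_dir is relative here; the suite's default is /var/run/oadp-credentials.
cred_dir="${cred_dir:-oadp-credentials}"
mkdir -p "$cred_dir"

# credentials file (OADP_AWS_CRED_FILE), placeholder values
cat > "$cred_dir/aws-credentials" <<'EOF'
aws_access_key_id=<access_key>
aws_secret_access_key=<secret_key>
EOF

# bucket file (OADP_S3_BUCKET), read as JSON
cat > "$cred_dir/velero-bucket-name" <<'EOF'
{
  "velero-bucket-name": "<bucket>"
}
EOF
```

The file names match the paths the test command passes (`-cloud=.../aws-credentials`, `-s3_bucket=.../velero-bucket-name`).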
#### Run Tests
```
make test-e2e
```
##### Error #1
```
[ec2-user@clientvm 0 ~/git/oadp-operator olmManaged ⭑|✚3…3]$ make test-e2e
ginkgo -mod=mod tests/e2e/ -- -cloud=/var/run/oadp-credentials/aws-credentials \
-s3_bucket=/var/run/oadp-credentials/velero-bucket-name -velero_namespace=oadp-operator-system \
-creds_secret_ref=cloud-credentials \
-velero_instance_name=velero-sample \
-region=us-west-2 \
-provider=aws
--- FAIL: TestOADPE2E (0.00s)
e2e_suite_test.go:28: Error getting bucket json file: open /var/run/oadp-credentials/velero-bucket-name: no such file or directory
FAIL
Ginkgo ran 1 suite in 2.990948957s
Test Suite Failed
make: *** [test-e2e] Error 1
```
* fix: `cp /var/run/oadp-credentials/OADP_S3_BUCKET /var/run/oadp-credentials/velero-bucket-name`
* the fix did not work; comparing with CI
## CI
* https://prow.ci.openshift.org/?type=periodic&job=*oadp*
* COMPARE w/ log https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/logs/periodic-ci-openshift-oadp-operator-master-operator-e2e-gcp-periodic-slack/1439801561214619648/artifacts/operator-e2e-gcp-periodic-slack/e2e/build-log.txt
## walking scenario stateless
https://github.com/openshift/oadp-operator/blob/oadp-0.3.0/docs/examples/stateless.md
### To run locally needed to add
* /var/run/oadp-credentials/aws-credentials ( same format )
## DEBUG
Inspect the resource requests/limits the CSV sets on the operator's manager container:
```
oc get $(oc get clusterserviceversion -n openshift-adp -l operators.coreos.com/oadp-operator.openshift-adp -oname) -n openshift-adp -ojsonpath={.spec.install.spec.deployments[0].spec.template.spec.containers[0].resources}
```