# Akri Bug Bash - Sept 2023
###### tags: `Bug Bash`
The goal of this bug bash is to discover any bugs before Akri creates a new release. This release contains major changes such as configuration-level resource support and improvements to the Onvif discovery handler for UUID filtering and authenticated access.
Please leave a comment in the [scenario outcomes](#Scenario-Outcomes) section with the scenarios you tested and whether they were successful. If you find issues, please create an issue on Akri's [GitHub](https://github.com/project-akri/akri) and note it in the [discovered issues](#Discovered-Issues-or-Enhancements) section.
As always, feel free to post any questions on Akri's [Slack](https://kubernetes.slack.com/messages/akri).
## Background
Akri is an Open Source project that automates the discovery and usage of IoT devices around Kubernetes clusters on the Edge. Akri can automatically deploy user-provided workloads to the discovered devices. It handles device appearance and disappearance, allowing device deployments to expand and contract and enabling high resource utilization.
## Setting Up an Environment
Akri is regularly tested on K3s, MicroK8s, and standard Kubernetes clusters versioned 1.16-1.21 (see the [previous release](https://github.com/project-akri/akri/releases) for the list of exact versions tested) with an Ubuntu 20.04 node. While we only test on these K8s distributions, feel free to try it out on the distribution and Linux OS of your choice. Here are some examples of what you can do:
- Hyper-V Ubuntu 20.04 VM
- Set up Linux VM with cloud provider
- Try out Akri on a managed Kubernetes service
## Scenarios
Choose any of the following scenarios (none is a prerequisite of the others). Make sure to use the **akri-dev chart** (`helm install akri akri-helm-charts/akri-dev`) when installing Akri with Helm. If you have previously added the Akri Helm repo, be sure to run `helm repo update`.
### Scenario A: Validate the configuration-level resource support with debug echo
```bash
# add akri helm charts repo
helm repo add akri-helm-charts https://project-akri.github.io/akri/
# ensure helm repos are up-to-date
helm repo update
```
Set the Kubernetes distribution being used. Here we use `k8s`; make sure to replace it with a value that matches the Kubernetes distribution you are using.
```bash
export AKRI_HELM_CRICTL_CONFIGURATION="--set kubernetesDistro=k8s"
```
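If you are running K3s or MicroK8s instead, the same variable takes those distribution names (assuming the Akri Helm chart's `kubernetesDistro` values of `k3s` and `microk8s`):
```bash
# For K3s:
export AKRI_HELM_CRICTL_CONFIGURATION="--set kubernetesDistro=k3s"
# For MicroK8s:
export AKRI_HELM_CRICTL_CONFIGURATION="--set kubernetesDistro=microk8s"
```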
Install an Akri Configuration named `akri-debug-echo-foo` that uses the debug echo discovery handler
```bash
helm install akri-debug-echo akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set debugEcho.discovery.enabled=true \
--set debugEcho.configuration.name=akri-debug-echo-foo \
--set debugEcho.configuration.enabled=true \
--set debugEcho.configuration.capacity=5 \
--set debugEcho.configuration.shared=true \
--set debugEcho.configuration.brokerPod.image.repository="nginx" \
--set debugEcho.configuration.brokerPod.image.tag="stable-alpine"
```
The command installs an Akri Configuration with the debug echo discovery handler, which discovers 2 debug echo devices (`foo0` and `foo1`). The capacity is `5`, and each broker pod requests `1` instance-level resource.
Here is the result of running the installation command above on a cluster with 1 control plane and 2 worker nodes. There are 2 broker pods running on each worker node.
```bash=
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-debug-echo-foo 5 2m58s
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-debug-echo-foo-8120fe akri-debug-echo-foo true ["kube-02","kube-03"] 2m44s
instance.akri.sh/akri-debug-echo-foo-a19705 akri-debug-echo-foo true ["kube-02","kube-03"] 2m45s
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-gk29m 1/1 Running 0 2m58s
pod/akri-agent-daemonset-rzc88 1/1 Running 0 2m58s
pod/akri-controller-deployment-7d786778cf-9mcfh 1/1 Running 0 2m58s
pod/akri-debug-echo-discovery-daemonset-4dhl2 1/1 Running 0 2m58s
pod/akri-debug-echo-discovery-daemonset-jd677 1/1 Running 0 2m58s
pod/akri-webhook-configuration-6b4f74c4cc-zkszc 1/1 Running 0 2m58s
pod/kube-02-akri-debug-echo-foo-8120fe-pod 1/1 Running 0 2m44s
pod/kube-02-akri-debug-echo-foo-a19705-pod 1/1 Running 0 2m45s
pod/kube-03-akri-debug-echo-foo-8120fe-pod 1/1 Running 0 2m44s
pod/kube-03-akri-debug-echo-foo-a19705-pod 1/1 Running 0 2m45s
```
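You can also inspect the Configuration and Instances directly. A quick check, assuming the `spec.capacity` and `spec.nodes` fields back the `CAPACITY` and `NODES` columns shown above:
```bash
# capacity defined in the Configuration (should print 5)
kubectl get akric akri-debug-echo-foo -o jsonpath='{.spec.capacity}' && echo
# nodes that can access each shared Instance
kubectl get akrii -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.nodes}{"\n"}{end}'
```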
#### Request configuration-level resources exceeding the limit
Now create a Deployment that requests 3 configuration-level resources in a container. Since only 2 instances are available, its pod will stay in the `Pending` state.
Create a YAML file `/tmp/nginx-deployment-3-resource.yaml` for the Deployment
```bash
cat > /tmp/nginx-deployment-3-resource.yaml<< EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-3-resource
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:stable-alpine
ports:
- containerPort: 80
resources:
limits:
"akri.sh/akri-debug-echo-foo": "3"
EOF
```
Apply the Deployment and check the pod status; it should be in the `Pending` state due to insufficient resources.
```bash=
$ kubectl apply -f /tmp/nginx-deployment-3-resource.yaml
deployment.apps/nginx-deployment-3-resource created
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-debug-echo-foo 5 18m
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-debug-echo-foo-8120fe akri-debug-echo-foo true ["kube-02","kube-03"] 18m
instance.akri.sh/akri-debug-echo-foo-a19705 akri-debug-echo-foo true ["kube-02","kube-03"] 18m
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-gk29m 1/1 Running 0 18m
pod/akri-agent-daemonset-rzc88 1/1 Running 0 18m
pod/akri-controller-deployment-7d786778cf-9mcfh 1/1 Running 0 18m
pod/akri-debug-echo-discovery-daemonset-4dhl2 1/1 Running 0 18m
pod/akri-debug-echo-discovery-daemonset-jd677 1/1 Running 0 18m
pod/akri-webhook-configuration-6b4f74c4cc-zkszc 1/1 Running 0 18m
pod/kube-02-akri-debug-echo-foo-8120fe-pod 1/1 Running 0 18m
pod/kube-02-akri-debug-echo-foo-a19705-pod 1/1 Running 0 18m
pod/kube-03-akri-debug-echo-foo-8120fe-pod 1/1 Running 0 18m
pod/kube-03-akri-debug-echo-foo-a19705-pod 1/1 Running 0 18m
pod/nginx-deployment-3-resource-5bc97ffc44-r5rtd 0/1 Pending 0 4s
$ kubectl describe pod nginx-deployment-3-resource-5bc97ffc44-r5rtd
Name: nginx-deployment-3-resource-5bc97ffc44-r5rtd
Namespace: default
Priority: 0
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient akri.sh/akri-debug-echo-foo. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
```
Delete the deployment
```bash
kubectl delete -f /tmp/nginx-deployment-3-resource.yaml
```
#### Request configuration-level resources within the limit
Now create a Deployment that requests 1 configuration-level resource in a container; this time the deployment should succeed.
Copy the YAML below and save it as `/tmp/nginx-deployment-1-resource.yaml`
```bash
cat > /tmp/nginx-deployment-1-resource.yaml<< EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment-1-resource
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:stable-alpine
ports:
- containerPort: 80
resources:
limits:
"akri.sh/akri-debug-echo-foo": "1"
EOF
```
Apply the Deployment and check the pod status; the pod should be in the `Running` state.
```bash=
$ kubectl apply -f /tmp/nginx-deployment-1-resource.yaml
deployment.apps/nginx-deployment-1-resource created
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-debug-echo-foo 5 27m
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-debug-echo-foo-8120fe akri-debug-echo-foo true ["kube-02","kube-03"] 27m
instance.akri.sh/akri-debug-echo-foo-a19705 akri-debug-echo-foo true ["kube-02","kube-03"] 27m
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-gk29m 1/1 Running 0 27m
pod/akri-agent-daemonset-rzc88 1/1 Running 0 27m
pod/akri-controller-deployment-7d786778cf-9mcfh 1/1 Running 0 27m
pod/akri-debug-echo-discovery-daemonset-4dhl2 1/1 Running 0 27m
pod/akri-debug-echo-discovery-daemonset-jd677 1/1 Running 0 27m
pod/akri-webhook-configuration-6b4f74c4cc-zkszc 1/1 Running 0 27m
pod/kube-02-akri-debug-echo-foo-8120fe-pod 1/1 Running 0 27m
pod/kube-02-akri-debug-echo-foo-a19705-pod 1/1 Running 0 27m
pod/kube-03-akri-debug-echo-foo-8120fe-pod 1/1 Running 0 27m
pod/kube-03-akri-debug-echo-foo-a19705-pod 1/1 Running 0 27m
pod/nginx-deployment-1-resource-6844748f48-zpfb7 1/1 Running 0 9s
```
Configuration-level resources and instance-level resources share the same set of device usage slots. If the Configuration's capacity is 5 and 2 devices are discovered, the total number of usable virtual devices is `5 * 2 = 10`; the sum of allocated configuration-level and instance-level resources cannot exceed 10. If the requested resource count exceeds what is available, the pod stays pending until resources become available.
We can check the resource usage on each node to see the configuration-level and instance-level resource counts. In the example below, the configuration-level resource is allocated on node `kube-03` and mapped to Instance `akri-debug-echo-foo-8120fe`, so the allocatable count for `akri-debug-echo-foo-8120fe` is 3 (5 - 1 instance-level - 1 configuration-level). Compare this to Instance `akri-debug-echo-foo-a19705`, whose allocatable count is 4 (5 - 1 instance-level), since only 1 resource is claimed at the instance level.
```bash=
$ kubectl describe node kube-02
Name: kube-02
...
Capacity:
akri.sh/akri-debug-echo-foo: 2
akri.sh/akri-debug-echo-foo-8120fe: 5
akri.sh/akri-debug-echo-foo-a19705: 5
...
Allocatable:
akri.sh/akri-debug-echo-foo: 2
akri.sh/akri-debug-echo-foo-8120fe: 3
akri.sh/akri-debug-echo-foo-a19705: 4
Allocated resources:
...
akri.sh/akri-debug-echo-foo 0 0
akri.sh/akri-debug-echo-foo-8120fe 1 1
akri.sh/akri-debug-echo-foo-a19705 1 1
$ kubectl describe node kube-03
Name: kube-03
...
Capacity:
akri.sh/akri-debug-echo-foo: 3
akri.sh/akri-debug-echo-foo-8120fe: 5
akri.sh/akri-debug-echo-foo-a19705: 5
...
Allocatable:
akri.sh/akri-debug-echo-foo: 3
akri.sh/akri-debug-echo-foo-8120fe: 3
akri.sh/akri-debug-echo-foo-a19705: 4
...
Allocated resources:
akri.sh/akri-debug-echo-foo 1 1
akri.sh/akri-debug-echo-foo-8120fe 1 1
akri.sh/akri-debug-echo-foo-a19705 1 1
...
```
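If `jq` is installed, a one-liner like the following (a convenience sketch, not part of the official flow) summarizes the akri.sh allocatable counts across all nodes:
```bash
# print each node's allocatable akri.sh resources
kubectl get nodes -o json | jq '.items[] | {node: .metadata.name, allocatable: (.status.allocatable | with_entries(select(.key | startswith("akri.sh"))))}'
```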
#### Clean up
Delete the deployment and the Akri installation to clean up the system.
```bash
kubectl delete -f /tmp/nginx-deployment-1-resource.yaml
helm delete akri-debug-echo
kubectl delete crd configurations.akri.sh
kubectl delete crd instances.akri.sh
```
### Scenario B: ONVIF Discovery Handler Filtering
**Make sure you have at least one Onvif camera that is reachable so the Onvif discovery handler can discover it.**
#### IP/MAC Address Filtering
Use the following Helm chart to deploy an Akri Configuration that uses the ONVIF discovery handler and verify that filtering works properly. Since we don't specify any IP/MAC address filters (which require username/password credentials), the Onvif discovery handler should properly discover your ONVIF device.
```bash
# add akri helm charts repo
helm repo add akri-helm-charts https://project-akri.github.io/akri/
# ensure helm repos are up-to-date
helm repo update
```
Set the Kubernetes distribution being used. Here we use `k8s`; make sure to replace it with a value that matches the Kubernetes distribution you are using.
```bash
export AKRI_HELM_CRICTL_CONFIGURATION="--set kubernetesDistro=k8s"
```
Install an Akri Configuration named `akri-onvif` that uses the Onvif discovery handler
```bash
helm install akri akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.discovery.enabled=true \
--set onvif.configuration.name=akri-onvif \
--set onvif.configuration.enabled=true \
--set onvif.configuration.capacity=3 \
--set onvif.configuration.brokerPod.image.repository="nginx" \
--set onvif.configuration.brokerPod.image.tag="stable-alpine"
```
The command installs an Akri Configuration with the Onvif discovery handler, which discovers the Onvif cameras connected to your network, regardless of whether authentication is enabled on them.
Here is the result of running the installation command above on a cluster with 1 control plane and 2 worker nodes. One Onvif camera is connected to the network, so 1 broker pod is running on each worker node.
```bash=
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-onvif 3 62s
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-onvif-029957 akri-onvif true ["kube-03","kube-02"] 48s
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-gnwb5 1/1 Running 0 62s
pod/akri-agent-daemonset-zn2gb 1/1 Running 0 62s
pod/akri-controller-deployment-56b9796c5-wqdwr 1/1 Running 0 62s
pod/akri-onvif-discovery-daemonset-wcp2f 1/1 Running 0 62s
pod/akri-onvif-discovery-daemonset-xml6t 1/1 Running 0 62s
pod/akri-webhook-configuration-75d9b95fbc-wqhgw 1/1 Running 0 62s
pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 48s
pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 48s
```
Environment variables `ONVIF_DEVICE_IP_ADDRESS_<instance ID>` and `ONVIF_DEVICE_MAC_ADDRESS_<instance ID>` are optional and are only created if the Onvif discovery handler can obtain the camera's address. You can dump the environment variables from the broker container to check their values. In the example below, the Onvif discovery handler cannot get the IP/MAC addresses of the Onvif camera; it still discovers the camera, but only the device uuid is exposed. **Write down the device uuid for later use**.
```bash=
$ kubectl exec -i kube-02-akri-onvif-029957-pod -- /bin/sh -c "printenv|grep ONVIF_DEVICE"
ONVIF_DEVICE_SERVICE_URL_029957=http://192.168.1.145:2020/onvif/device_service
ONVIF_DEVICE_UUID_029957=3fa1fe68-b915-4053-a3e1-ac15a21f5f91
```
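If you already know your camera's IP or MAC address, you can optionally exercise IP/MAC filtering with the same pattern used for UUID filtering below. A sketch, assuming the `ipAddresses` discovery detail and using the address from the service URL above (replace it with your camera's address; as noted earlier, resolving addresses may require the credentials covered in Scenario C):
```bash
# Include only the camera at 192.168.1.145
helm upgrade akri akri-helm-charts/akri-dev \
    $AKRI_HELM_CRICTL_CONFIGURATION \
    --set onvif.discovery.enabled=true \
    --set onvif.configuration.enabled=true \
    --set onvif.configuration.capacity=3 \
    --set onvif.configuration.brokerPod.image.repository="nginx" \
    --set onvif.configuration.brokerPod.image.tag="stable-alpine" \
    --set onvif.configuration.discoveryDetails.ipAddresses.action="Include" \
    --set onvif.configuration.discoveryDetails.ipAddresses.items[0]="192.168.1.145"
```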
#### UUID Filtering
Use the following command to upgrade the Helm installation and enable UUID filtering of your devices. Set the action to `Include` to verify the device is still discovered, or to `Exclude` to verify the discovery handler now filters out this device and no longer discovers it.
```bash
# Exclude the Onvif camera uuid=3fa1fe68-b915-4053-a3e1-ac15a21f5f91
helm upgrade akri akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.discovery.enabled=true \
--set onvif.configuration.enabled=true \
--set onvif.configuration.capacity=3 \
--set onvif.configuration.brokerPod.image.repository="nginx" \
--set onvif.configuration.brokerPod.image.tag="stable-alpine" \
--set onvif.configuration.discoveryDetails.uuids.action="Exclude" \
--set onvif.configuration.discoveryDetails.uuids.items[0]="3fa1fe68-b915-4053-a3e1-ac15a21f5f91"
```
No camera is discovered, since the only Onvif camera connected to the network is excluded.
```bash=
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-onvif 3 3m21s
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-2pgbs 1/1 Running 0 3m21s
pod/akri-agent-daemonset-j4szm 1/1 Running 0 3m21s
pod/akri-controller-deployment-56b9796c5-4v6lz 1/1 Running 0 3m21s
pod/akri-onvif-discovery-daemonset-f55bl 1/1 Running 0 3m21s
pod/akri-onvif-discovery-daemonset-stzhr 1/1 Running 0 3m21s
pod/akri-webhook-configuration-75d9b95fbc-4g4j2 1/1 Running 0 3m20s
```
Now upgrade the helm installation to include the camera uuid in the uuid filter.
```bash
# Include the Onvif camera uuid=3fa1fe68-b915-4053-a3e1-ac15a21f5f91
helm upgrade akri akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.discovery.enabled=true \
--set onvif.configuration.enabled=true \
--set onvif.configuration.capacity=3 \
--set onvif.configuration.brokerPod.image.repository="nginx" \
--set onvif.configuration.brokerPod.image.tag="stable-alpine" \
--set onvif.configuration.discoveryDetails.uuids.action="Include" \
--set onvif.configuration.discoveryDetails.uuids.items[0]="3fa1fe68-b915-4053-a3e1-ac15a21f5f91"
```
One camera is discovered.
```bash=
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-onvif 3 10m
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-onvif-029957 akri-onvif true ["kube-03","kube-02"] 9s
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-2pgbs 1/1 Running 0 10m
pod/akri-agent-daemonset-j4szm 1/1 Running 0 10m
pod/akri-controller-deployment-56b9796c5-4v6lz 1/1 Running 0 10m
pod/akri-onvif-discovery-daemonset-f55bl 1/1 Running 0 10m
pod/akri-onvif-discovery-daemonset-stzhr 1/1 Running 0 10m
pod/akri-webhook-configuration-75d9b95fbc-4g4j2 1/1 Running 0 10m
pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 9s
pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 9s
```
#### Clean up
Delete the Akri installation to clean up the system.
```bash
helm delete akri
kubectl delete crd configurations.akri.sh
kubectl delete crd instances.akri.sh
```
### Scenario C: Passing credentials to discover authenticated devices
Make sure you have at least one Onvif camera that is reachable so the Onvif discovery handler can discover it. To test accessing Onvif with credentials, make sure your Onvif camera has authentication enabled. **Write down the username and password**; they are required later in the test flow.
#### Acquire Onvif camera's device uuid
First, use the following Helm chart to deploy an Akri Configuration that uses the ONVIF discovery handler and check whether your camera is discovered.
```bash
# add akri helm charts repo
helm repo add akri-helm-charts https://project-akri.github.io/akri/
# ensure helm repos are up-to-date
helm repo update
```
Set the Kubernetes distribution being used. Here we use `k8s`; make sure to replace it with a value that matches the Kubernetes distribution you are using.
```bash
export AKRI_HELM_CRICTL_CONFIGURATION="--set kubernetesDistro=k8s"
```
Install an Akri Configuration named `akri-onvif` that uses the Onvif discovery handler
```bash
helm install akri akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.discovery.enabled=true \
--set onvif.configuration.name=akri-onvif \
--set onvif.configuration.enabled=true \
--set onvif.configuration.capacity=3 \
--set onvif.configuration.brokerPod.image.repository="nginx" \
--set onvif.configuration.brokerPod.image.tag="stable-alpine"
```
Here is the result of running the installation command above on a cluster with 1 control plane and 2 worker nodes. One Onvif camera is connected to the network, so 1 broker pod is running on each worker node.
```bash=
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-onvif 3 62s
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-onvif-029957 akri-onvif true ["kube-03","kube-02"] 48s
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-gnwb5 1/1 Running 0 62s
pod/akri-agent-daemonset-zn2gb 1/1 Running 0 62s
pod/akri-controller-deployment-56b9796c5-wqdwr 1/1 Running 0 62s
pod/akri-onvif-discovery-daemonset-wcp2f 1/1 Running 0 62s
pod/akri-onvif-discovery-daemonset-xml6t 1/1 Running 0 62s
pod/akri-webhook-configuration-75d9b95fbc-wqhgw 1/1 Running 0 62s
pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 48s
pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 48s
```
Dump the environment variables from the broker container to find the device uuid. In the example below, the Onvif discovery handler discovers the camera and exposes the device's uuid. **Write down the device uuid for later use**. Note that in real production scenarios, the device uuids are acquired directly from the vendors or are already known before installing the Akri Configuration.
```bash=
$ kubectl exec -i kube-02-akri-onvif-029957-pod -- /bin/sh -c "printenv|grep ONVIF_DEVICE"
ONVIF_DEVICE_SERVICE_URL_029957=http://192.168.1.145:2020/onvif/device_service
ONVIF_DEVICE_UUID_029957=3fa1fe68-b915-4053-a3e1-ac15a21f5f91
```
#### Set up Kubernetes secrets
Now we can store the credential information in a Kubernetes Secret. Replace the device uuid and the username/password values with your camera's information.
```bash
cat > /tmp/onvif-auth-secret.yaml<< EOF
---
apiVersion: v1
kind: Secret
metadata:
name: onvif-auth-secret
type: Opaque
stringData:
device_credential_list: |+
[ "credential_list" ]
credential_list: |+
{
"3fa1fe68-b915-4053-a3e1-ac15a21f5f91" :
{
"username" : "camuser",
"password" : "HappyDay"
}
}
EOF
# add the secret to cluster
kubectl apply -f /tmp/onvif-auth-secret.yaml
```
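You can confirm the Secret was created and holds both keys before continuing (this lists the key names and sizes without printing the values):
```bash
kubectl describe secret onvif-auth-secret
```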
#### Upgrade the Akri configuration
Upgrade the Akri Configuration to include the secret information and the sample video broker container.
```bash
helm upgrade akri akri-helm-charts/akri-dev \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.discovery.enabled=true \
--set onvif.configuration.enabled=true \
--set onvif.configuration.capacity=3 \
--set onvif.configuration.discoveryProperties[0].name=device_credential_list \
--set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.name=onvif-auth-secret \
    --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.namespace=default \
    --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.key=device_credential_list \
    --set onvif.configuration.discoveryProperties[0].valueFrom.secretKeyRef.optional=false \
--set onvif.configuration.brokerPod.image.repository="ghcr.io/project-akri/akri/onvif-video-broker" \
--set onvif.configuration.brokerPod.image.tag="latest-dev" \
--set onvif.configuration.brokerPod.image.pullPolicy="Always" \
--set onvif.configuration.brokerProperties.CREDENTIAL_DIRECTORY="/etc/credential_directory" \
--set onvif.configuration.brokerProperties.CREDENTIAL_CONFIGMAP_DIRECTORY="/etc/credential_cfgmap_directory" \
--set onvif.configuration.brokerPod.volumeMounts[0].name="credentials" \
--set onvif.configuration.brokerPod.volumeMounts[0].mountPath="/etc/credential_directory" \
--set onvif.configuration.brokerPod.volumeMounts[0].readOnly=true \
--set onvif.configuration.brokerPod.volumes[0].name="credentials" \
--set onvif.configuration.brokerPod.volumes[0].secret.secretName="onvif-auth-secret"
```
With the secret information, the Onvif discovery handler is able to discover the Onvif camera, and the video broker is up and running.
```bash=
$ kubectl get nodes,akric,akrii,pods
NAME STATUS ROLES AGE VERSION
node/kube-01 Ready control-plane 22d v1.26.1
node/kube-02 Ready <none> 22d v1.26.1
node/kube-03 Ready <none> 22d v1.26.1
NAME CAPACITY AGE
configuration.akri.sh/akri-onvif 3 18m
NAME CONFIG SHARED NODES AGE
instance.akri.sh/akri-onvif-029957 akri-onvif true ["kube-03","kube-02"] 22s
NAME READY STATUS RESTARTS AGE
pod/akri-agent-daemonset-bq494 1/1 Running 0 18m
pod/akri-agent-daemonset-c2rng 1/1 Running 0 18m
pod/akri-controller-deployment-56b9796c5-rtm5q 1/1 Running 0 18m
pod/akri-onvif-discovery-daemonset-rbgwq 1/1 Running 0 18m
pod/akri-onvif-discovery-daemonset-xwjlp 1/1 Running 0 18m
pod/akri-webhook-configuration-75d9b95fbc-cr6bc 1/1 Running 0 18m
pod/kube-02-akri-onvif-029957-pod 1/1 Running 0 22s
pod/kube-03-akri-onvif-029957-pod 1/1 Running 0 22s
# dump the logs from sample video broker
$ kubectl logs kube-02-akri-onvif-029957-pod
[Akri] ONVIF request http://192.168.1.145:2020/onvif/device_service http://www.onvif.org/ver10/device/wsdl/GetService
[Akri] ONVIF media url http://192.168.1.145:2020/onvif/service
[Akri] ONVIF request http://192.168.1.145:2020/onvif/service http://www.onvif.org/ver10/media/wsdl/GetProfiles
[Akri] ONVIF profile list contains: profile_1
[Akri] ONVIF profile list contains: profile_2
[Akri] ONVIF profile list profile_1
[Akri] ONVIF request http://192.168.1.145:2020/onvif/service http://www.onvif.org/ver10/media/wsdl/GetStreamUri
[Akri] ONVIF streaming uri list contains: rtsp://192.168.1.145:554/stream1
[Akri] ONVIF streaming uri rtsp://192.168.1.145:554/stream1
[VideoProcessor] Processing RTSP stream: rtsp://----:----@192.168.1.145:554/stream1
info: Microsoft.Hosting.Lifetime[0]
Now listening on: http://[::]:8083
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
Ready True
Adding frame from rtsp://----:----@192.168.1.145:554/stream1, Q size: 1, frame size: 862986
Adding frame from rtsp://----:----@192.168.1.145:554/stream1, Q size: 2, frame size: 865793
Adding frame from rtsp://----:----@192.168.1.145:554/stream1, Q size: 2, frame size: 868048
Adding frame from rtsp://----:----@192.168.1.145:554/stream1, Q size: 2, frame size: 869655
Adding frame from rtsp://----:----@192.168.1.145:554/stream1, Q size: 2, frame size: 871353
```
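To double-check that the credentials actually reached the broker, you can list the mounted Secret volume and look for the `discoveryProperties` entry in the Configuration. A quick check, using the pod name from the output above:
```bash
# the Secret keys should show up as files under the mount path set in the Helm values
kubectl exec -i kube-02-akri-onvif-029957-pod -- ls /etc/credential_directory
# the Configuration should reference onvif-auth-secret in its discoveryProperties
kubectl get akric akri-onvif -o yaml | grep -A 6 discoveryProperties
```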
#### Deploying the sample video streaming application
Deploy the sample video streaming application following the instructions described in step 4 of the [camera demo](https://docs.akri.sh/demos/usb-camera-demo#inspecting-akri).
Deploy a video streaming web application that points to both the Configuration-level and Instance-level services that were automatically created by Akri.
Copy and paste the contents into a file and save it as `/tmp/akri-video-streaming-app.yaml`
```bash
cat > /tmp/akri-video-streaming-app.yaml<< EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: akri-video-streaming-app
spec:
replicas: 1
selector:
matchLabels:
app: akri-video-streaming-app
template:
metadata:
labels:
app: akri-video-streaming-app
spec:
serviceAccountName: akri-video-streaming-app-sa
containers:
- name: akri-video-streaming-app
image: ghcr.io/project-akri/akri/video-streaming-app:latest-dev
imagePullPolicy: Always
securityContext:
runAsUser: 1000
allowPrivilegeEscalation: false
runAsNonRoot: true
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
env:
# Streamer works in two modes; either specify the following commented
# block of env vars to explicitly target cameras (update the <id>s for
# your specific cameras) or
# specify a Akri configuration name to pick up cameras automatically
# - name: CAMERAS_SOURCE_SVC
# value: "akri-udev-video-svc"
# - name: CAMERA_COUNT
# value: "2"
# - name: CAMERA1_SOURCE_SVC
# value: "akri-udev-video-<id>-svc"
# - name: CAMERA2_SOURCE_SVC
# value: "akri-udev-video-<id>-svc"
- name: CONFIGURATION_NAME
value: akri-onvif
---
apiVersion: v1
kind: Service
metadata:
name: akri-video-streaming-app
namespace: default
labels:
app: akri-video-streaming-app
spec:
selector:
app: akri-video-streaming-app
ports:
- name: http
port: 80
targetPort: 5000
type: NodePort
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: akri-video-streaming-app-sa
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: akri-video-streaming-app-role
rules:
- apiGroups: [""]
resources: ["services"]
verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: akri-video-streaming-app-binding
roleRef:
apiGroup: ""
kind: ClusterRole
name: akri-video-streaming-app-role
subjects:
- kind: ServiceAccount
name: akri-video-streaming-app-sa
namespace: default
EOF
```
Deploy the video stream app
```bash
kubectl apply -f /tmp/akri-video-streaming-app.yaml
```
Determine which port the service is running on. **Save this port number for the next step**:
```bash
kubectl get service/akri-video-streaming-app --output=jsonpath='{.spec.ports[?(@.name=="http")].nodePort}' && echo
```
SSH port forwarding can be used to access the streaming application. In a new terminal, enter your ssh command to access your machine, followed by the port forwarding request. The following command will use port 50000 on the host; feel free to change it if it is not available. Be sure to replace `<streaming-app-port>` with the port number output in the previous step.
```bash=
ssh someuser@<machine IP address> -L 50000:localhost:<streaming-app-port>
```
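Alternatively, if kubectl on your workstation can reach the cluster, port forwarding through the API server works as well (a sketch; adjust the local port if 50000 is taken):
```bash
kubectl port-forward service/akri-video-streaming-app 50000:80
```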
Navigate to http://localhost:50000/ in your browser. The large feed points to the Configuration-level service, while the bottom feeds point to the service for each Instance (camera).
#### Clean up
Close the page http://localhost:50000/ from the browser
Delete the sample streaming application resources
```bash
kubectl delete -f /tmp/akri-video-streaming-app.yaml
```
Delete the Secret information
```bash
kubectl delete -f /tmp/onvif-auth-secret.yaml
```
Delete the Akri installation to clean up the system.
```bash
helm delete akri
kubectl delete crd configurations.akri.sh
kubectl delete crd instances.akri.sh
```
### Scenario D: Documentation Walkthrough
It would be great to walk through the documentation during the bug bash and note which doc changes we would need to make. There are also some pending documentation PRs that go with the release.
## Discovered Issues or Enhancements
## Scenario Outcomes
Please write the environment you used (Kubernetes distro/version and VM), the scenarios you tested, and whether it was a success or had issues.
| Environment | Scenario | Success/Issue |
|--------------|-----------|---------------|
| N/A | Documentation Walkthrough | Created issues [#82](https://github.com/project-akri/akri-docs/issues/82) [#83](https://github.com/project-akri/akri-docs/issues/83) [#84](https://github.com/project-akri/akri-docs/issues/84) [#85](https://github.com/project-akri/akri-docs/issues/85) [#87](https://github.com/project-akri/akri-docs/issues/87) and PR [#86](https://github.com/project-akri/akri-docs/pull/86) |
| k3s 1.26.4 1 node cluster | Scenario A, B, C | Success |
| k8s 1.25.7 single node cluster | Scenario A: Configuration level resources | Success |
| | | |
| | | |