# Benchmarking workflow
## High level overview

## Running benchmark-operator
`benchmark-operator` does not require an Elasticsearch backend; however, one is highly recommended to take advantage of the full functionality of the cloud-bulldozer ecosystem.
**Note:** The examples we will work through here assume you do have Elasticsearch.
1) Deploy Cluster `[eks/gke/aks/openshift/etc]`
2) Install Prometheus `helm install prometheus prometheus-community/prometheus --namespace prometheus --create-namespace`
3) `git clone https://github.com/cloud-bulldozer/benchmark-operator`
4) `cd benchmark-operator`
5) `cd chart/benchmark-operator`
6) `helm install benchmark-operator . -n benchmark-operator --create-namespace`
7) Check that all pods are running with `kubectl get pods -A`; you should see pods in the `prometheus` and `benchmark-operator` namespaces.
8) Capture the Prometheus endpoint address `kubectl get endpoints -n prometheus`
9) Capture the Prometheus token (note the value is base64-encoded) `kubectl get secrets -n prometheus $(kubectl get secrets -n prometheus | grep server-token | awk '{print $1}') -o go-template='{{index .data "token"}}'`
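The value printed by the command above comes straight out of the Secret's data, so it is still base64-encoded; decode it before pasting it into the CR's `prom_token` field. A minimal sketch of the round trip (the string below is a stand-in, not a real token):

```python
import base64

# Stand-in for the base64 value printed by the kubectl go-template command above
encoded = base64.b64encode(b"example-service-account-token").decode()

# Decode before using the value as prom_token in the Benchmark CR
token = base64.b64decode(encoded).decode()
print(token)
```

Equivalently, you can pipe the kubectl output through `base64 -d` on the command line.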
## Running Network data-plane benchmark
Continuing from the setup above:
10) To execute a network data-plane (stream) benchmark, use the Benchmark CR below
```yaml
---
apiVersion: ripsaw.cloudbulldozer.io/v1alpha1
kind: Benchmark
metadata:
  name: uperf
  namespace: benchmark-operator
spec:
  metadata:
    collection: true
    targeted: false
  elasticsearch:
    url: https://user:pass@es.server.com:9200
  system_metrics:
    collection: true
    es_url: https://user:pass@es.server.com:9200
    prom_url: http://<prom endpoint>:9090
    prom_token: <token from prom>
    metrics_profile: https://gist.githubusercontent.com/jtaleric/c2a9c4974cd82c358ce4e1b5fd2d40f3/raw/f735fc9c5c58e3d4657689106d4c7db9400eb474/rook-node-v2.yaml # non-OpenShift-specific collection
  workload:
    name: uperf # benchmark name
    args:
      run_id: "rook-eks-cilium-01" # name the test
      hostnetwork: false # if true, bypass the pod network and pass the host NIC to the pod
      serviceip: false # if true, place the server behind a service
      debug: false
      pin: true # pin the pods to specific nodes
      pin_server: "<server>" # node to pin the server pod to
      pin_client: "<client>" # node to pin the client pod to
      samples: 3 # number of samples to collect
      pair: 1 # pairs of servers/clients to run concurrently
      nthrs: # thread counts to iterate over, e.g. [1, 4, 8]
        - 1
      protos: # protocols to test [tcp, udp]
        - tcp
        - udp
      test_types: # tests to execute [stream, rr]
        - stream
      sizes: # message sizes in bytes
        - 1024
        - 16384
      runtime: 30 # seconds per sample
```
Once you have decided on your workload profile, save the YAML to a file.
11) Run `kubectl create -f <YAML file>`
12) You can monitor the progress by running `kubectl get benchmarks -n benchmark-operator -w`
13) While monitoring, note the UUID; we will use it for the result analysis
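The UUID can also be pulled programmatically from the JSON form of the CR (`kubectl get benchmark uperf -n benchmark-operator -o json`). The sketch below runs against a hard-coded sample document; the `status.state` and `status.uuid` field names match what benchmark-operator reports, but verify against your version, and the UUID shown is invented for illustration:

```python
import json

# Trimmed sample of what `kubectl get benchmark uperf -n benchmark-operator -o json`
# might return once the workload is running (the uuid here is a placeholder)
sample = '''
{
  "kind": "Benchmark",
  "metadata": {"name": "uperf", "namespace": "benchmark-operator"},
  "status": {"state": "Running", "uuid": "6c5d0256-57b1-54f2-9cbf-8a9b6c5d0256"}
}
'''

doc = json.loads(sample)
status = doc.get("status", {})
print(f"state={status.get('state')} uuid={status.get('uuid')}")
```

In a live cluster you would feed the kubectl output into `json.loads` instead of the sample string, or simply use `-o jsonpath='{.status.uuid}'`.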
## Result Analysis
1) `benchmark-comparison` allows us to query our Elasticsearch server for any given benchmark and perform A/B comparisons of the result data, as well as of the system metrics captured during the workload (if enabled). `git clone https://github.com/cloud-bulldozer/benchmark-comparison`
2) `cd benchmark-comparison`
3) `python3 -m venv test`
4) `source test/bin/activate`
5) Install benchmark-comparison: `python3 setup.py develop`
6) To capture the results of a uperf benchmark, run `touchstone_compare -url https://user:pass@es-server:9200 -u <uuid> -a cilium --config config/uperf.json`
Example output from the above command:
```
+-----------+----------+--------------+-------------+----------------+--------------------+
| test_type | protocol | message_size | num_threads | metric | cilium |
+-----------+----------+--------------+-------------+----------------+--------------------+
| stream | tcp | 64 | 1 | avg(norm_byte) | 42664174.722222224 |
| stream | tcp | 1024 | 1 | avg(norm_byte) | 346032628.3111111 |
| stream | tcp | 16384 | 1 | avg(norm_byte) | 550649671.6966292 |
| stream | udp | 1024 | 1 | avg(norm_byte) | 103886210.84444444 |
| stream | udp | 64 | 1 | avg(norm_byte) | 6593835.146067415 |
| stream | udp | 16384 | 1 | avg(norm_byte) | 844857712.1797752 |
+-----------+----------+--------------+-------------+----------------+--------------------+
```
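The `avg(norm_byte)` values are easier to compare once converted to throughput. Assuming `norm_byte` is bytes per second (a reasonable reading of uperf stream results, but confirm against your metrics profile), a quick conversion to Gbit/s:

```python
# Convert avg(norm_byte) (assumed to be bytes/sec) to Gbit/s for easier reading
def bytes_per_sec_to_gbps(norm_byte: float) -> float:
    return norm_byte * 8 / 1e9

# e.g. the tcp/16384 row from the table above
print(round(bytes_per_sec_to_gbps(550649671.6966292), 2))  # ~4.41 Gbit/s
```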