# Locust
Locust is an open source load testing tool that can run on Kubernetes clusters.
It consists of a master and a set of worker pods installed via Helm, plus a web UI where you can run the tests.
The following instructions have been copied from [this Amazon blog](https://aws.amazon.com/blogs/containers/load-testing-your-workload-running-on-amazon-eks-with-locust/). However, because we don't need to do this testing from outside the cluster, instead of creating a new cluster we create the Locust resources inside our service's cluster. This means we skip most of the EKS instructions.
## How to create
These instructions assume that you are already running an EKS cluster with some Helm resources. If you do not have this, please check out the blog instructions. In particular, make sure you have these [addon charts](https://github.com/aws-samples/Load-testing-your-workload-running-on-Amazon-EKS-with-Locust/tree/main/groundwork/install-addon-chart).
### Helm chart repo setting
First, you will need to add the locust helm chart repo.
```bash=
helm repo add deliveryhero "https://charts.deliveryhero.io/"
helm repo update deliveryhero
```
You can check it was added with this:
`helm repo list | egrep "NAME|deliveryhero"`
### Write a Locust file
```python=
# Save as locustfile.py in the root of the repository (it may already exist).
from locust import HttpUser, task, between

default_headers = {'Content-type': 'application/json'}


class WebsiteUser(HttpUser):
    wait_time = between(1, 5)

    @task(1)
    def post_task_timeout(self):
        self.client.post(
            "/private/api/v2/algo/tmc-flow-duration/0/our-client/our-tenant",
            json={
                "flow-id-list": ["61036df86748803278515bf1", "622c297e0a37b609652857f5"],
                "expires-after": "2022-06-15 19:15:55.757",
            },
            headers=default_headers,
        )
```
This example locust file will hit `smart-flow`'s `tmc_flow_duration` route, but you can replace the route with whatever you'd like.
Just make sure that it includes whatever prefix your Gloo RouteTable is matching on. In our case, it is `private` or `public`.
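Before pointing Locust at a route, it can be worth hitting it once by hand to confirm the prefix matches. A minimal sketch, reusing the route and payload from the locustfile above; `<your-service-host>` is a placeholder for your Gloo gateway or service address:
```bash=
# Sanity-check the route once before load testing it
# <your-service-host> is a placeholder, not a real hostname
curl -X POST "http://<your-service-host>/private/api/v2/algo/tmc-flow-duration/0/our-client/our-tenant" \
  -H "Content-type: application/json" \
  -d '{"flow-id-list": ["61036df86748803278515bf1", "622c297e0a37b609652857f5"], "expires-after": "2022-06-15 19:15:55.757"}'
```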
### Install and configure Locust with locustfile.py
#### Create the configmap
```bash=
# Create ConfigMap 'loadtest-locustfile' from the locustfile.py created above
kubectl create configmap loadtest-locustfile --from-file ./locustfile.py -n <your-stack-name>
```
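You can confirm the ConfigMap actually holds your locustfile before installing the chart:
```bash=
# Print the ConfigMap and eyeball the embedded locustfile.py
kubectl get configmap loadtest-locustfile -n <your-stack-name> -o yaml
```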
#### Installing the helm chart
```bash=
# Install Locust Helm Chart
helm upgrade --install awsblog-locust -n <your-stack-name> deliveryhero/locust \
--set loadtest.name=eks-loadtest \
--set loadtest.locust_locustfile_configmap=loadtest-locustfile \
--set loadtest.locust_locustfile=locustfile.py \
--set worker.hpa.enabled=true \
--set worker.hpa.minReplicas=5
```
This will prefix your Locust pod names with `awsblog-locust`, but you can set the release name to whatever you'd like.
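Once the release is installed, check that the master and worker pods came up (the `awsblog-locust` prefix assumes the release name used above):
```bash=
# Confirm the release installed and its pods are running
helm list -n <your-stack-name>
kubectl get pods -n <your-stack-name> | grep awsblog-locust
```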
### Accessing the UI
You can port-forward the Locust service to a local port to access the Locust dashboard without creating an Ingress resource.
```bash=
# Port forwarding from local to locust-service resource
kubectl --namespace <your-stack-name> port-forward service/awsblog-locust 8089:8089
```
If you are shelling into a remote machine, you will need to set up an ssh tunnel on your local machine like so:
```bash=
ssh -i ~/.ssh/<your_ssh_private_key> -N -L 8089:localhost:8089 <ip_address_of_remote_machine_or_alias>
```
With the port-forward and the ssh tunnel, you will be able to access the UI through your browser with address: [http://localhost:8089](http://localhost:8089)
### Checking the pods
Once everything is up, check the pods in k9s to make sure the master and workers are running.
Check the master pod's logs first. If you are getting errors like this:
`Discarded report from unrecognized worker awsblog-locust-worker-dc8d5d6f5`
Delete and restart all the worker pods.
Then, after they have all come back up, delete and restart the master pod. This worked for me; if it doesn't, make sure you created the ConfigMap correctly.
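A sketch of that restart sequence, assuming the chart created Deployments named `awsblog-locust-worker` and `awsblog-locust-master` (check `kubectl get deployments -n <your-stack-name>` if yours differ):
```bash=
# Restart the workers first and wait for them to come back
kubectl rollout restart deployment/awsblog-locust-worker -n <your-stack-name>
kubectl rollout status deployment/awsblog-locust-worker -n <your-stack-name>
# Then restart the master so it re-registers the workers
kubectl rollout restart deployment/awsblog-locust-master -n <your-stack-name>
```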
### How to test
After you've opened up the UI, it should be pretty straightforward: you can choose how many users to spawn at a time and also ramp up the number of users over a period of time.
### Changing the locust file
You can either edit the file locally, then delete and reinstall the Helm chart, or edit the ConfigMap in place, which is where the locustfile lives.
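A sketch of the ConfigMap route, reusing the names from earlier (the Deployment names are assumptions, as above); note that Locust only reads the locustfile at startup, so the pods need a restart afterwards:
```bash=
# Re-render the ConfigMap from the edited locustfile and apply it
kubectl create configmap loadtest-locustfile --from-file ./locustfile.py -n <your-stack-name> \
  --dry-run=client -o yaml | kubectl apply -f -
# Restart the pods so Locust picks up the new file
kubectl rollout restart deployment/awsblog-locust-worker deployment/awsblog-locust-master -n <your-stack-name>
```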
### Adding replicas
To see how your service fares with more replicas, you can edit its HelmRelease. You can either live-edit it in k9s or change it in application-flux, which requires a push to the repo.
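For the live-edit route, a hedged example; `<your-service>` is a placeholder for your service's HelmRelease name, and the exact replica field depends on your chart's values:
```bash=
# Open the HelmRelease for editing; adjust the replica count under spec.values
kubectl edit helmrelease <your-service> -n <your-stack-name>
```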