## Sidecar Benchmarks vs Rate Limit Questions
Regarding the Sidecar benchmarks and the [devops questions about rate limits](https://github.com/paritytech/devops/issues/2812), we have to answer three questions:
1. RPS - the maximum number of requests per second the application can handle
2. Number of Expected Requests - the number of requests we expect in practice
3. Request Size - are all requests the same size? We could configure 2-3 rate limits for different paths (not tested yet).
### Data used & Other Files
- I used the results from Sidecar Benchmarks run by Tarik on `30/08/2023`.
- Based on this info I created a [Google Sheet](https://docs.google.com/spreadsheets/d/1FUrLxdUy9gKFehMkgYh_jT5muXR7cooxhPlafI-mH_U/edit?usp=sharing) that shows the results and some grouped information.
### 1. RPS
Based on the benchmark results, the application can handle a different number of requests per second depending on the endpoint. To get a better view:
- I created the sheet `1. RPS per endpoint` (in the same [Google Sheet](https://docs.google.com/spreadsheets/d/1FUrLxdUy9gKFehMkgYh_jT5muXR7cooxhPlafI-mH_U/edit?usp=sharing)) and grouped the endpoints by `Resource Intensity`
- *since the values are not normally distributed, I quickly created some roughly equal ranges*
- Endpoints marked with 5 are the most resource intensive, since their RPS result from the benchmarks (*Requests/sec, column named "Value"*) is low (my assumption being that a low value means they can handle only a few requests per second).
- Endpoints marked with 4 are the least resource intensive, since their RPS result from the benchmarks is high (my assumption being that a high value means they can handle more requests per second).
- I am a little confused, though, about why endpoints like `/transaction/material` or `/runtime/spec` have a very low RPS, since the information they return requires no computation and should not be intensive. Maybe my assumptions above are wrong?
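The grouping above can be sketched roughly as follows; the bucket edges and the 1-5 scale here are made-up illustrations, not the actual ranges used in the sheet:

```python
# Hypothetical sketch of the Resource Intensity grouping: a lower benchmark
# RPS maps to a more resource-intensive endpoint. The bucket edges below are
# assumed equal-width ranges, not the ones used in the Google Sheet.
def intensity(rps: float) -> int:
    """Map an endpoint's requests-per-second to a 1 (cheap) .. 5 (expensive) score."""
    edges = [100, 200, 300, 400]          # assumed bucket boundaries in req/s
    bucket = sum(rps > e for e in edges)  # 0..4, higher = faster endpoint
    return 5 - bucket                     # invert so slow endpoints score 5

print(intensity(12.5))   # a very low RPS endpoint scores 5 (most intensive)
print(intensity(450.0))  # a high RPS endpoint scores 1 (least intensive)
```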
#### Caution
- An interesting metric from the benchmarks is `Stdev` (and `+/- Stdev`), which I started to check, but I realised it would take a lot of time to get meaningful results, so I stopped analysing it.
- Still, it is worth mentioning that in many cases both the `Stdev` itself and the percentage of values deviating from the mean (`+/- Stdev`) are quite high (>90%), which (if I understand correctly) means we have outliers, i.e. spikes in the API traffic.
- I am not sure whether this should affect the rate limit we set.
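One quick way to make the `Stdev` observation actionable is the coefficient of variation (stdev divided by mean): a high ratio suggests spiky behaviour. A sketch, where the 0.9 threshold and the sample values are assumptions for illustration:

```python
# Hedged sketch: flag a "spiky" endpoint via the coefficient of variation.
# The 0.9 threshold loosely mirrors the ">90%" observation above but is an
# arbitrary cut-off, and the sample values below are made up.
import statistics

def is_spiky(samples: list[float], threshold: float = 0.9) -> bool:
    """True if the per-run values vary wildly around their mean."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return stdev / mean > threshold

steady = [100, 105, 98, 102]   # low variance -> not spiky
spiky = [10, 400, 15, 380]     # huge swings -> spiky
print(is_spiky(steady), is_spiky(spiky))
```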
#### Note
- The benchmarks also include a `Latency` metric, but after checking some of its values I decided not to use it, since for a quick analysis I think RPS and latency are roughly "interchangeable". As I understand it:
- RPS measures performance while keeping time constant (per second)
- Latency (the amount of time it takes for requested information to move from the API server to the party making the request) also measures performance, but keeping the request constant.
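The two metrics can be related concretely via Little's law: in a closed-loop benchmark with a fixed number of concurrent connections, throughput is approximately connections divided by mean latency. A sketch (the connection count is an assumption, not taken from the actual benchmark setup):

```python
# Little's law sketch relating throughput and latency. With C concurrent
# connections each waiting mean_latency_s per request, sustained RPS is
# about C / mean_latency_s. The numbers below are illustrative only.
def expected_rps(connections: int, mean_latency_s: float) -> float:
    """Throughput implied by Little's law for a closed-loop benchmark."""
    return connections / mean_latency_s

# e.g. 10 connections with 50 ms mean latency sustain about 200 req/s
print(round(expected_rps(10, 0.050)))  # 200
```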
### 2. Number of Expected Requests
For this, the numbers should probably be taken from the Grafana dashboards that the devops team shared (see [here](https://github.com/paritytech/devops/issues/2812#issuecomment-1683529526)). I am not sure how to calculate this myself, since the devops team has a better view of the current usage.
### 3. Request Size
- The benchmarks also include a `Transfer/sec` metric, but for `Request Size` I think it is better to use the total data read, e.g. this line from the results:
> 14811 requests in 30.10s, 1.83GB read
- The results are in the tab `2. Request Size` and are, again, grouped by the amount of data read per request.
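As a sanity check, the average response size per request can be derived from such a summary line; a sketch assuming the line follows the format quoted above (and treating `GB` as 1024³ bytes, which is an assumption about the benchmark tool's units):

```python
# Hypothetical parser for a benchmark summary line such as
# "14811 requests in 30.10s, 1.83GB read", estimating the average
# bytes transferred per request.
import re

def avg_bytes_per_request(summary: str) -> float:
    """Return total bytes read divided by request count."""
    m = re.search(r"(\d+) requests in [\d.]+s, ([\d.]+)(KB|MB|GB) read", summary)
    requests = int(m.group(1))
    size = float(m.group(2))
    unit = {"KB": 1024, "MB": 1024**2, "GB": 1024**3}[m.group(3)]
    return size * unit / requests

line = "14811 requests in 30.10s, 1.83GB read"
print(f"{avg_bytes_per_request(line) / 1024:.0f} KiB per request")  # ~130 KiB
```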