Load/Stress Testing Review for Comdex
===

###### tags: `Stress Test` `Load Test`

:::info
### Tools Used
- **Dell Vostro** (16 GB RAM / dual core)
- **[k6](https://k6.io)** for writing test cases

### Endpoints Tested
- getCounterPartyReports **[GET]**
- getTeamReports **[GET]**
- getCounterPartiesOrders **[GET]**
- getAssetReportsURL **[GET]**
- postCommodityDetails **[POST, JSON]**
- fileUploadReq **[POST, multipart upload]**
:::

## Load Test Results

### Test Scene 1

![](https://i.imgur.com/taL63U6.png)

==NOTE== The file upload request was misconfigured at this point, so none of those requests succeeded.

:::success
The image above translates to:

Min concurrent requests: **4**
Max concurrent requests: **300**
Total time of action: **2m30s**
Success rate: **91%**
:::

Our load-testing environment replicated a real scenario: request counts and frequency were set to realistic values. The test ran in four stages:

* Stage 1 - 30s - 20 virtual users
* Stage 2 - 1m - 40 virtual users
* Stage 3 - 1m20s - 80 virtual users
* Stage 4 - 1m30s - 100 virtual users

The environment started from 4 VUs (virtual users) and slowly ramped up to 20 VUs over the first 30 seconds. For the next minute it doubled the load to 40 VUs, and kept progressing through the stages listed above. This was a very mild test, so a high score was expected in this scene.

### Test Case 2

![](https://i.imgur.com/nJNSZ26.png)

:::success
The image above translates to:

Min concurrent requests: **4**
Max concurrent requests: **500**
Total time of action: **4m30s**
Success rate: **93%**
:::

Our load-testing environment again replicated a real scenario; request counts and frequency were kept realistic, but much higher than in the previous test.
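As an aside, staged ramps like these are what k6 expresses through `options.stages`. The snippet below (a sketch, not the actual test script) shows Test Scene 1's four-stage ramp as a plain data structure in the same shape k6 expects, plus a small `toSeconds` helper of our own for converting k6-style duration strings:

```javascript
// Test Scene 1's ramp in the shape of k6's options.stages.
// k6 interpolates the VU count linearly within each stage toward `target`.
const stages = [
  { duration: '30s',   target: 20 },
  { duration: '1m',    target: 40 },
  { duration: '1m20s', target: 80 },
  { duration: '1m30s', target: 100 },
];

// Helper (ours, not part of k6) turning duration strings into seconds.
function toSeconds(d) {
  const m = d.match(/^(?:(\d+)m)?(?:(\d+)s)?$/);
  return Number(m[1] || 0) * 60 + Number(m[2] || 0);
}

console.log(stages.map((s) => toSeconds(s.duration))); // [ 30, 60, 80, 90 ]
```

In an actual k6 script, this `stages` array would sit inside `export const options = { stages }`, with the request logic in the exported default function.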
In this run, the test had five stages:

* Stage 1 - 30s - 150 virtual users
* Stage 2 - 30s - 400 virtual users
* Stage 3 - 1m - 500 virtual users
* Stage 4 - 1m - 200 virtual users
* Stage 5 - 1m30s - 100 virtual users

The environment started from 4 VUs (virtual users) and ramped up to 150 VUs over the first 30 seconds. For the next 30 seconds it increased the load 2.25x, to 400 VUs, and kept progressing through the stages listed above. This was a very aggressive test for the instance, since this particular API backend is served from a 4 GB / dual-core machine. I was expecting a success rate of around 70-80%, but we got around **93%**.

:::info
**FROM THE ABOVE STATS, WE CAN REASONABLY CONCLUDE THAT OUR CURRENT DEPLOYMENT CAN HANDLE AROUND 100-150 CONCURRENT USERS WITH A VERY LOW ERROR RATE.**
:::
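As a sanity check on the Test Case 2 numbers, summing the five stage durations reproduces the reported total time of action (4m30s = 270s). A small self-contained sketch (the `toSeconds` helper is ours, for illustration; the stage list mirrors the run description):

```javascript
// Test Case 2's ramp in the shape of k6's options.stages.
const stages = [
  { duration: '30s',  target: 150 },
  { duration: '30s',  target: 400 },
  { duration: '1m',   target: 500 },
  { duration: '1m',   target: 200 },
  { duration: '1m30s', target: 100 },
];

// Helper (ours, not part of k6) turning duration strings into seconds.
function toSeconds(d) {
  const m = d.match(/^(?:(\d+)m)?(?:(\d+)s)?$/);
  return Number(m[1] || 0) * 60 + Number(m[2] || 0);
}

const total = stages.reduce((sum, s) => sum + toSeconds(s.duration), 0);
console.log(total); // 270, i.e. 4m30s — matches the reported total
```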