# Deliverable 4

# Test Plan

We designed the test plan to simulate the behaviour of a user interacting with the system as follows. Each **virtual user** starts by visiting the home page of the application and logging into the system once. Then, at each iteration, the user performs different actions with different likelihoods:

1. Navigate to the accounts page 100% of the time;
2. Request a new account type 2% of the time;
3. Navigate to the transactions page 100% of the time;
4. Create a new transaction 50% of the time, such that if the Chequing account has a balance of more than $50.00, $50.00 is transferred from the Chequing account to the Savings account; otherwise, 100% of the money in the Savings account is transferred to the Chequing account.

## Overview

![](https://i.imgur.com/WBhykN3.png)

> The entire test plan used for testing. Some elements, such as the results tree, were disabled for the actual test runs.

## Preparation

Before creating the JMeter test plan, the users required for the test were generated and added to the CSV files that populate the database for testing. This was done using a Python script and a public web API that generates random user data. The Jupyter Notebook is attached to this report as a separate file (`seng426_users_generation.ipynb`).

The PerfMon (Performance Monitor) plugin for JMeter was also installed to collect system metrics such as CPU and RAM usage.

An _HTTP Header Manager_ and _HTTP Request Defaults_ element apply to all iterations/users of the test plan. These contain information that simplifies subsequent requests, such as the IP/port for requests, as well as header data to be sent with each request, such as the Auth Token.

## User Thread

Between steps, a timer was added to simulate think time, randomized over a range of a few seconds to reflect the variation in time taken by real users. To manage the ramp-up of users, a _Thread Group_ was used, with a target of 200 threads and a ramp-up period of 1 second.
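The per-iteration user behaviour described at the top of this plan can be sketched in Python. This is a minimal illustration only; the function and action names are ours, not part of the JMeter plan:

```python
import random

def run_iteration(chequing, savings, p_account=0.02, p_txn=0.50):
    """Simulate one iteration of a virtual user's session.

    chequing/savings are the current balances; p_account and p_txn are
    the likelihoods of the optional actions from the test plan.
    """
    actions = ["view_accounts"]              # 100% of iterations
    if random.random() < p_account:          # 2% of iterations
        actions.append("request_new_account_type")
    actions.append("view_transactions")      # 100% of iterations
    if random.random() < p_txn:              # 50% of iterations
        if chequing > 50.00:
            # transfer $50 from Chequing to Savings
            chequing -= 50.00
            savings += 50.00
            actions.append("transfer_$50_to_savings")
        else:
            # transfer 100% of the Savings balance to Chequing
            chequing += savings
            savings = 0.0
            actions.append("transfer_all_savings_to_chequing")
    return actions, chequing, savings
```

In the actual plan, each virtual user logs in once and then repeats the equivalent of this iteration, with randomized think time between steps.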
For the Stress Test, this was increased to 500 threads.

### Login Request

An _HTTP Request_ element issues a POST request to the _authenticate_ API endpoint. To enable login and capture the returned Auth Token, extra elements were used in tandem with it:

* Login Data: a _CSV Data Set Config_ that draws usernames and passwords from a CSV file containing such data, to be used in the HTTP Request
* Assert Token Returned: a _JSON Assertion_ that validates that a token was returned by the request
* Retrieve Token: a _JSON Extractor_ that pulls the Auth Token returned by the request and stores it in a JMeter variable

### Accounts View

An _HTTP Request_ element issues a GET request to the _accounts_ endpoint to retrieve all accounts belonging to the authorized user. To facilitate further steps, a number of helper elements were used:

* Assert ID Returned: a _JSON Assertion_ that verifies the step was successful by checking that information was returned by the request
* Retrieve Chequing/Savings Account ID: two _JSON Extractors_ that retrieve the Savings and Chequing account IDs from the JSON data returned by the GET, using the JSON paths `$.[?(@.accountType=="Savings")].accountID` and `$.[?(@.accountType=="Checking")].accountID`. These IDs are later used for transaction creation
* Retrieve Chequing Account Amount: a _JSON Extractor_ that extracts the balance of the Chequing account, using the same filter but selecting the balance field (e.g. `$.[?(@.accountType=="Checking")].amount`). This value is then used to decide between the $50 and 100% transaction paths

### New Account

This path is gated by a _Throughput Controller_ set to 2%.

#### Request New Account Type

An _HTTP Request_ element issues a POST with the JSON data required for a new account.

#### Assert 201 Returned

A _Response Assertion_ ensures this POST returned a 201 "Created" status.

### Transaction View Request

An _HTTP Request_ element simulates a user viewing transactions.
With this, a _JSON Assertion_ was used to ensure the step ran correctly.

### New Transaction

This path is gated by a _Throughput Controller_ set to 50%.

#### Check Branches

An _HTTP Request_ element simulates a user viewing branches. With this, a _JSON Assertion_ was used to ensure the step ran correctly.

#### Create $50 Transaction

This stage sits behind an _If Controller_ that checks whether the amount retrieved in the Accounts View stage is greater than $50. If it is, an _HTTP Request_ stage sends a POST request with JSON data to make a new transaction. Specifically, the POST body was:

```json
{"amount":"50","payeeCheckBox":[],"toAccount1":"${savID}","fromAccount":"${cheqID}","toAccount":"${savID}"}
```

where `cheqID` and `savID` are JMeter variables storing the respective account IDs.

#### Create 100% Transaction

Similar to the above, this stage sits behind an _If Controller_ that checks whether the amount is $50 or less. If it is, it does the same as above, but with the POST body:

```json
{"amount":"${cheqAmnt}","payeeCheckBox":[],"toAccount1":"${savID}","fromAccount":"${cheqID}","toAccount":"${savID}"}
```

with `cheqAmnt` being the amount in the Chequing account collected earlier.

## Data Collection

For the testing and graphing stages, the following listeners were added to collect data on the performance of the system:

* Aggregate Report: acts as a general collector for error rates and response times per stage
* Graph Results: provides basic graphs of the above data
* PerfMon Metrics Collector: collects CPU and memory usage

Post-collection, the following was also run on the JTL for this report:

* Active Threads Graph: collects the number of active threads over time

For some of these graphs, the CSV data was exported to create more readable graphs in Excel.

# Load Test

## Setup

Two systems were used to set up the load test. One system acts as the client and runs JMeter in non-GUI mode.
The system specifications are the following:

- Intel i7-7700HQ 2.80 GHz (4 cores / 8 threads) (Neptune Bank)
- Intel i5-2500K 3.30 GHz (4 cores / 4 threads) (JMeter)

Both systems are connected via an Ethernet LAN, have 16 GB of system memory and use an SSD for the system drive.

The following commands were used to run the JMeter test. This command ran the test plan:

```bash
./jmeter -n -t LOAD.jmx -l LOAD.jtl
```

The next was used to start ServerAgent on the system hosting the NeptuneBank application:

```
startAgent.bat
```

An older version of Java (8) had to be installed due to a bug with running this program on Windows.

## Response Times

All times are in milliseconds.

|     | Accounts View | Check Branches | Create $50 Transaction | Create 100% Transaction | Login   | Request New Account | Transaction View | All   |
|-----|---------------|----------------|------------------------|-------------------------|---------|---------------------|------------------|-------|
| MAX | 2061          | 934            | 1699                   | 1181                    | 2573    | 918                 | 1250             | 2573  |
| AVG | 46.98         | 17.62          | 38.57                  | 48.93                   | 1733.03 | 18.62               | 36.84            | 38.82 |
| MIN | 2             | 2              | 7                      | 8                       | 553     | 4                   | 2                | 2     |

![](https://i.imgur.com/5pRIQ0j.png)

## Error Rate

|                  | Accounts View Request | Check Branches | Create $50 Transaction | Create 100% Transaction | Login Request | Request New Account Type | Transaction View Request |
|------------------|-----------------------|----------------|------------------------|-------------------------|---------------|--------------------------|--------------------------|
| Num. Errors      | 1167                  | 573            | 4125                   | 368                     | 2             | 25                       | 1167                     |
| Num. Total Calls | 71821                 | 35821          | 28675                  | 4764                    | 200           | 1434                     | 71799                    |
| Error Rate       | 1.625%                | 1.600%         | 14.385%                | 7.725%                  | 1.000%        | 1.743%                   | 1.625%                   |

![](https://i.imgur.com/vq3dnh2.png)

## Active Threads

![](https://i.imgur.com/Pn5UrYh.png)

## CPU and Memory Usage

|     | CPU Usage % | Memory Usage % |
|-----|-------------|----------------|
| MAX | 99.82       | 58.851         |
| AVG | 31.055      | 52.376         |
| MIN | 12.266      | 50.003         |

![](https://i.imgur.com/CJBb2W5.png)

## Average System Throughput

The average system throughput can be found with the interactive response time law $X = N/(R + Z)$, where $X$ is the average throughput, $N$ is the number of concurrent users, $R$ is the average response time and $Z$ is the think time of the users. Taking $N = 200$, $R = 38.82\text{ ms}$ and $Z = 3 + 1 + 5 + 4.5 = 13.5\text{ s}$ (the sum of the mean think times of the timers between steps), $X = 200/(0.0388 + 13.5) \approx 14.8$ requests per second.

## System Behaviour Under Load

As can be seen from our response times and resource usage, once we reached our 200-user limit, the system performed fairly stably under this load for the duration of the run. There are some points, however, where we see a peak in response times, around 19 minutes into the test. While the response time was still fairly short at ~300 ms, this was a >300% increase over the regular response time. Under greater loads, this could present an issue if a similar drop in performance occurred. Also of note is the 1800 ms peak at the start, corresponding with a near-100% load on the CPU; because this occurred at the very start of the test, it may be due to system start-up.

One further item of note is the CPU usage during this test. At multiple points usage was above 90%, and it was over 70% quite often. With more user load, it is apparent that the CPU will bottleneck the system, presenting a potential performance issue in the future.

# Stress Test

## Setup

Using the existing test plan, the thread count was increased to 500 users with a timeout of 20 s on each user. Otherwise, the same setup as the load test was used.
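The load-test throughput estimate can be reproduced numerically with the interactive response time law, $X = N/(R + Z)$. This is a minimal sketch, assuming a mean think time of 13.5 s per iteration (the sum of the timer means used in the calculation above):

```python
# Interactive response time law: X = N / (R + Z)
N = 200               # concurrent virtual users
R = 38.82 / 1000      # average response time in seconds (38.82 ms, from the load-test table)
Z = 3 + 1 + 5 + 4.5   # assumed mean think time per iteration, in seconds

X = N / (R + Z)
print(f"Estimated average throughput: {X:.2f} requests/s")  # ~14.77 requests/s
```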
A distributed setup was not possible due to compatibility issues between Windows and Linux when running JMeter. As the system running the JMeter test plan against the application was multi-core, we believe it provides sufficient results; the alternatives were using low-power ARM devices, which do not reflect the performance of most desktop systems, or running virtual machines to simulate two systems.

The following changes were made to the load test plan to execute the stress test:

- Change the number of threads (users) from 200 to 500.
- Change the ramp-up period from 1 s to 2 s.
- Change the loop count from infinite (no limit) to 1.
- Change the thread lifetime from 3600 s to 20 s.

The following commands were used to run the JMeter test. This command ran the test plan:

```bash
./jmeter -n -t STRESS.jmx -l STRESS.jtl
```

The next was used to start ServerAgent on the system hosting the NeptuneBank application:

```
startAgent.bat
```

While the Windows/Linux JMeter issue prevented remote distributed testing, the following is the configuration that would have been used had the issue not been present. The following was added to `user.properties` on all systems running JMeter:

```
server.rmi.ssl.disable=true
remote_hosts=10.0.0.10
```

The remote "slave" would run in server mode on `10.0.0.10`, invoked with the following command:

```
./jmeter-server
```

The test plan would then be executed using the following command:

```bash
./jmeter -n -t STRESS.jmx -l STRESS.jtl -r
```

The `-r` flag tells JMeter to start the test plan on the `remote_hosts` from the configuration file. The other configuration options remain the same.

![](https://i.imgur.com/HVVKuAs.png)

![](https://i.imgur.com/cOVIMMb.png)

## Stress Test Questions

1. How many concurrent users are necessary to increase the average response time to more than 2 seconds?
   - Due to the data resolution being too coarse to capture the initial increase past 2 seconds, we were unable to pinpoint the exact user count at which the average response time first exceeded 2 seconds. However, if we assume these graphs are correct, a 2-second response time is only reached after all 500 users have ramped up, and then only for a few seconds of full load.
2. How many concurrent users are necessary to increase the error rate to more than 5%?
   - From interpolating the graphs above regarding error codes, it is inconclusive how many users it would take to increase the error rate to over 5%. Further testing with a larger number of concurrent users would be required.
3. How many concurrent users are necessary to degrade the system's performance enough so it becomes unusable for the majority of users? Can this even be achieved, and if so, how does the system behave in that scenario? Does it enter an unrecoverable state, hang or crash, or does it return to normal after the stress is reduced?
   - Again, the test did not include a large enough number of concurrent users to conclusively determine how many would be necessary to make the system unusable.
4. Do you believe the system's performance allows it to be released into production? Consider the hardware, network, user base vs. peak concurrent users, variations in user behavior, performed actions, think time, etc.
   - If the peak number of concurrent users stays under 500, it is hard to say. The overall error rate is under 5%, but some request types error out frequently, which could be problematic in a banking application.
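As a cross-check on the claim that the overall error rate is under 5%, it can be recomputed from the per-request counts in the load-test error-rate table:

```python
# (errors, total calls) per request type, from the load-test error-rate table
calls = {
    "Accounts View":            (1167, 71821),
    "Check Branches":           (573, 35821),
    "Create $50 Transaction":   (4125, 28675),
    "Create 100% Transaction":  (368, 4764),
    "Login":                    (2, 200),
    "Request New Account Type": (25, 1434),
    "Transaction View":         (1167, 71799),
}

total_errors = sum(e for e, _ in calls.values())   # 7427
total_calls = sum(t for _, t in calls.values())    # 214514
overall = 100 * total_errors / total_calls
print(f"Overall error rate: {overall:.2f}%")       # ~3.46%, under the 5% threshold
```

Note that the per-request rates still vary widely; the Create $50 Transaction step alone errors at over 14%, which the overall figure masks.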