# AdeptMind - CF Browse CET Project Performance Test Plan

This document describes the performance test plan for the CF Browse CET project. A usage model is first provided to illustrate the overall peak traffic and how that traffic is split across services. The test scope is then defined, with the list of services to be tested and their projected traffic. Metrics such as response time percentiles and failure rate are to be defined together with the CF QA team. Finally, two types of performance tests and their execution plan are described.

## Services

AdeptMind provides several services that enable the CF Browse CET project:

- `search`: called when a user issues a query for a shop name, a product description, or a combination of the two
- `auto-complete`: while the user is typing in the search bar, this service provides shop names and suggested queries
- `products`: used in two scenarios
  - when a user clicks into a product detail page (PDP) from the search result page or from the recommendation section of another PDP, this service provides the product detail information
  - when a user opens their favorite product list, this service provides the detail information for each product on the list
- `frontpage`: returns the layout of the frontpage, including featured shops, trending products, on-sale products, etc.
- `events-tracking`: receives beacon data generated by user actions in the App, such as opening the App, closing it, or clicking on a shop

## Usage Model

The usage model is based on the project's monthly active user (MAU) target; all projected traffic volumes for each service are inferred from this number. For illustration, we assume a target of 200,000 MAU. Note that the inferred traffic volumes scale proportionally if this assumption is updated.

Given 200,000 MAU, roughly 20,000 users are expected to use the service on a given day. As a rule of thumb, peak-time traffic is about 10 times the average traffic.

The following are the average numbers of service calls over the lifecycle of a user session (from the moment the user starts using the App until they have been inactive for at least 20 minutes):

- accessing the frontpage once
- performing 2 searches
- visiting 10 product detail pages
- viewing the favorites page once

Given the peak-traffic assumption and the usage model above, the load on each service is expressed in queries per second (QPS):

- `search`: every search query calls the search endpoint once
  - 20,000 x 10 x 2 / 24 / 3600 ~ 4.6 QPS
- `auto-complete`: every search query calls the autocomplete endpoint once per character the user types; assuming an average query length of 10 characters, the load is
  - 20,000 x 10 x 2 x 10 / 24 / 3600 ~ 46 QPS
- `products`: every product detail page view and every favorite-list view calls this service
  - 20,000 x 10 x 11 / 24 / 3600 ~ 25 QPS
- `frontpage`: every user session generates one frontpage call
  - 20,000 x 10 x 1 / 24 / 3600 ~ 2.3 QPS
- `events-tracking`: given the usage model above, one user session triggers around 20 events
  - 20,000 x 10 x 20 / 24 / 3600 ~ 46 QPS

## Tests

For each service, CF and AdeptMind will jointly define target response time percentiles and a failure rate as part of the SLA. The tests measure whether these targets are met.
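To make the projected load easy to recompute when the MAU target changes, the arithmetic from the usage model can be captured in a short script. This is a minimal sketch; the MAU target, the implied 10% daily-active ratio, the 10x peak multiplier, and the per-session call counts are the assumptions stated above, not measured values.

```python
# Sketch of the traffic-model arithmetic from the usage model above.
MAU = 200_000
DAILY_ACTIVE = int(MAU * 0.10)   # ~20,000 users on a given day (assumption)
PEAK_MULTIPLIER = 10             # peak traffic vs. daily-average traffic
SECONDS_PER_DAY = 24 * 3600

# Average number of calls each service receives per user session.
CALLS_PER_SESSION = {
    "search": 2,              # 2 searches per session
    "auto-complete": 2 * 10,  # ~10 keystrokes per query, 2 queries
    "products": 10 + 1,       # 10 PDP views + 1 favorites-list view
    "frontpage": 1,
    "events-tracking": 20,
}

for service, calls in CALLS_PER_SESSION.items():
    qps = DAILY_ACTIVE * PEAK_MULTIPLIER * calls / SECONDS_PER_DAY
    print(f"{service}: {qps:.1f} QPS at peak")
```

Running this reproduces the figures above (4.6, 46, 25, 2.3, and 46 QPS); changing `MAU` rescales every target proportionally.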
Two types of tests are executed: component tests and a composite simulation.

### Component Tests

In the component stress test, each service is called independently. The test starts with light traffic that gradually increases to the peak level, simulating how fast traffic ramps up; the ramp-up time varies across runs: 24hr, 12hr, 6hr, 3hr, 1hr, 30min. The test also measures the stability of the service while traffic stays at the peak; the soak period at peak runs for 1hr, 2hr, 6hr, 12hr, 24hr, and 7 days. The test report will include the response time distribution and QPS level at each timestamp in the experiments.

### Composite Tests

The composite simulation stress test is closer to real usage scenarios. It mixes calls to all the services simultaneously and derives the input for each service call from the responses of the services it depends on. The inputs are generated as follows (a sketch of this pipeline is given after this section):

- `search` and `auto-complete`: search queries are generated from a distribution of domain keywords, including shop names, product descriptions, and combinations of the two
- `products`: the inputs for product or shop view requests are generated from three sources:
  - products from the search results returned by the `search` service
  - products from the similar-product results on a product page returned by a previous `products` call
  - products from users' favorite product lists: a power-law distribution of favorite products is simulated for each user, and the list is drawn from that distribution
- `frontpage`: a static service call that requires no input

The test starts the simulation from users' landing and search actions. The responses to these actions are put into a per-user buffer; the simulator then transforms the response data, generates the request input, and makes the next simulated call. The test scheduling is similar to the component tests, varying the ramp-up time and the duration at peak. The test report will include the response time distribution and QPS level of every service at each timestamp in the experiments.
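The following sketch illustrates how the composite simulator could derive `products` inputs from buffered `search` responses and from a power-law favorites distribution. The response field names (`results`, `product_id`), the catalog size, and the 80/20 split between browsing and favorites traffic are illustrative assumptions, not part of the plan.

```python
import random
from collections import deque

PRODUCT_COUNT = 50_000                 # hypothetical catalog size
product_buffer = deque(maxlen=10_000)  # product ids harvested from earlier responses

def sample_favorites(n=5, alpha=1.2):
    """Draw favorite products from a power-law (Pareto) rank distribution,
    so a few head products appear on many users' favorite lists."""
    return [int(min(random.paretovariate(alpha), PRODUCT_COUNT)) - 1
            for _ in range(n)]

def handle_search_response(response_json):
    """Harvest product ids from a `search` response into the buffer so that
    later `products` calls can reuse them, mimicking click-through behavior."""
    for item in response_json.get("results", []):
        product_buffer.append(item["product_id"])

def next_products_request():
    """Build the input for a `products` call from the sources in the plan:
    previously seen products (search / similar results) or the favorites list."""
    if product_buffer and random.random() < 0.8:  # assumed 80/20 split
        return {"product_id": product_buffer.popleft()}
    return {"product_ids": sample_favorites()}
```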
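Both test types share the ramp-and-soak scheduling described under the component tests, varying the ramp-up time and the soak duration across runs. Below is a minimal sketch of one such schedule, assuming Locust as the load generator (the plan does not prescribe a tool); `PEAK_USERS`, the spawn rate, the host, and the endpoint path are placeholders to be calibrated against the QPS targets above.

```python
from locust import HttpUser, LoadTestShape, task

PEAK_USERS = 1_000       # placeholder; calibrate so peak QPS matches the targets
RAMP_SECONDS = 1 * 3600  # ramp-up time; varied across runs (24hr ... 30min)
SOAK_SECONDS = 2 * 3600  # time held at peak; varied across runs (1hr ... 7 days)

class BrowseUser(HttpUser):
    """Placeholder user; a real test would exercise all five services."""
    host = "https://cf-browse.example.com"  # hypothetical host

    @task
    def frontpage(self):
        self.client.get("/frontpage")       # hypothetical endpoint path

class RampAndSoak(LoadTestShape):
    """Ramp linearly from 0 to PEAK_USERS, hold at peak, then stop."""

    def tick(self):
        run_time = self.get_run_time()
        if run_time < RAMP_SECONDS:
            users = max(1, int(PEAK_USERS * run_time / RAMP_SECONDS))
            return (users, 100)
        if run_time < RAMP_SECONDS + SOAK_SECONDS:
            return (PEAK_USERS, 100)
        return None  # returning None ends the test run
```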