# A/B Testing

Peak Shift also uses A/B testing, another method of user research used to compare two different versions of an application, product, or service. The aim is to see which version performs better with users, and why. Statistical analysis of quantitative data (such as task success rate and time taken) is used to determine which version comes out ahead.

![](https://i.imgur.com/L7NXp6Q.jpg)

## How to Prepare

**Collect your initial data** and decide which feature, journey, or page you wish to test on users. It's best to start by optimising weaker areas of your application that need improvement. A good tip is to look first at areas typically associated with low task conversion or high drop-off rates.

**Identify the conversion goal or success metric.** When performing A/B tests, it's important to have a clear gauge of what counts as success in your variation design. These goals could be anything from number of clicks, time spent scrolling, or e-mail signup forms completed, to simply task completion rate and time taken.

**Generate your hypothesis.** Once you've identified your testing goal, you can begin hypothesising potential improvements to the current version of your application. Prioritise a list of your ideas, keeping an eye on two specific factors: expected user impact, and difficulty of design, development, and implementation.

**Design your variations.** Once you've narrowed your testing down to one feature, journey, page, or component, you can begin designing variations. This could be as simple as changing the color or location of a button, swapping the order of elements on a page, or something more custom and complex such as redesigning a user journey. However, bear in mind that making smaller changes to a page or flow will allow you to pinpoint which changes did or didn't work - too many changes can leave you guessing.

## How it Works

**Running your experiment:** There are a couple of different ways you can run an A/B test on users. In both methods there is always a control group and an experiment group, so you can cross-reference success metrics and analyse the data gathered. Researchers measure each user's interaction with the experience they are given, and can also note any feedback or follow-up questions for further context when analysing the data and design rationale.

The first form of A/B testing is to create an alternative version of your application or service and direct a portion of your users to it (a minimal sketch of this kind of traffic splitting is included at the end of this page). You can then gather data on how well the design variation(s) perform as users scan, scroll, click, and journey through the product or application.

The second approach is to run usability or task-performance tests on each user group using the different design variants (A/B). This can sometimes gather more robust results, and is often more suitable for products with low (or sometimes no) users. However, make sure to keep the control group and experiment group separate from one another to avoid cross-contamination of results and bias. Make sure to also have a minimum of 5-10 users in each group, as this will give a more accurate picture of the test results.

**Analysing your results:** Once the testing is complete, it's time to take a look at the results. Measure which design variation has performed better according to your set conversion goal - analysing whether or not the change made a statistically significant difference.
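As an illustration of that significance check, here is a minimal sketch in Python of a two-proportion z-test comparing task success rates between the control (A) and the variant (B). The counts and the 5% threshold are made-up placeholders, not figures from a real study.

```python
# A minimal sketch of a two-proportion z-test for task success rates.
# The counts below are made-up placeholders, not real study data.
from math import sqrt, erf

def two_proportion_z_test(successes_a, total_a, successes_b, total_b):
    """Return (z, two-tailed p-value) for the difference in success rates."""
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    # Pooled success rate under the null hypothesis of "no difference".
    p_pool = (successes_a + successes_b) / (total_a + total_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Placeholder numbers: 18 of 40 users completed the task on version A,
# 28 of 40 on version B.
z, p = two_proportion_z_test(18, 40, 28, 40)
print(f"z = {z:.2f}, p = {p:.3f}")
print("Significant at the 5% level" if p < 0.05 else "Not significant at the 5% level")
```

A small p-value suggests the difference is unlikely to be down to chance alone; with very small groups, though, even a large difference may not reach significance.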
If you need more data, you can always run the experiment again for a longer period of time - just make sure to use different users! If your design variation made significant improvements to user task success rates and times, congratulations - your hypothesis was correct! You can use this feedback to inform your design and decision-making process. If your hypothesis was incorrect and your design variations didn't show greater success rates, don't worry! Sometimes learning what doesn't work is just as useful as knowing what does. Go back to the drawing board, revise your hypothesis, iterate on your designs, and retest until you're happy with the results.
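As mentioned under "Running your experiment", live A/B tests usually direct a portion of traffic to the variant. Below is a minimal sketch, assuming each user has a stable identifier, of how that split might be done deterministically so the same user always sees the same version; the `assign_group` helper and the `checkout-button` experiment name are hypothetical.

```python
# A minimal sketch of deterministic traffic splitting for a live A/B test.
# The experiment name and user IDs here are hypothetical.
import hashlib

def assign_group(user_id: str, experiment: str = "checkout-button", split: float = 0.5) -> str:
    """Return 'variant' or 'control' for this user, consistently across sessions."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to roughly [0, 1]
    return "variant" if bucket <= split else "control"

# The same user always lands in the same group on every visit.
for uid in ["user-101", "user-102", "user-103"]:
    print(uid, assign_group(uid))
```

Hashing on a stable ID keeps the control and experiment groups separate across sessions, which helps avoid the cross-contamination of results mentioned above.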