How to Choose What to Test

  • by Ondrej Pialek
  • Thursday, November 30, 2017

Being able to test nearly everything isn’t a license to do so. As we’ve said before, you can’t just randomly start building and throwing variations at your customers. Just like laying a foundation for a home, figuring out what to test is the first and most important step.

Let’s keep with the political candidate from the previous topic whose visitors weren’t signing up for his newsletter and look at how he might choose what to test. Some solutions to increasing signups might be:

 

  • A better CTA
  • More relevant images
  • More copy above the fold
  • A faster website

 

Knowing a few examples of what to test is helpful, but how do you decide where to start? Two things to be aware of when considering a solution are:

 

  • Your sales funnel - as customers move down the funnel towards conversion, some will inevitably lose interest. Understanding where visitors are jumping ship is prime testing ground and holds potential for big gains. These “bottlenecks” are the places where only a small percentage of visitors get through. It’s like having a hose connected to a spout when your goal is to get water out the other end to nourish your lovely garden: if the spout is sending water but nothing is coming out on the other side, there must be a kink in the middle. Analysing visitors’ behaviour is the key to determining where to test (see the sketch after this list).
  • Success metrics - success metrics are how you measure the results of a test. Always define your success metrics first and design the test around them. If you want more sign-ups, test the elements most likely to achieve that goal (such as the CTA); if you want more sales, test the elements most likely to drive purchases. More on these helpful stats later!
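
To make the funnel analysis above a little more concrete, here is a minimal Python sketch. The step names and visitor counts are invented for the example (in practice they would come from your analytics tool); it simply computes the conversion rate between consecutive funnel steps and flags the biggest drop-off as the place to test first:

```python
# Hypothetical funnel counts pulled from an analytics report.
funnel = [
    ("Landing page visit", 10000),
    ("Read a policy page", 4200),
    ("Opened the signup form", 900),
    ("Submitted newsletter signup", 120),
]

# Conversion rate between each pair of consecutive steps.
worst_step, worst_rate = None, 1.0
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{step} -> {next_step}: {rate:.1%}")
    if rate < worst_rate:
        worst_step, worst_rate = f"{step} -> {next_step}", rate

# The step with the lowest pass-through rate is the "kink in the hose".
print(f"\nBiggest bottleneck (test here first): {worst_step} ({worst_rate:.1%})")
```

The same idea scales up: whichever step lets the smallest share of visitors through is usually the most promising place to form a hypothesis and run a test.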

 

All tests need to be subject to the same variables

A/B tests need to be subject to the same variables in order to avoid issues stemming from seasonal fluctuations or random events outside of your control. Just as if you were reproducing a science experiment, the variations need to be compared under similar conditions and within a single test. To compare the performance of variations directly against each other, you need to test them at the same time, splitting your traffic between them rather than running one variation this week and another the next. Testing for long enough and on sufficient traffic ensures that random fluctuations won’t skew the results.
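
One common way to keep variations running under the same conditions is to assign each visitor to a variation deterministically, for example by hashing a visitor ID. The sketch below is a generic Python illustration of that idea, not the uSplit implementation; the experiment name and 50/50 split are assumptions for the example:

```python
import hashlib

def assign_variation(visitor_id: str, experiment: str = "newsletter-cta",
                     split: float = 0.5) -> str:
    """Deterministically bucket a visitor so both variations run at the same time.

    The same visitor always gets the same variation, and traffic is divided
    between A and B concurrently instead of sequentially.
    """
    digest = hashlib.md5(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 10000  # uniform value in [0, 1)
    return "A" if bucket < split else "B"

# Example: the same visitor is always shown the same variation.
print(assign_variation("visitor-42"))  # e.g. "B"
print(assign_variation("visitor-42"))  # same result every time
```

Because both variations are live simultaneously, a seasonal spike or a one-off news event affects A and B alike instead of skewing one of them.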

 

Testing is an iterative process

Taking your time is crucial. Tests need to run for an adequate period to really determine what is and is not working. After a sufficient amount of traffic has rolled in (figuring out just how much is needed is a skill in itself), the data becomes clearer and less susceptible to noise (Google Analytics experiments will even stop a test automatically once it has reached statistical certainty!).
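
To get a rough feel for how much traffic “sufficient” might be, a standard approximation estimates the visitors needed per variation from your baseline conversion rate and the smallest lift you want to detect. The numbers below (a 3% baseline and a 20% relative lift) are purely illustrative:

```python
def sample_size_per_variation(baseline_rate: float, relative_lift: float,
                              z_alpha: float = 1.96, z_power: float = 0.84) -> int:
    """Approximate visitors needed per variation for a two-sided test
    at 95% confidence (z_alpha) and 80% power (z_power)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2
    return int(round(n))

# Example: 3% baseline signup rate, hoping to detect a 20% relative lift.
print(sample_size_per_variation(0.03, 0.20))  # roughly 13,900 visitors per variation
```

The smaller the change you are trying to detect, the more visitors you need, which is why low-traffic pages often call for bolder variations.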

A/B testing requires patience and repetition. If you don’t obtain significant results the first few times, keep testing. Focus on running a larger number of smaller tests and refining your approach each time with the things you’ve learned. Did your hypothesis hold up, or does it need tweaking? What did the variations that performed better have in common?
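
“Significant” has a concrete meaning here: the difference between variations should be large enough that it is unlikely to be random noise. Below is a minimal sketch of the usual two-proportion z-test; the visitor and conversion counts are invented for the example:

```python
from math import sqrt, erf

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; two-sided p-value.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: variation B converts at 4.1% vs 3.2% for A, ~5,000 visitors each.
p = two_proportion_p_value(conv_a=160, n_a=5000, conv_b=205, n_b=5000)
print(f"p-value: {p:.3f}")  # a value below 0.05 is usually called significant
```

Dedicated tools run this kind of check for you, but knowing what sits behind the “winner” badge helps you avoid calling a test too early.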

The best thing about A/B testing? It always produces a result. You can run a test for days, built on a great hypothesis and backed by concrete data, and it could still return next to nothing in terms of an actionable winner. However, even that outcome teaches you what did not work and gives you more insight into your customers’ behaviour.

This information can now be used to change your assumptions, come up with a new hypothesis, redefine your problem statement, and look for new solutions in different areas. You can always be learning, refining, and optimising. Don’t fret about it!


Next up: A/B Testing Tips Summarized


Do you need help with A/B testing in Umbraco?

Contact Us