How to run effective tests with Sitecore

Web Applications | 05.05.17
Written by Ali Graham

Sitecore enables marketers to create highly targeted and personalised communications for site visitors; however, the efficacy and rationale behind these personalisations and content updates are often not tested or validated.

Testing should be at the core of any digital strategy. Whether that's PPC campaigns, SEO, or your website itself, you should be testing as much as possible to ensure that changes are indeed adding value: improving conversion, meeting goals, or improving engagement scores.

Creating Effective Tests

The first order of business when creating a test is considering whether you are actually creating an ‘effective’ test. An effective test should have a solid plan and hypothesis which takes into account:

  1. Why people aren't converting or performing desired actions on the site or page
  2. How this could be fixed

A common pitfall when conducting A/B or multivariate tests is focusing too heavily on the “fix it” step, with many people rushing in to test things like the colour or positioning of the call to action, and then running the test to ‘see what happens’.

While potentially a valid tactic, this likely doesn't take into account the core reason why people aren't converting: the chances are that people aren't failing to convert because the button is blue rather than red.

Testing is not a new concept that has risen out of the prevalence of modern web technologies and tools; it’s been around, and has been used effectively, for decades.

 

“Any questions can be answered cheaply, quickly and finally by a test campaign. And that's the way to answer them - not by arguments around a table” - Claude Hopkins, Scientific Advertising, 1923.

 

Whether conducted now or in 1923, successful testing should be rigorously planned, taking into account what is already known, why a proposed test may make a difference, and what that difference, or a successful test, would look like.


A successful test plan can be broken down into the following steps:

 

What you know

Before creating any test hypotheses or ideas to fix potential issues on the page or site, you must first collate and understand what is already known about the case you are looking to optimise.

This information can be collected and analysed from web analytics, user session recordings, focus groups, heatmapping, or Sitecore tools such as Experience Analytics or the Sitecore Path Analyser.

 

[Image: Sitecore Path Analyser]

 

An example of using this data to build out the first, “what you know” stage of a test plan might be:


“We have found that 97% of visitors to our product page don’t add the item to the cart. From web analytics data and session recordings, we found that mobile site users are not using the call to action to find out more about the product.”

What you think will effect change

From the issue identified in the “what you know” stage, a testing hypothesis can be built, outlining which site or page feature can be added or changed and tested, and for which audience this may yield positive results.

Building on our “what you know” statement, an example of what we believe may effect change might be:


“We believe that improving the messaging, explaining the value of the product in clearer terms through a bulleted list rather than continuous prose, will improve cart add actions for mobile site visitors.”

Success/Failure Criteria


In order to understand how a test has performed, and whether or not it has had a positive impact, strict success/failure criteria must be set, alongside the period the test will be run for.


In the example we’ve been using, the success/failure criterion would likely be “increase in product cart adds”; however, more granular criteria or micro conversions, such as experience score or goals triggered in Sitecore, could be used depending on the case.
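Sitecore handles the statistics for you, but to make “strict success/failure criteria” concrete, here is a minimal sketch of the kind of check involved: a two-proportion z-test on cart-add rates. All figures are hypothetical.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)      # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical outcome: control vs. the bulleted-list variant on mobile
p_value = two_proportion_z_test(conv_a=300, n_a=10_000, conv_b=360, n_b=10_000)
print(f"p-value: {p_value:.4f}")  # below 0.05 -> count the lift in cart adds as real
```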


In terms of how long a test should run, this is typically calculated inside Sitecore or your chosen A/B or multivariate testing tool based on:

  • How much traffic the test is likely to receive based on historical data
  • What percentage of visitors you wish to show the test to
  • Number of test variants
  • At what point these factors will converge to yield a statistically significant outcome
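Sitecore or your chosen testing tool performs this calculation for you, but as a rough illustration of how those factors combine, here is a sketch using a standard sample-size formula (95% confidence, 80% power). All traffic and conversion figures are hypothetical.

```python
import math

def required_sample_size(p_base, p_target):
    """Visitors needed per variant to detect a lift from p_base to p_target
    at 95% confidence with 80% power."""
    z_alpha, z_power = 1.96, 0.84  # standard normal quantiles for those levels
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p_target - p_base) ** 2)

per_variant = required_sample_size(0.03, 0.036)  # 3% baseline, hoping for 3.6%
variants = 2                                     # control plus one challenger
exposure = 0.5                                   # show the test to 50% of visitors
daily_visitors = 4_000                           # from historical analytics data

days = math.ceil(per_variant * variants / (daily_visitors * exposure))
print(f"{per_variant} visitors per variant, roughly {days} days")
```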


In addition to the above, it’s important to take into account the actual time period in which you run your tests, for example whether there are any seasonal trends that could affect outcomes.


Additionally, avoid stopping a test before it has covered a complete business cycle. This is one week in most cases, covering the different types of visitors and buying behaviours present at varying times of day and across weekdays and weekends. For example, consumer goods sites potentially have higher conversion rates and transaction values over weekends.
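Continuing the sketch above, one way to honour this rule of thumb is simply to round the estimated run time up to whole weeks, so the test always covers complete business cycles; the 16-day figure here is another hypothetical estimate.

```python
import math

BUSINESS_CYCLE_DAYS = 7   # one week captures weekday/weekend buying behaviour

estimated_days = 16       # hypothetical output of a sample-size calculation
run_days = math.ceil(estimated_days / BUSINESS_CYCLE_DAYS) * BUSINESS_CYCLE_DAYS
print(run_days)           # 16 days rounds up to 21, i.e. three complete weeks
```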

 
In summary, all good tests start with a well-laid-out test plan that takes into account three core factors:
  • What you know
  • What you think will effect change
  • Success/failure criteria
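One lightweight way to keep those three factors together, per test, is a simple record like the sketch below. This structure is purely illustrative, not a Sitecore object; it is populated with the running example from this article.

```python
from dataclasses import dataclass

@dataclass
class TestPlan:
    what_we_know: str        # evidence gathered before the test
    hypothesis: str          # what we think will effect change
    success_criteria: str    # how success or failure will be judged
    run_days: int            # period covering complete business cycles

plan = TestPlan(
    what_we_know="97% of mobile visitors don't add to cart; the CTA goes unused.",
    hypothesis="A bulleted value proposition will improve mobile cart adds.",
    success_criteria="A statistically significant increase in product cart adds.",
    run_days=14,
)
```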

When first starting with A/B and multivariate testing, we recommend producing test plans around “minimal viable tests”: focusing on easy wins that will have the maximum perceived impact and where it will be easy to understand and recognise whether the test has succeeded or not.

From here, testing should be iterated on and evolve into more nuanced interactions and optimisation strategies.