Why your testing plan is the be-all and end-all for useful user research, and how a strong hypothesis can help.
Imagine you're a few months in after launching your digital product. Stats are looking good, initial marketing has gone down well, and downloads are higher than first predicted (we can dream, right?). Now that it's out there in the wild, your team can properly start analysing and begin the pruning process, which may involve some kind of variant test. Great!
A small aside – if you're not familiar with variant tests, they go something like this:
- You write a hypothesis on what you want to test;
- You create variations, which are split amongst a defined percentage of users; and
- Once your test is live and enough data has been gathered, it should show you which variation succeeded, which failed, or whether they performed similarly.
So – what are you going to test?
There will probably be a lot of input from your team when this question rears its head; and if you're not preparing hypotheses properly, then your tests can begin to fall apart before they've even begun. Let's take a look at the following examples:
- Jerry wants to try a smaller font size on the terms and conditions because he thinks they're too big;
- Summer wants to test a different image on the launch screen as she thinks it'll be more engaging; and
- Beth thinks the colour of the navigation is too bright and wants to test a more subtle colour scheme.
These examples could be good grounds for a variant test, but at the moment they're just conjecture, and we're missing a little more information which would turn these statements into far more useful hypotheses.
So what additional information is required? I've put together some questions which should be answered before you consider it a candidate for testing.
- What are you testing? This is your initial statement. It should be short and to the point. Try not to make design decisions here.
- What are you predicting? Your key goal for creating this test.
- How are you measuring this test? This could be any change in a particular conversion metric.
- Should you be aware of anything in the wider context? This could be anything outside of your test's scope which may be affected either positively or negatively.
- Any suggestions for designing the test? This final question provides a platform for other team members to give design suggestions.
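To make the questions above concrete, here's one way you might capture them as a simple record, using Jerry's font-size idea as an example. The structure and field names are just an illustrative shorthand, not a standard format:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One answer per question from the checklist above."""
    statement: str                 # What are you testing?
    prediction: str                # What are you predicting?
    metric: str                    # How are you measuring this test?
    wider_context: str             # Anything outside the test's scope to watch?
    design_notes: list[str] = field(default_factory=list)  # Team design suggestions

jerry = Hypothesis(
    statement="Font size of the terms and conditions",
    prediction="A smaller font size will not reduce sign-up completion",
    metric="Sign-up completion rate on the T&C screen",
    wider_context="Accessibility and legal readability requirements",
    design_notes=["Try one step down in the existing type scale"],
)
```

Writing each candidate test down in a shared format like this makes it much easier to compare, prioritise, and revisit them later.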
By answering these questions you should have a hypothesis which is well-rounded, objective, and democratic within your team. Remember, the main aim of this exercise is to make sure everyone knows what to expect during your all-important testing phase. As a designer you have a clear understanding of the "why" behind the test, and can start making design decisions; as a developer you can see what needs to be assessed, changed and measured; and as a client you can make a more informed decision as to the benefit of running a particular test and prioritise accordingly.
I'd love to hear any thoughts or suggestions you might have on this, so do drop me a line on Twitter.
Edit (01-05-2018) – I recently came across this article which goes into more detail on the same topic. Definitely worth sinking your teeth into.