
How Well Do You Devise and Assess Your Tests?

Written by Kelly Pullin, Strategy Director | Mar 8, 2022

 

As multichannel marketers, we understand the need to Always Be Testing — tomorrow’s results are only as good as today’s strategies.

But when it comes to test design and analysis, we don’t always apply the rigorous thinking that we should. Chances are, you’ve looked back at more than one test and wondered why the execution didn’t line up with the goal. That can easily happen when you move at the fast pace of direct response fundraising.

Furthermore, choosing when not to test can be as important as deciding what to test. If your organizational priorities are at odds with testing, then hold off and wait for the right time.

Below are 6 tips to help you design high-quality tests and analyze them well.

  1. Assess the impact. Is the test a potential game-changer? Will its results boost conversion, decrease expense, or improve performance to a significant degree? (Will rewriting the third paragraph realistically improve response?) If the answers to questions like these are “not likely,” then revise it or don’t test it.
  2. Define a goal. Nail down a clearly stated goal and determine how you’ll measure success. This sounds like a simple step, but it requires some thinking. I’ve seen many tests where it was unclear whether the goal was measuring engagement, gifts, revenue, or something else entirely. Make sure your goal is achievable with the test you’re designing, and know which metrics you’ll use to measure it.
  3. Review the cost. Will the thing you’re testing, if successful, complicate your production timelines? Add labor hours? Will the lift in performance be enough to offset this? Consider also that complexity of execution can increase the likelihood of errors — which will certainly have a cost. Factor this in before you proceed.
  4. Do the math. When assessing any test, consider statistical significance: the math that tells you how likely it is that a result reflects a real difference rather than random chance. Does the size of your test panel require a 25% lift in response to be statistically significant? If that’s not likely to happen, then increase the panel size (see the sketch after this list). You’ll only be able to make informed decisions if your panel size and results are statistically reliable. Work with your analyst to run a stat test on your test design and on your results, and plan what you’ll do next if the test is successful: retest, or roll out.
  5. Think about what could distort your test. Consider other factors that may influence your results: time of year, other channels reaching the same recipients, environmental influences such as current events, and other outside factors. Perhaps the test should be conducted over a period with a static audience, or repeated across multiple campaigns. If you can already imagine reasons to doubt the result, revise the design before you launch.
  6. DON’T set it and forget it. Ensure your test is executed as you’d planned. Stay involved and monitor any danger points in the process. Did the test deploy on the right date? Was the audience panel selected randomly? Was the creative treatment handled as intended? Then create reminders to ensure that your results are analyzed promptly.
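
To make tip 4 concrete, here is a minimal sketch in Python (standard library only) of the two calculations your analyst will typically run: how large each panel needs to be to detect a given lift, and whether an observed difference in response rates is statistically significant. It uses a standard two-proportion z-test; the 1% control response rate, 25% target lift, and panel counts are hypothetical, for illustration only.

    # A sketch of the "do the math" step: panel sizing before the test,
    # and a significance check on the results afterward.
    from statistics import NormalDist

    def required_panel_size(p_control: float, lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
        """Approximate per-panel size needed to detect `lift` over
        `p_control` with a two-sided two-proportion z-test."""
        p_test = p_control * (1 + lift)
        p_bar = (p_control + p_test) / 2
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_beta * (p_control * (1 - p_control)
                                 + p_test * (1 - p_test)) ** 0.5) ** 2
        return int(numerator / (p_test - p_control) ** 2) + 1

    def z_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
        """Two-sided p-value for the difference in response rates
        between two panels, using a pooled two-proportion z-test."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_a - p_b) / se
        return 2 * (1 - NormalDist().cdf(abs(z)))

    # Detecting a 25% lift over a 1% control response rate requires
    # roughly 27,900 names per panel:
    print(required_panel_size(0.01, 0.25))

    # Checking a hypothetical observed result (260 vs. 310 responses
    # on 25,000-name panels) yields p ~ 0.035: significant at the 5% level.
    print(z_test_p_value(conv_a=260, n_a=25000, conv_b=310, n_b=25000))

The z-test is only one way to run the numbers; the point is that panel size, expected lift, and significance should be worked out before the test deploys, not after.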

 

Taking the time to prioritize these items will pay off in the long run. Not ready to implement a new test today? No problem. Bookmark this list for easy reference. Then use these tips as a checklist before your next assessment. We’d love to hear about your results!