As direct response fundraisers, we are comfortable playing the waiting game. Particularly in direct mail, we work for months to prepare the best package we can. We draft copy that inspires giving, and select imagery that pulls at the heartstrings. We evaluate ask amounts and audience selects. We produce the package, drop it in the mail … and then we wait — often for three months or more to see if it was everything we hoped it would be.
Did our effort drive the response we hoped? Did it bring in more high-dollar gifts? Did we improve our ROI? How’s the net revenue look?
That’s a lot of time — and effort — to find something that works and becomes our control package. Once we find that control, it can be scary to make any additional changes that may degrade performance.
And so, to minimize risk, we test new ideas (in small quantities) against the control in an attempt to improve performance. This makes sense. But in our effort to ensure nothing breaks as a package evolves, the practice can rapidly turn into test obsession. It can easily snowball, wasting time and resources on things that never really needed to be tested and will never “move the needle.”
How can you know a test is worth the investment? You can start by asking yourself three questions:
1. Why are you testing?

This may seem like an easy and obvious question, but the answer can be extremely telling. The #1 reason to pursue a test is that you believe the strategic or tactical change can “move the needle” and improve performance.
Investment should be placed in testing opportunities that can be needle-movers. Full package tests, new offers or gift arrays, or significant changes to format, cadence, message, and creative can all be needle-movers.
Changing the word ‘good’ to ‘great’ in copy? Not a needle-mover. Changing a blue line to a red line in your design? Not a needle-mover.
These types of changes? Just make them. They are low risk, and often result in neutral performance between a test and control package.
But if in doubt, and the change (and therefore the risk) is acceptably small, it may be worth considering something called a back test. A back test is set up much like a typical test, except the test package is sent to the majority of the audience, with the control mailed only to a small panel. This lets you move ahead more quickly while still securing informative data points, in the unlikely event the change has more impact than you think.
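To make the setup concrete, here is a minimal sketch of the two splits. The audience size and the 10%/90% proportions are illustrative assumptions, not a rule; the right panel size depends on your file and the read you need:

```python
# Contrast a typical test split with a back test split.
# Audience size and percentages are hypothetical.

audience = 100_000

# Typical test: a small panel gets the new package;
# the control package mails to everyone else.
typical_test = {"test": int(audience * 0.10), "control": int(audience * 0.90)}

# Back test: the change mails to the majority, and a small
# control panel is held back purely as a read on performance.
back_test = {"test": int(audience * 0.90), "control": int(audience * 0.10)}

print(typical_test)  # {'test': 10000, 'control': 90000}
print(back_test)     # {'test': 90000, 'control': 10000}
```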
That said, there is one inversion of the “move the needle” rule: testing the impact of an uncontrollable change, such as a logo update or a larger brand change an organization is implementing.

In these cases, you’re not testing in the hope of seeing a change in performance, but in the hope that you don’t. Unfortunately, there is no going back from a rebrand. So, in this instance, it is worth testing the rollout of the brand to understand whether it negatively impacts performance in any way. These data points can then inform future decisions, rollout plans, and budget projections.
2. What is your hypothesis?

All good tests need a hypothesis, and it ties back to the belief that the test will improve performance. For example: “Adding a premium will lift response rate enough to offset its cost,” or “A larger gift array will increase average gift without depressing response.”
Identifying your hypothesis in advance of testing can help ensure that the test ladders back to a program or campaign objective and has a direct correlation with a program metric. It also identifies the metrics against which you will determine success (or failure).
If your test does not have a hypothesis that can directly impact donor behavior, it’s probably not worth testing.
3. Can you afford to roll it out?

This is an important one.
Obviously, testing a package you aren’t willing — or can’t afford — to mail in the future is wasted effort.
Typically, testing is executed in a small subset of an audience to reduce risk. Because the quantities are small, the test panel almost always carries an increased cost per thousand (CPM), with the size of the increase depending on the nature of the test. So it’s important to get what are called rollout costs: the estimated costs of mailing the test package to the full audience if it were to win. Then compare those costs to the costs of the control. (A worked example follows the three scenarios below.)
If the cost is less — great! If it wins, you have a new control.
If the cost is neutral, then ask yourself: did the test results meet the objective? If yes, then great again! (Of course, if the goal of the test was to reduce costs while maintaining or elevating performance, it’s back to the drawing board.)
If the cost is more, then ask yourself if the boost in performance was enough to offset the additional cost and still generate the desired outcome.
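To make that comparison concrete, here is a minimal sketch of the math. Every figure in it (CPMs, response rates, average gift) is hypothetical; substitute your own program’s numbers:

```python
# Hypothetical rollout-cost comparison. All CPMs, response rates,
# and average gifts below are invented for illustration.

def net_per_thousand(cpm: float, response_rate: float, avg_gift: float) -> float:
    """Net revenue per 1,000 pieces mailed: gross gifts minus mailing cost."""
    gross = 1000 * response_rate * avg_gift
    return gross - cpm

# Control package at its established rollout cost.
control_net = net_per_thousand(cpm=650.00, response_rate=0.010, avg_gift=85.00)

# Test package, priced at its ESTIMATED ROLLOUT CPM (full-audience
# quantity), not the inflated CPM of the small test panel.
test_net = net_per_thousand(cpm=700.00, response_rate=0.012, avg_gift=85.00)

print(f"Control net per M: ${control_net:,.2f}")  # $200.00
print(f"Test net per M:    ${test_net:,.2f}")     # $320.00

# Here the test costs $50 more per thousand, but the lift in response
# more than offsets the added cost, so it clears the bar in scenario three.
```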
If you can identify why you are testing, and you have created a hypothesis of how that test is going to better your program, and you have a clear plan for rolling it out to a larger audience if it wins, then you’ve got yourself a worthy test.
Here’s hoping it’s a winner!