You wouldn’t set out hiking across the vast frontier in a pair of hiking boots you never tried on before, would you? Of course not. You’d make sure those boots were the perfect fit before you set out on your trek.
The same goes for testing your offline or online direct-response fundraising appeals. You count on the results and the costs required to obtain those results. This holds true whether those are new and reactivated lapsed donors, advocacy actions from your constituents, or — and this is most important for many — the revenue active donors provide to fuel your mission.
You certainly would not abandon these important campaigns for new concepts without first “trying them on” for fit and feel.
In the largest sense, you should test to protect or, if possible, to improve the results of your current campaigns (known as “controls”). But there are other, more specific reasons to test as well.
You can test pretty much anything. Nonprofits often think only of testing creative, but large performance gains can be found by testing other variables as well. So a balance of test types across your fiscal year is important.
First, minimize risk. Any new test carries the possibility of failure, so select your testing opportunities carefully. Develop a test statement and a hypothesis ahead of time to define how the test results will be measured.
As an example, if you’re testing the inclusion of labels in a direct mail package, the testing statement would be: Will adding labels to the package increase the response rate? Or suppress the average gift? Or generate more revenue to offset the increased package cost?
Then, agree on the metrics you’ll use to evaluate the test, and compare them to the control’s performance. Response rate? Average gift? Net revenue?
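The structure described above — a test statement, a hypothesis, and agreed-upon metrics — can be captured in a simple data structure so everyone evaluates the test the same way. This is an illustrative sketch only; the field names and the `TestPlan` class are hypothetical, not from any specific tool.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Hypothetical container for a direct-response test plan."""
    name: str
    statement: str                 # the question the test will answer
    hypothesis: str                # the outcome you expect
    metrics: list = field(default_factory=list)  # how the result is judged

# The label test from the example above, written as a plan.
label_test = TestPlan(
    name="Labels vs. control package",
    statement="Will adding labels to the package increase the response rate?",
    hypothesis="Labels lift response enough to offset the added package cost.",
    metrics=["response_rate", "average_gift", "net_revenue"],
)

print(label_test.metrics)
```

Writing the plan down before mailing keeps the evaluation honest: the metrics listed here are the only ones used to call the winner.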
Let’s be clear: Testing shouldn’t ever just be done for the sake of testing. All tests need careful structure. Here’s how to do that:
In determining whether a test is a winner or a loser, refer to your original testing statement and hypothesis, as well as the metrics you were planning to reference to determine the outcome. This will ensure you’re evaluating the results based on the reason(s) the test was performed.
Before assuming anything about how your test did, you must be comfortable that you can count on your performance data. Statistical validity is very important in interpreting test results:
Take a look at the simple campaign-level results below. The tendency would be to declare Panel 3 a “loser” in terms of average gift, because it is $8 lower than Panel 2, and $3 lower than Panel 1.
| Panel | Response Rate | Average Gift |
| --- | --- | --- |
| Panel 1 | 0.84% | $25 |
| Panel 2 | 0.61% | $30 |
| Panel 3 | 1.01% | $22 |
However, Panel 3’s response rate is significantly higher, so it is important to look at a gift histogram (below) to see whether Panel 3 truly produced fewer high-dollar gifts — or whether gifts increased at every level but the mix, weighted toward lower amounts, pulled the average down.
We found that the latter scenario was the case. Panel 3 resulted in as many (or more) donors at every level except the $100+ level, where Panel 3 had only two fewer gifts than Panel 2 — statistically insignificant.
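A quick way to check whether a response-rate difference like Panel 1 vs. Panel 3 is statistically valid is a two-proportion z-test. The sketch below uses the response rates from the table, but the mail quantities are hypothetical assumptions (the article does not give panel sizes), so treat the result as an illustration of the method, not of this campaign.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled response rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal tail
    return z, p_value

# ASSUMED panel size -- not from the article.
n = 25_000
gifts_p1 = round(0.0084 * n)   # Panel 1: 0.84% response
gifts_p3 = round(0.0101 * n)   # Panel 3: 1.01% response

z, p = two_proportion_z(gifts_p3, n, gifts_p1, n)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value is below your chosen threshold (0.05 is a common convention), the response-rate lift is unlikely to be chance; if not, you do not yet have a valid read and may need a larger test panel or a retest.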
In the end, the revenue you can expect from your testing programs will be influenced by average gift, response rate (the actual number of gifts generated), and cost. NPR helps you determine whether the response rate increased enough, whether costs increased too much, or whether the average gift declined too much for the test to be a winner.
The NPR is a good equalizer of all of this: The test panel with the highest NPR can usually, and confidently, be declared the “winner.”
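The “equalizing” arithmetic can be sketched in a few lines. This example assumes NPR boils down to gross revenue minus campaign cost; the mail quantities and per-piece costs are hypothetical (the article gives only response rates and average gifts), with Panel 3 assumed to cost more because of the added labels.

```python
# Hypothetical inputs: (qty mailed, response rate, avg gift, cost per piece).
# Response rates and average gifts are from the table above; the rest is assumed.
panels = {
    "Panel 1": (25_000, 0.0084, 25.0, 0.15),
    "Panel 2": (25_000, 0.0061, 30.0, 0.15),
    "Panel 3": (25_000, 0.0101, 22.0, 0.18),  # assume labels add cost
}

def net_revenue(qty: int, rate: float, gift: float, cpp: float) -> float:
    gifts = qty * rate                  # expected number of gifts
    return gifts * gift - qty * cpp     # gross revenue minus campaign cost

results = {name: net_revenue(*inputs) for name, inputs in panels.items()}
winner = max(results, key=results.get)

for name, net in results.items():
    print(f"{name}: ${net:,.2f}")
print("Highest net:", winner)
```

Note how the net figure balances all three levers at once: a panel can win on response rate and still lose on net if its average gift or cost moves too far the other way.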
To minimize risk, it’s always good to get one more read on results by retesting. This is commonly done in acquisition with both list and package tests.
If your test results are significantly better, or if the test was simply meant to confirm that a tactic like an image or story change wouldn’t hurt results, you may feel comfortable “rolling out” the test as your new control in place of the original.
And remember ... even if the test is deemed a “loser,” it’s not bad news! Sometimes, learning what NOT to do is even more important than learning what TO do.
Fundraisers have long sought to find and cultivate those with heroic human hearts who want to turn their compassion into action. Along the way, fundraisers have also learned lessons, honed skills, and crafted techniques that — in the end — are framed by basic human behaviors and motivations.
Get your copy of the complete compendium of tried-and-true tactics for conquering the fundraising frontier, written by TrueSense Marketing’s fundraising experts.