Most awards given to marketers are silly…especially when they are based on anything but positive results.
Be skeptical when a marketer gets an award for an ad that looks pretty or because it contains clever copy…it’s far more important to find out if the ad worked (by some significant metric, whether related to response, sales and/or profit).
There was an award regularly given out years ago—not silly but ironic—for direct mail packages that remained a “control” (a winning package) for at least 5 years.
Longevity with a successful promotion can be a blessing…but it also might be a curse.
A marketer’s long-standing control might be a big money maker, but it may also represent lost opportunities.
Was it a control for 5 years because it was so invincible or because the marketer just wasn’t testing enough?
Hopefully my previous ramblings about testing have convinced you to test more aggressively…and to test only things that are truly worthwhile (i.e., things that can give you significant lifts).
When I was working at my beloved company, Boardroom (for over 30 years), we had legendary creative brainstorming meetings where anything was considered a valid idea…but we had categories.
The whiteboard we used to record the ideas had three columns:
- Potential breakthrough ideas (those that could conceivably achieve at least a 30% increase in response)
- Tweaks (ideas that at best would probably achieve a 10% to 30% increase in response rate)
- “The Parking Lot” (good ideas for later, not now…and I recently wrote about this phenomenon in “Another year of too many ideas.”)
Implied within all three categories was the golden rule of “single variable testing,” a rule we were always aware of, and a rule we never wanted to break.
Over the years, we learned some new rules including when it’s OK to violate this golden rule with caution…and impunity.
But it wasn’t easy to shed those preconceived notions.
Like so many of you, I was taught to never test two things at once…that is, to isolate “one variable” per test panel in order to prove whether that one change gave us a lift or not.
I think the rule still applies for the most part…especially when the one variable is price…though it holds for many other variables as well.
Even with some of the biggest tests I have ever done, where we hired a new copywriter with a completely different copy approach, we often kept the basic offer consistent (price, premiums etc.) so we could accurately project the lift of the new package…or how far off it was from winning.
However, with this approach, what began as a potential breakthrough often fizzled to only an acceptable tweak.
Then we got a little “sloppy” (of sorts)…with good reason…since the best copywriters knew that beating a strong incumbent promotion might involve re-structuring the offer (i.e., changing multiple elements in the control offer) to go along with their new copy platform.
And “blowing up” the previous promotion by testing anything and everything in the new promotion was not only valid, it often became a requirement.
Breakthroughs don’t happen in whispers.
It’s possible that many of you have always tested like this…with little regard for single variable testing as the gold standard…but for me, it needed to be a learned skill.
We decided that radically different tests were neither single variable nor multi-variable…but rather radically new approaches necessary for radical breakthroughs.
Well…that’s how we justified it. And I’m glad we did.
World-renowned copywriter Jim Rutz was the king of this “go big or go home” approach.
He regularly gave his clients (including us) a lift of 100% or more…but sometimes his high risk/high reward copy approach seemed to fall on deaf ears.
You can check out his stylings here. And check out the special offer in the P.S. below.
The idea of abandoning the golden rule of single variable testing under certain conditions drove many direct marketing purists in my company crazy…but as long as we knew the game we were playing, we were comfortable with it.
Note: To keep the purists happy, when we were simply tweaking or editing a current control, we always kept single variable testing discipline in place.
But when we were going for the gold, single variable testing took a back seat.
The thinking was that to beat something that had been around for a while demanded breakthrough thinking.
When we got into a situation where we had a control that was unbeatable for many years, I tended to get depressed…and of course I had no interest in winning any awards.
I just wanted a new winner.
I live by the credo that the control is your enemy; the day a promotion becomes a new control is the day to begin the process of beating it.
I knew that if we needed huge leaps to get a winner, to try to do that one variable or element at a time, while hiring an expensive and scarce resource (i.e., a new “A list copywriter”), was a daunting task.
Choosing the Jim Rutzes of the world for an assignment over someone who wanted to play it safe was always a good idea. And we had to use them in the best way possible.
There would be time later to tweak with single variable testing in-between the blockbusters.
Fast forward to today…where we can call new controls every 15 minutes if we want to since we can test faster than ever with actionable results.
When I learned the rule of thumb of single variable testing in the 1980s, I was living in a world of direct mail only.
In today’s world of testing online (often in real time), there is a temptation to keep blowing things up every 15 minutes (because we can).
A word of caution, no matter how much blowing up you are doing: You must have discipline regarding statistical significance…a topic I’ve written about before and one that is also near and dear to my heart.
Here’s where all of this got interesting, during a recent Titans Mastermind discussion:
I made the assertion that if you have big enough universes of names to test online, it might be wise to think even more about the discipline of single variable tests, doing them more…and faster.
The temptation to test multiple variables (more and faster), and then get results that you think are statistically significant (but are not), is the danger I presented.
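To see why that danger is real, here is a minimal, purely illustrative sketch in Python. Nothing here is a Boardroom number: the panel size, the 2% response rate, and the 0.05 significance cutoff are assumptions I’ve made up for the demo. It simulates panels where every challenger is secretly identical to the control, then counts how often a “winner” appears anyway once you test several panels at once without correcting for multiple comparisons:

```python
import math
import random

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two response rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value via the standard normal CDF (stdlib-only, using erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate(num_variants, trials=300, n=2000, rate=0.02, seed=0):
    """Fraction of trials that declare a 'winner' at p < 0.05 even though
    every variant is truly identical to the control (true lift = 0)."""
    rng = random.Random(seed)
    false_wins = 0
    for _ in range(trials):
        ctrl = sum(rng.random() < rate for _ in range(n))
        variants = [sum(rng.random() < rate for _ in range(n))
                    for _ in range(num_variants)]
        # With several uncorrected panels, one lucky panel is enough to "win".
        if any(two_proportion_z_test(ctrl, n, v, n) < 0.05 for v in variants):
            false_wins += 1
    return false_wins / trials

print(f"1 panel vs control : ~{simulate(1):.0%} false winners")
print(f"5 panels vs control: ~{simulate(5):.0%} false winners")
```

One panel hovers near the expected 5% false-winner rate; five uncorrected panels produce phantom “winners” far more often. That is exactly the trap: more and faster tests without discipline manufacture breakthroughs that were never there.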
The discussion then went from interesting to a little controversial when we were critiquing a sales page and every item we wanted to test by itself seemed too small for a big lift.
That’s when I realized once again that I needed to be a little more open minded and recalled “radically new approaches” trumping single variable tests when the situation calls for it.
Not all single variable testing is created equal.
And being old school is not always a good thing either.
We ended up with some consensus in a room of very talented direct response marketers: big lifts on a page like the one we were looking at would demand big risks, and therefore we might need to abandon single variable tests.
And yet, there were some elements we agreed should still be isolated in their own test panels.
All of this has caused me some sleepless nights over the past few years.
The notion that a golden rule of direct response testing might be just too cumbersome, and that abandoning it across the board might make sense, began to shatter my hopes and dreams of being a responsible direct marketer.
OK…it’s not that dramatic…but it did cause some tossing and turning. 🙂
Realizing I had been in the same place decades earlier and survived (and thrived with big lifts in response) by being aggressive without being reckless enabled me to sleep easier.
Also, recalling the Picasso quote I have shared with you numerous times in the past is always reassuring:
“Learn the rules like a pro so you can break them like an artist.”
I chalked up the Titans Mastermind discussion (and the conclusions) as “advanced direct marketing” rather than “sloppy direct marketing.”
I am now obsessed with doing this kind of analysis on all sorts of promotions in all media with as many marketers as I can.
“To single variable test or not to single variable test…without being sloppy…that is the question.”
And of course, the answer is exactly what you would expect:
I will leave you with this:
Test intelligently…test aggressively…and no matter what, look for the biggest rewards, not awards.
P.S. After last week’s post about Denny Hatch and “stealing smart,” I realized I’ve got some treasured archives of my own…and I want to offer you two of them at one discount price if you would like to take advantage of this special offer today.
The aforementioned boom or bust copywriter, Jim Rutz…and the copywriter who could paint pictures with his words, Bill Jayme…are two of the writers I went to when I wanted to violate the rule of single variable testing and have them create a masterpiece (i.e., a new blockbuster control).
I have all of their work, assembled and indexed, each on a deluxe USB thumb drive sent in a special plastic case, and they are both available to send to you today at one low price.
(Along with free shipping in the U.S. and half price expedited shipping outside the U.S.)
Read This or Die: The Lost Files of Jim Rutz and The Bill Jayme Collection, each priced at $295, can both be bought today for $395 (with free or discounted shipping depending on where in the world you live).
Click on each link above to read about these treasures—over 400 different promotions from two of the most legendary copywriters who have ever lived.
Almost all of the packages were winners, but even those that were not are still worth studying.
Anytime these two put pen to paper was an event. 🙂
Included on the USB drives are some interviews and presentations…Jim Rutz and Bill Jayme “live”…in addition to the over 400 promotions.
Don’t order on the order pages however…just email me at firstname.lastname@example.org and I’ll give you the special ordering details since there is no link for this offer.
This is a special offer for my online family only.