
Marketing & Testing for a Better Response Rate

May 22, 2013
Posted by John Fager at 4:52 PM

This information is designed to help printers, marketing agencies and internal marketing teams understand better strategies for testing direct marketing.

The fact is that direct marketing works. It can turn companies into competitive monsters, but it's important to have realistic expectations. Trying something once and giving up has never worked well in any venture. It takes time and persistence to yield positive results!

Your client's perspective on why they do their printing and marketing with your firm is important. As you develop the relationship, make sure your client trusts in a long-term strategy and in your knowledge of marketing. Don't hook them on a one-off trial.

Sell Marketing as a Strategy

More than a few clients have been asking about response rates and results from their campaigns. Some have done incredibly well on a very small campaign, while others have seen extremely low response rates. This leads to a couple of key questions:

  • What was the cause of good or poor performance?
  • Is the difference between two different response rates significant?

When we talk to clients about marketing strategy, we always make it clear that we are not here to champion our personal preferences. We believe in statistics.

We put clients on a marketing program that will:

  • Isolate different factors (list, design, offer, and web content)
  • Measure the success of each factor
  • Employ the most successful elements tested while continuing to test for even better response rates

There will be two key components to determining success:

  • Measuring each component individually
  • Determining whether the differences in response rates are statistically significant

There are three main components that we want to observe:

  • Testing the List
  • Testing the Mail Piece
  • Testing the Web Content

Each of these elements of our strategy has the ability to make or break the campaign. The flip side is that any one of the elements might not have a significant effect on the response rate. This depends on the audience, the product, and the current market.

The List

The biggest and most crucial element is the list. If we are not targeting prospects that are interested in our product, then no amount of design, copywriting or great offers will help us build sales.

It is critical to analyze existing customer data and to segment customers by how “good” each one is. Does the group of customers that orders once differ in demographics, behavior, or self-reported information from the long-term repeat customers?

Based on the existing customer and order data, we have access to a tremendous amount of insight into the “ideal” prospect we are looking for. It's important to build that insight into our list criteria.
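As a rough illustration, here is a minimal sketch of that kind of segmentation in Python (the file and column names are hypothetical stand-ins for whatever customer data is actually available):

    import pandas as pd

    # Hypothetical file and columns: customer_id, age, region, order_count.
    customers = pd.read_csv("customers.csv")

    # Flag repeat buyers vs. one-time buyers.
    customers["segment"] = customers["order_count"].apply(
        lambda n: "repeat" if n > 1 else "one-time"
    )

    # Do the segments differ on attributes we could filter a purchased list by?
    print(customers.groupby("segment")[["age", "order_count"]].mean())

    # Which regions over-index among repeat buyers?
    print(pd.crosstab(customers["region"], customers["segment"], normalize="columns"))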

Additionally, it is important to look at the list provider.

  • Who compiled the data?
  • How high is the quality of the members?
  • Are the claims about the data accurate?
  • Are there any other providers?

Testing the Distribution Source

As we test the list source, it is important that we don't skew the results by changing anything else in the campaign. The mail piece and web experience should be identical for all prospects. If we make other changes, we shrink the population behind each factor, and it becomes harder to determine whether differences in response rate are significant.

Here is an example workflow for testing a list:

[Workflow diagram: Testing the List]

Notice that the tracked campaigns are different, but the creative for the mail piece and the web is the same. This helps reduce production costs and ensures that we are testing only the list.
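In code, the measurement side of a list test can be as simple as tallying tracked responses by the campaign code stamped on each piece. A minimal sketch, with illustrative numbers:

    from collections import Counter

    # Pieces mailed per list source; creative and landing page are identical,
    # so any difference in response is attributable to the list.
    mailed = {"LIST-A": 5000, "LIST-B": 5000}

    # Tracked web visits, tallied by the campaign code on each piece.
    responses = Counter({"LIST-A": 100, "LIST-B": 125})

    for source, sent in mailed.items():
        rate = responses[source] / sent
        print(f"{source}: {responses[source]} / {sent} mailed = {rate:.2%}")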

Testing the Mail (or Email) Piece

Whether distributing via direct mail or email, we can try different designs as well as different copy and offer strategies on the piece.

There is a “mail moment” on every piece of direct marketing. This is the two seconds of attention that the prospect gives to the piece. It either goes in the trash or the prospect decides to review it more carefully.

We cannot know whether someone looked at the piece in depth, but we can provide a tracked web address to measure how many people visit the web to get more information. A PURL (personalized URL) or an email program with good tracking is even better, in that we will know specifically who on our contact list decided to get more information.
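One common way to mint PURLs is a unique, human-readable slug per contact. This is a generic sketch of the idea, not any particular platform's implementation (the domain is a placeholder):

    import re

    def purl_slug(first, last, taken):
        """Build a unique slug like 'JohnSmith' or 'JohnSmith2'."""
        base = re.sub(r"[^A-Za-z]", "", first.title() + last.title())
        slug, n = base, 2
        while slug in taken:          # de-duplicate common names
            slug = f"{base}{n}"
            n += 1
        taken.add(slug)
        return slug

    taken = set()
    for first, last in [("John", "Smith"), ("Mary", "O'Neil"), ("John", "Smith")]:
        print(f"http://offer.example.com/{purl_slug(first, last, taken)}")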

The recorded web visit or link click is excellent, but it doesn't tell us whether the list or the mail piece was the significant factor in producing that response. For this reason, it is critical to measure both the list and the piece independently with proper tracking.

The following example shows how a workflow could handle testing just the direct marketing piece:

[Workflow diagram: Testing the Mail Piece]

Notice that the list and web experience remain unchanged so that we can attribute the differences in response rate entirely to the design, offer and copy of the marketing piece. We should be careful to only alter the aspects of the marketing design that we want to test.

  • If testing the offer or copy, leave the design the same and change the offer content.
  • If testing the actual design, change elements such as the main colors on the card or the primary photos.
  • Test different call-to-action designs and strategies.

Testing the Web Content

It's important to remember that with direct marketing, we first have to earn a web visit before we even have an audience for measuring the effect that the web pages have on response. This smaller audience makes it harder to determine whether one response rate is significantly better than another.

Because of the expense of developing web content, we suggest deploying separate test content only after we are certain that the list and marketing piece are effective at driving traffic to the website. After all, the investment is lost if no one sees either web version.

In situations where we are seeing lots of web hits, but we are not seeing link clicks, survey submissions or purchases, it is reasonable to examine our web content. By creating alternative or modified content and measuring the change in behavior, we can increase our conversion rates and increase ROI.
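When testing two versions of web content, each visitor should be assigned to one version and keep seeing that version on repeat visits. Here is a minimal sketch of deterministic, hash-based assignment (the visitor id could come from a PURL or a cookie; the details are illustrative):

    import hashlib

    def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
        """Deterministically bucket a visitor so repeat visits always
        land on the same web content variant."""
        digest = hashlib.sha256(visitor_id.encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    # The same id always maps to the same variant.
    for vid in ["JohnSmith", "MaryONeil", "JohnSmith"]:
        print(vid, "->", assign_variant(vid))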

Here is a workflow for ensuring that we are accurately measuring modifications in web content:

[Workflow diagram: Testing Web Content]

As with the tests on design, we can change the main colors, the style of the design itself or the offer and call-to-action portions of the web page. It’s important to isolate these various changes to the degree possible so that we can effectively understand what is working and what is not.

Is the Change in Response Rate Significant?

All too often, we see clients attempting to test or split their creative using variable data printing (VDP) to see which strategy works best with a small population. It's fine to run tests, but if we are going to invest in design and development, it is important to answer two questions up front:

  • What is the size of the population of the test?
  • What will constitute “success,” where one strategy has been proven more effective than another?

These questions really need to be addressed on the front end of a campaign so that it’s clear to the client whether it is worth the investment of running a test in the first place.
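The first question has a standard statistical answer. A common sample-size formula for comparing two response rates (sketched here for a one-sided test at 95% confidence and 80% power; the target rates below are assumptions) estimates how many prospects each test cell needs:

    from math import ceil

    def sample_size_per_group(p1, p2, z_alpha=1.645, z_beta=0.8416):
        """Prospects needed in EACH test cell to detect a lift from p1 to p2
        with a one-sided z test (z_alpha sets confidence, z_beta sets power)."""
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

    # Detecting a lift from a 2.0% to a 3.0% response rate takes roughly
    # 3,000 prospects per list -- far more than a 1,000-piece test provides.
    print(sample_size_per_group(0.02, 0.03))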

A Refresher on Statistics 101

Everybody remembers the grade school days when we were taught about probability. The odds of getting heads on a coin toss are 50/50. The catch is that each coin toss is a new trial, so it's rare that 100 tosses will yield exactly 50 heads.

Coin Toss Test

  Trials              100             5,000
  Expected % Heads    50%             50%
  Confidence Level    95%             95%
  Observed Range      40.2% - 59.8%   48.6% - 51.4%
  Margin of Error     ±9.8%           ±1.4%

This means that if we flip a coin 100 times and get 41 heads in the first trial and 59 heads in the next, the fluctuation is due to random chance. The odds didn't change and the coin didn't change; it's just a normal part of how things fluctuate. The second round of 5,000 tosses narrows the expected range of likely outcomes significantly. As the number of trials increases, we get closer to knowing that what we are seeing is due to true probability and not a fluke.
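The intervals in the table come straight from the normal approximation to a binomial proportion, and a few lines of Python reproduce them:

    from math import sqrt

    def coin_interval(trials, p=0.5, z=1.96):   # z = 1.96 for 95% confidence
        """95% range of observed heads percentages around the true rate p."""
        margin = z * sqrt(p * (1 - p) / trials)
        return p - margin, p + margin, margin

    for n in (100, 5000):
        lo, hi, m = coin_interval(n)
        print(f"{n:>5} tosses: {lo:.1%} - {hi:.1%} (margin of error ±{m:.1%})")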

With only 100 tosses, that's a pretty big spread. Let's see what happens if we apply the same math to a marketing campaign:

  List Source     Test Size   Resp. Rate   Result
  List Source A   1,000       2.0%         NOT SIGNIFICANT
  List Source B   1,000       3.0%

A larger population allows us to be confident that smaller changes in response rates are significant:

  List Source     Test Size   Resp. Rate   Result
  List Source A   5,000       2.0%         SIGNIFICANT
  List Source B   5,000       2.5%

It would seem that 3% is far better than 2%, but the difference is not statistically significant with only 1,000 prospects in each test group; the odds are too high that the fluctuation is due to random chance. To trust the result, we need a larger population on each list, or we need to repeat the test multiple times.
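The same verdicts can be checked directly. A one-sided two-proportion z-test, which appears consistent with the tables above at 95% confidence, looks like this:

    from math import erf, sqrt

    def z_test_one_sided(resp_a, n_a, resp_b, n_b):
        """One-sided two-proportion z-test: is list B's rate truly higher?"""
        pooled = (resp_a + resp_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (resp_b / n_b - resp_a / n_a) / se
        p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # upper-tail probability
        return z, p_value

    # 2.0% vs 3.0% with 1,000 per list: p ≈ 0.08, NOT significant at 95%.
    print(z_test_one_sided(20, 1000, 30, 1000))

    # 2.0% vs 2.5% with 5,000 per list: p ≈ 0.046, significant at 95%.
    print(z_test_one_sided(100, 5000, 125, 5000))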

Takeaways

  • Educate clients about marketing via strategy and statistics
  • Avoid the one-off sale where the client tries marketing once and gives up if it doesn't work
  • Focus on using statistics to increase response
  • Design clear tests to continuously improve response

