
2 posts from May 2013

Marketing & Testing for a Better Response Rate

May 22, 2013
Posted by John Fager at 4:52 PM

This information is designed to help printers, marketing agencies and internal marketing teams understand better strategies for testing direct marketing.

The fact is that direct marketing works. It can turn companies into competitive monsters but it’s important to have realistic expectations. Trying something once and giving up has never worked well in any venture. It takes time and persistence to yield positive results!

How the client thinks about why they do their printing and marketing with your firm is important. As you develop the relationship, make sure your client trusts in a long-term strategy and in your knowledge of marketing. Don't hook them on a one-off trial.

Sell Marketing as a Strategy

More than a few clients have been asking about response rates and results from a campaign. Some have done incredibly well on a very small campaign while others have had extremely low response. This leads to a couple of key questions:

  • What was the cause of good or poor performance?
  • Is the difference between two different response rates significant?

When we talk to clients about marketing strategy, we always make it clear that we are not here to champion our personal preferences. We believe in statistics.

We put clients on a marketing program that will:

  • Isolate different factors (list, design, offer, and web content)
  • Measure the success of each factor
  • Employ the most successful elements tested while continuing to test for even better response rates

There will be two key components to determining success:

  • Measuring each component individually
  • Determining whether the differences in response rates are statistically significant

There are 3 main components that we want to observe:

  • Testing the list
  • Testing the mail piece
  • Testing the web content

Each of these elements of our strategy has the ability to make or break the campaign. The flip side is that any one of the elements might not have a significant effect on the response rate. This depends on the audience, the product, and the current market.

The List

The biggest and most crucial element is the list. If we are not targeting prospects that are interested in our product, then no amount of design, copywriting or great offers will help us build sales.

It is critical to analyze the existing customer data, segmenting by how “good” (or not) each customer is. Does the group of customers that orders once differ in demographics, behavior or self-reported information from those that are long-term repeat customers?

Based on the existing customer and order data, we have access to a tremendous amount of insight into the “ideal” type of prospect we are looking for. It's important to build that insight into our list criteria.

Additionally, it is important to look at the list provider.

  • Who compiled the data?
  • How high is the quality of the members?
  • Are the claims about the data accurate?
  • Are there any other providers?

Testing the Distribution Source

As we test the list source, it is important to ensure that we don't skew the results by changing anything else in the campaign. The marketing piece and web experience should be the same for all prospects. If we make other changes, we reduce the population behind each factor, and it becomes harder to determine whether there are significant differences in the response rate.

Here is an example workflow for testing a list:

[Workflow diagram: Testing the List]

Notice that the campaigns that we measure response under are different, but the creative for the mail piece and the web are the same. This helps reduce production costs and ensures that we are just testing the list.

Testing the Mail (or Email) Piece

Whether distributing via a piece of direct mail or email, we can try different designs as well as different copy and offer strategies on the card.

There is a “mail moment” on every piece of direct marketing. This is the two seconds of attention that the prospect gives to the piece. It either goes in the trash or the prospect decides to review it more carefully.

We cannot know whether someone looked at the piece in depth, but we can provide a tracked web address to measure how many people visit the web to get more information. A PURL (personalized URL) or an email program with good tracking is even better, in that we will know specifically who from our contact list decided to get more information.

The recorded web visit or link click is excellent, but it doesn't tell us whether the list or the mail piece was the significant factor behind that response. For this reason, it is critical to measure both the list and the piece independently with proper tracking.

The following example shows how a workflow could handle testing just the direct marketing piece:

[Workflow diagram: Testing the Mail Piece]

Notice that the list and web experience remain unchanged so that we can attribute the differences in response rate entirely to the design, offer and copy of the marketing piece. We should be careful to only alter the aspects of the marketing design that we want to test.

  • If testing the offer or copy, leave the design the same and change the offer content.
  • If testing the actual design, change elements such as the main colors on the card or the primary photos.
  • Test different call-to-action designs and strategies.

Testing the Web Content

It's important to remember that with direct marketing, we first have to get a web visit before we can even begin to build the audience for measuring the effect that the web pages have on response. This smaller audience means that it is harder to measure whether one response rate is really significantly better than another.

Because of the expense of developing web content, we suggest deploying separate web content for testing only after we are certain that the list and marketing piece are effective at driving traffic to the website. After all, the investment is lost if no one sees either version.

In situations where we are seeing lots of web hits, but we are not seeing link clicks, survey submissions or purchases, it is reasonable to examine our web content. By creating alternative or modified content and measuring the change in behavior, we can increase our conversion rates and increase ROI.

Here is a workflow for ensuring that we are accurately measuring modifications in web content:

[Workflow diagram: Testing Web Content]

As with the tests on design, we can change the main colors, the style of the design itself or the offer and call-to-action portions of the web page. It’s important to isolate these various changes to the degree possible so that we can effectively understand what is working and what is not.

Is the Change in Response Rate Significant?

All too often, we see clients attempting to split-test their creative using variable data printing (VDP) to see which strategy works best, with only a small population. It's fine to run tests, but if we are going to invest in design and development, it is important to be aware of two questions:

  • What is the size of the population of the test?
  • What will constitute “success,” where one strategy is proven to be more effective than another?

These questions really need to be addressed on the front end of a campaign so that it’s clear to the client whether it is worth the investment of running a test in the first place.
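
One common way to put a number on the first question is the standard sample-size formula for comparing two proportions. Below is a minimal sketch in Python, assuming 95% confidence and 80% power; the function name and the example response rates are illustrative assumptions, not figures from any particular campaign.

    # Sketch only: estimate how many prospects each test cell needs in order to
    # reliably detect a lift between two response rates (standard two-proportion
    # sample-size formula, 95% confidence / 80% power). Example rates are
    # illustrative assumptions, not client data.
    import math

    def cell_size(p1, p2, z_alpha=1.96, z_beta=0.8416):
        """Prospects needed per test cell to detect a lift from rate p1 to rate p2."""
        p_bar = (p1 + p2) / 2
        a = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
        b = z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
        return math.ceil((a + b) ** 2 / (p2 - p1) ** 2)

    # To reliably see the difference between a 2.0% and a 3.0% response rate:
    print(cell_size(0.02, 0.03))   # roughly 3,800 prospects per cell

A result in the thousands per cell is exactly why this conversation belongs on the front end of the campaign.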

A Refresher on Statistics 101

Everybody remembers the grade school days when we were taught about probability. The odds of getting heads on a coin toss are 50/50. The secret is that each coin toss is a new trial, so it's rare that 100 tosses will yield exactly 50 heads.

Coin Toss Test

Trials                     100               5,000
Expected % heads           50%               50%
Confidence level           95%               95%
Observed range             40.2% - 59.8%     48.6% - 51.4%
Confidence interval (±)    9.8%              1.4%

This means that if we flip a coin 100 times and get 41 heads in the first run and 59 heads in the next run, the fluctuation is due to random chance. The odds didn't change and the coin didn't change; it's just a normal part of how results fluctuate. The second column of 5,000 tosses narrows the range of likely outcomes significantly. As the number of trials increases, we get closer to knowing that what we are seeing reflects the true probability and not a fluke.
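
For anyone who wants to reproduce the ranges in the table, here is a minimal sketch in Python using the normal approximation to the binomial; the helper name is ours.

    # Sketch: approximate 95% confidence interval for an observed proportion.
    import math

    def ci_95(p, n):
        """95% confidence interval for a proportion p observed over n trials."""
        margin = 1.96 * math.sqrt(p * (1 - p) / n)   # 1.96 standard errors covers ~95%
        return p - margin, p + margin

    for n in (100, 5000):
        low, high = ci_95(0.5, n)
        print(f"{n} tosses: {low:.1%} - {high:.1%}")
    # 100 tosses: 40.2% - 59.8%
    # 5000 tosses: 48.6% - 51.4%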

As you can see, with only 100 trials it's a pretty big spread. Let's see what happens if we apply this to a marketing campaign:

List Source      Test Size    Resp Rate    Result
List Source A    1,000        2.0%         NOT SIGNIFICANT
List Source B    1,000        3.0%

A larger population allows us to be confident that smaller changes in response rates are significant:

List Source      Test Size    Resp Rate    Result
List Source A    5,000        2.0%         SIGNIFICANT
List Source B    5,000        2.5%

It would seem that 3% is far better than 2%, but in the first test the difference is not statistically significant with only 1,000 prospects in each group. The odds are too high that the fluctuation is due to random chance. To be confident in the result, we need a larger population on each list, or we need to repeat the test multiple times.
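
For readers who want to check the arithmetic behind those "significant" and "not significant" labels, here is a minimal sketch of a standard two-proportion z-test in Python. The function name and the 1.96 cutoff for 95% confidence are our framing, not a prescribed tool; the second example simply re-runs the same 2% vs. 3% gap on larger lists to show how sample size drives significance.

    # Sketch: two-proportion z-test for the gap between two response rates.
    import math

    def two_proportion_z(resp_a, size_a, resp_b, size_b):
        """z-score for the difference between two observed response rates."""
        p_a, p_b = resp_a / size_a, resp_b / size_b
        pooled = (resp_a + resp_b) / (size_a + size_b)              # combined response rate
        se = math.sqrt(pooled * (1 - pooled) * (1 / size_a + 1 / size_b))
        return (p_b - p_a) / se

    # 1,000 prospects per list, 2.0% vs 3.0% response (20 vs 30 responders)
    print(round(two_proportion_z(20, 1000, 30, 1000), 2))    # ~1.43, below 1.96: not significant

    # The same 2.0% vs 3.0% gap on 5,000 prospects per list (100 vs 150 responders)
    print(round(two_proportion_z(100, 5000, 150, 5000), 2))  # ~3.20, above 1.96: significant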

Takeaways

  • Educate clients about marketing via strategy and statistics
  • Avoid the one-off sale where the client tries it once and gives up if it doesn't work
  • Focus on using statistics to increase response
  • Design clear tests to continuously improve response

Providing Professional Email Services for Print Clients

May 02, 2013
Posted by John Fager at 6:51 PM

More and more printer partners are being asked to provide email communications along with their direct mail programs. For printers that are just venturing into this space, we would like to provide some key recommendations.

Many printers think that since sending an email doesn’t take up press time, it can be deployed right away. It’s true that email can go out immediately, but making sure that the content is coded correctly and that all the necessary details are in place takes time.

We recommend a checklist of information needed for an email and a standard client communication delivered in writing that explains the required deliverables and timeline.

  • Have a written timeline
    The timeline should include a trigger point that starts the clock from the point that the client has given you everything needed to create and send the email.
  • Make the client review & approve a live test
    Have a deliverable that includes a live sample email sent to your client's test list. Recommend that the client sign up for and provide you with email addresses on Outlook, Gmail, Hotmail (now Outlook.com) and Yahoo that they can access directly. This involves the client directly and makes them feel comfortable with what is going out.
  • Leave time between the test & the live send
    Set a standard of at least two business days between the test email to your client's test list and the actual send. This gives plenty of time for revisions.
  • The client should participate in testing & approve the email
    Make sure that the client is responsible for reviewing and approving what was sent.
  • Include a single round of changes for accuracy
    Make sure to be clear that additional changes may have a cost. This keeps the number of changes and tests down to a minimum and ensures that the client takes their time reviewing the test and organizing their feedback.

Design for email can be tricky due to the wide variety of email readers on the market. Here are a few design gotchas:

  • Not all email readers display background images
    Background images are nice for layout, but Outlook won’t display them. A fallback background color can help, but generally we recommend avoiding background images in email altogether.
  • Email doesn’t look the same on every device
    Gmail, iPhones, and Yahoo all inject markup into email code that is beyond the control of the sender. As a result, addresses, phone numbers, dates, times and other terms can be turned into hyperlinks. These may be blue, green or some other color. These added colors can clash with your background colors and make your copy hard to read.
  • CSS code doesn’t work on all email readers
    With web code, we style DIV tags with CSS for width, height, color, and other design properties. In the world of email readers, much of that CSS won't render.

Check out this video of a presentation that discusses quoting and fulfilling HTML for both the web and email for print providers.

Don't Get Killed by HTML - PSDA Seminar
