Test, test, test. That is the mantra of many traditional direct marketers. People like me believe that if you are not testing, you are not pushing the envelope in a way that allows you to make data-based marketing decisions. Basically, you are marketing in the dark, and that is not beneficial for any organization.
Testing is a way to determine what will make your audience respond. It is a way of listening to your audience so you can use that knowledge to attract more people in as cost-effective a manner as possible. Direct response tests, where the recipient is actually purchasing something, whether it be a membership, a book, or a meeting registration, provide the sender with actual purchasing behavior. This is great information to have because the sender can see what a recipient is willing to put money down to purchase.
In contrast to this is trying to understand your audience through market research. Market research is very valuable, so please do not think I am saying it does not have its place in any strategic marketing program. Ask my colleague Kevin Whorton if I know the value of market research, and I know we will agree. One shortcoming of gauging potential purchasing behavior through market research methods such as surveys and focus groups is that it is primarily theoretical, and there can be inherent biases. Participants are not asked to literally plop down a credit card, check, or cash, so their actual purchasing decision may differ from what it would be if they had to literally "feel the pain" in their pocketbook or wallet. Participants may also be biased in that they want to provide a "strategic response": they will try to tell you what they think you want to hear, not what they would actually do. This could be due to peer pressure, personality traits, or many other factors that keep their responses from being 100% accurate in the research study.
I will never forget an example from one of our CAM groups, at a focus group the client association sponsored a few years back. The focus group was designed to help a publisher better understand what type of magazine participants were most interested in purchasing. The facilitator put out a wide range of magazines for attendees to review, ranging from “trashy” to “intellectual” publications. During the focus group, almost all participants said they would most likely read the more intellectual publications. At face value this could be very valuable information and a good predictor of future purchasing behavior. Before attendees left, however, they were told they could take home any magazines they wanted to read later. Almost every person took the “trashy” magazines they had just said they were not interested in reading. This is a great example of the bias inherent in some research that you should be aware of.
To me, both ways of getting at customer/member/purchaser preferences are valuable. As long as we understand the type of information we are receiving, and know how to combine the two to get a complete picture of current and potential purchasers, we are headed in the right direction. Do you agree?