Does your VOC program encourage purchase behavior?



Joe Radwich
Vice President

Your NPS or customer satisfaction survey can increase the frequency and number of purchases a customer makes over time.

Research by Sterling Bone of Utah State’s Huntsman School of Business indicates that opening customer feedback surveys with questions about what you do well increases NPS and satisfaction scores in the short term, and can increase the frequency and number of purchases a customer makes with you over the long run.

The boost in NPS or satisfaction is expected – asking people questions like “What was the best part of your experience?” puts them in a more positive frame of mind than asking them to recall how you’ve fallen short, and existing research consistently shows that people in a more positive mood give higher ratings on surveys.

That this positive nudge influences purchase behavior months after the survey comes as a surprise. We usually think about how experiences drive the ratings on surveys, not how surveys may influence behavior. The researchers offer two hypotheses for the effect: that asking “what went well?” creates a positive feedback loop by surfacing good memories for the customer; and that people unconsciously try to avoid cognitive dissonance – if we said something positive about a product, we don’t want to seem inconsistent by no longer purchasing it.

The findings raise the question: Should you adjust your voice-of-the-customer or NPS program to take advantage of this phenomenon? National brands such as Subway and JetBlue already incorporate these ideas into their customer feedback programs. There are four broad justifications for making the change.

  • Traditional research philosophies reject this type of manipulation and strive to minimize any impact the artifice of a survey has on the results. We don’t want to bias the data. The reality is that most research is biased (more on that later). If asking about positive experiences primes some customers to be happier with a product or service, then the opposite is probably also true: a survey that asks customers to focus on problems may prime them to be less happy than they actually are. More research is required to answer that conclusively, but it would be surprising if the phenomenon only worked in one direction.
  • There is bias in every data set – the mistake is not recognizing its nature. If you know what the bias is, you can use judgment to account for it as you draw inferences and build strategy from the data. If your VOC program shifts from a problem-finding focus to a positives focus, expect an initial increase in NPS and satisfaction due to the positive bias; scores will stabilize as the new protocol is repeated over time. The easy mistake would be to forget that the one-time increase in the metrics is due primarily to a change in the survey instrument. But if you account for the one-time jump in ratings and reset your baselines, a positive bias in your surveys isn’t a problem in and of itself.
  • Some strategists believe it is more effective for companies to focus resources on enhancing what they do well rather than on potential areas of improvement. The underlying idea is that whatever a company does well is likely its biggest competitive differentiator, and maintaining those strengths should be a priority. Proponents of this approach also point out that identifying and fixing problems does little to attract new customers. If your company embraces this philosophy, focusing your voice-of-the-customer programs on the positive will provide the data you need to strengthen what you already do well.
  • For most companies, voice-of-the-customer programs are a cost center – as much as they help companies in the long term, it is difficult to tie the insights gained to a direct financial benefit. This explains why many VOC programs and other customer surveys are scaled back when budgets get tight. If Dr. Bone’s research stands up to further validation, using positive questions to influence purchase behavior may help justify the cost of voice-of-the-customer programs. A/B testing the survey types and correlating each with long-term customer purchase data would give you the evidence to show the financial impact of your program.
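The A/B test described in the last point is straightforward to analyze once each customer is tagged with the survey variant they received and their subsequent purchases are tracked. A minimal sketch of that comparison follows; the cohort data, the `framing_lift` helper, and the purchase counts are all hypothetical, and a real program would use a proper statistics library rather than this hand-rolled test:

```python
from statistics import mean, stdev
from math import sqrt

def framing_lift(positive_group, control_group):
    """Compare mean purchases per customer between a cohort that received a
    positively framed survey and a problem-focused control cohort.

    Returns (absolute difference in means, relative lift vs. control,
    Welch t-statistic for the difference).
    """
    m_pos, m_ctl = mean(positive_group), mean(control_group)
    diff = m_pos - m_ctl
    # Welch's standard error: tolerates unequal variances and sample sizes.
    se = sqrt(stdev(positive_group) ** 2 / len(positive_group)
              + stdev(control_group) ** 2 / len(control_group))
    return diff, diff / m_ctl, diff / se

# Hypothetical 12-month purchase counts per customer in each cohort.
positive = [5, 7, 6, 8, 6, 7, 5, 9, 6, 7]
control = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]

diff, lift, t = framing_lift(positive, control)
print(f"mean difference: {diff:.2f} purchases, lift: {lift:.0%}, t: {t:.2f}")
```

A large t-statistic sustained across quarters would be the kind of evidence that ties the survey program to revenue; with real data you would also want confidence intervals and a check that the cohorts were randomly assigned.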

So what is Isurus’ point of view? To start with, we agree with the broader implication that customer surveys are touch points that influence customer opinions. Customer surveys can…

  • Show that you care about customer opinions
  • Feel relevant to customers, or conversely make it seem you don’t understand their needs
  • Be either a pleasant or a tedious experience for customers
  • Show your hand about future intentions
  • Feel like a burden to customers if they get too many
  • Lose credibility if customers never see any changes based on their feedback

As with any customer touch point, you should manage surveys to ensure a positive customer experience. Customers should believe the survey was worth the effort they put in, and that you respect their time and opinions.

The decision to use a positive orientation in your VOC program depends on your competitive situation. If all is going well in terms of growth, profitability, and the like, a positive orientation will likely provide some benefit. However, if growth has slowed, customers are leaving, or a competitor is making inroads into your market, you need to know what has gone wrong. Your historic strengths may not be as differentiating as they once were, and emphasizing them may contribute to your decline.

For more information on the research visit: “Mere Measurement ‘Plus’: How Solicitation of Open-Ended Positive Feedback Influences Customer Purchase Behavior,” by Sterling A. Bone et al. (Journal of Marketing Research, 2016)