Content Marketing Surveys: Advice for Using Survey Data to Show Solution Effectiveness
Can B2B users of more robust or advanced solutions have lower self-reported performance scores than businesses that use simple or good-enough solutions? The short answer is, “Yes.” And this can throw a monkey wrench into content marketing efforts that use survey data. This post talks about what drives this surprising outcome and how to mitigate it with survey design.
Many B2B marketers use surveys with customers and prospects for content marketing. Their hypothesis going into the survey is that businesses that use their solution or solution category will rate their own performance higher than businesses that do not use the same type of solution in key performance areas such as productivity, efficiency, collaboration, etc. If true, and quantified by a survey, these findings would be great content to use in industry reports, blogs, social media bites, PR, etc. But when the data comes back it often tells a different story.
The self-reported performance ratings for businesses that use the category aren’t any higher than those that do not. Sometimes they are lower. This not only doesn’t validate the story B2B marketers want to tell – it contradicts it.
What is happening?
At the core, behavioral and psychographic factors drive how companies rate their performance. Put simply, companies that adopt more advanced processes and technology have a different mindset than those that take a “good enough” approach. For clarity’s sake, for the remainder of this post we will refer to companies that use advanced solutions as Vanguards and those that use good enough solutions as Conventionalists.
Vanguards demonstrate characteristics of a growth mindset. That mindset is, at least in part, what draws them to new solutions. They measure themselves against what they want to accomplish, not how they perform against competitors. Just prior to the adoption of a new solution, Vanguards are likely to rate their performance low in the areas that the new solution addresses.
Vanguards’ self-reported performance ratings increase immediately after adopting a new solution. They have a solution for a problem they wanted to address. But over time they recognize more gaps in their abilities and performance. This is not necessarily due to a drop in satisfaction with the solutions they use, but a reflection of their mindset. They are always looking to do better. What worked yesterday isn’t what they need for tomorrow.
In contrast, Conventionalists tend to be less critical of their own performance and often describe the solutions and processes they use as “good enough.” They recognize that there is always room for improvement, but feel they perform as well as they can with the skills and resources they have available.
When asked in a survey to self-report their performance in areas such as productivity, collaboration, and employee engagement, Vanguards and Conventionalists often give the same ratings, even though from an objective perspective the Vanguards likely perform at a higher level on whatever metric is being measured. To use a simple analogy, the owner of a local pizza shop is as likely as the owner of a four-star fine-dining restaurant to give themselves a high rating on the metric “Using quality ingredients.”
Bain recognized this dynamic when it developed the Net Promoter Score (NPS) protocol. They validated the NPS as a metric by showing its correlation to profitability based on data available for public companies. The more profitable companies in a category tend to have higher NPS scores.
Measuring business value of your solution in a market research survey
Linking your solution to profitability isn’t a practical approach for most companies or B2B marketers. That kind of financial information simply isn’t publicly available for SMBs or large private companies. Even if it were, linking a functional solution to an overall metric such as profitability may be a stretch. Fortunately, the principle of identifying metrics that highlight differences between Vanguards and Conventionalists can be designed into content marketing surveys.
One option is to determine if there are any objective metrics related to your solution and ask concrete questions about those. For example, if your solution improves fulfillment time, ask “How many days does it take on average to fulfill a customer order?”. If your solution delivers the benefits promised, Vanguards should report that they fill customer orders faster than Conventionalists.
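To illustrate the idea, here is a minimal sketch of how survey responses on an objective metric could be compared across the two segments. The data, field names, and segment labels are hypothetical, invented for this example:

```python
from statistics import mean

# Hypothetical survey responses: each record notes the respondent's segment
# (based on whether they use the solution category) and an objective metric.
responses = [
    {"segment": "Vanguard", "fulfillment_days": 2.0},
    {"segment": "Vanguard", "fulfillment_days": 3.0},
    {"segment": "Conventionalist", "fulfillment_days": 5.0},
    {"segment": "Conventionalist", "fulfillment_days": 6.0},
]

def avg_by_segment(rows, metric):
    """Average an objective metric within each respondent segment."""
    groups = {}
    for row in rows:
        groups.setdefault(row["segment"], []).append(row[metric])
    return {seg: mean(vals) for seg, vals in groups.items()}

averages = avg_by_segment(responses, "fulfillment_days")
print(averages)  # {'Vanguard': 2.5, 'Conventionalist': 5.5}
```

If the solution delivers as promised, the Vanguard average should come out meaningfully lower (faster fulfillment), giving you a concrete, objective data point for content rather than a self-rating.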
Another approach is to focus on Vanguards instead of comparing them to Conventionalists. With this approach you ask Vanguards the degree to which their processes improved because of the solution. For example, ask, “How much did the adoption of the solution category improve your… time to market… collaboration… ability to retain customers… ability to manage workflows…” This approach highlights the improvements companies gained by using the solution category. It is important to note that while this approach works fine for marketing content, it may not be appropriate if you want your survey to do double duty, e.g., provide marketing content and inform your go-to-market strategies.
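A common way to turn those improvement ratings into content-ready statistics is a "top-two-box" summary: the share of respondents rating the improvement 4 or 5 on a 5-point scale. The ratings and process areas below are hypothetical, used only to show the calculation:

```python
# Hypothetical 5-point improvement ratings from Vanguard respondents only,
# answering "How much did adopting the solution category improve your ...?"
ratings = {
    "time to market": [4, 5, 3, 4],
    "collaboration": [5, 4, 4, 2],
}

def pct_reporting_improvement(scores, threshold=4):
    """Share (as a whole-number percent) of respondents rating at or above the threshold."""
    return round(100 * sum(s >= threshold for s in scores) / len(scores))

summary = {area: pct_reporting_improvement(vals) for area, vals in ratings.items()}
print(summary)  # {'time to market': 75, 'collaboration': 75}
```

A result like "75% of adopters report meaningfully faster time to market" is the kind of headline statistic this survey design is built to produce.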
At a broader level, it’s a good idea to include multiple metrics that could provide a story to tell about companies that use the solution category. That way, if your original hypothesis isn’t supported by the research, there are other lenses you can view the data through to create interesting and relevant content for your target market.
If you’d like to learn more about how Isurus can help with content market research, fill out our contact form and we’d be happy to set up a call to discuss your specific needs.