The limits of research and the importance of judgement

An effective research engagement is more than selecting the right methodology, structuring the questions correctly, and summarizing the data. The most critical aspects of a research effort are how the research team and other stakeholders interpret the data and what inferences they draw from it. The objective of conducting research is to narrow the cone of uncertainty in business decisions, not to create a survey or hold a focus group.

Research is an excellent tool for reducing risk by identifying potential blind spots and validating, or disproving, hypotheses about the marketplace. However, it is just a tool. As with other data analysis tools, from reporting dashboards to AI, users must step back and interpret the data with context and judgement.

Unfortunately, research data, especially quantitative data, can tempt users and readers down paths the research was never designed to support. The following are three recommendations to keep in mind when reviewing and sharing data from a research engagement.

Avoid artificial precision

Attributing an artificial level of precision to a data point is a common mistake in research. Businesses use surveys to gather and present data as metrics: averages, scores, and percentages. This enables decision makers to create a quantitative profile of their markets and customer base.

When designed and executed well, a survey can accurately profile a market, gauge interest in new products, measure the health of the customer base, and so on. However, because of the concrete nature of some metrics, users of the data can attribute more precision to a data point than is appropriate. This happens most often with metrics around market share and budgets.

For example, few readers of a research report get hung up on the precision of a metric such as a 4.7 on a 7-point scale. They view it in terms of where it falls on the scale and how it compares to other metrics. More importantly, they view it as an indicator of a market trend or perception. However, metrics around market share (32% use Vendor X) and budgets (the market would pay $7,381) have a magnetic pull for many readers. These two metrics have the same level of precision as the 4.7 on the 7-point scale; it is their connection to concrete, real-world attributes that makes it easy for people to read more into the numbers than they should.

There is also an inherent fuzziness to all survey data. This stems from a variety of factors, ranging from the statistical confidence of the sample size to the fact that surveys require respondents to answer some questions from memory. Using a consumer example: off the top of your head you may know that your mortgage interest rate is around three and a half points, but not whether it is 3.3 or 3.7.
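To illustrate the sampling side of that fuzziness, the sketch below applies the standard margin-of-error formula for a proportion to a hypothetical result (the sample size and the 32% share are made up for illustration, not drawn from any actual study):

    import math

    # Hypothetical survey result: 200 respondents, 32% report using Vendor X
    n = 200          # sample size
    p = 0.32         # observed proportion
    z = 1.96         # z-score for a 95% confidence level

    margin = z * math.sqrt(p * (1 - p) / n)
    low, high = p - margin, p + margin

    print(f"32% +/- {margin:.1%} (roughly {low:.0%} to {high:.0%})")
    # Prints: 32% +/- 6.5% (roughly 26% to 38%)

Read that way, the "32% use Vendor X" figure is better understood as "roughly a quarter to a third or more of the market," which is usually all the precision the decision requires.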

In terms of practical business decisions, this level of variability usually has minimal impact – especially in B2B research. The key takeaway is to remember that the data from a survey will always be directional in nature.

Watch out for the NPS effect

As useful as the Net Promoter Score (NPS) is, an unintended consequence is that it is easy to become overly focused on a single metric. This is especially true if that metric is used to track performance over time. But B2B research is not a political poll, evaluated by its ability to predict who wins a close election. Research is about identifying trends, threats, opportunities, and blind spots.

A related misstep is becoming too concerned with statistically significant differences between metrics. Truth be told, in many data sets, especially in B2B research, the sample sizes are too small to run robust statistical tests. You typically cannot rely on a significance test to identify which trends to pay attention to.
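As a rough sketch of why small samples rarely clear the bar (the segment sizes and percentages below are hypothetical), a standard two-proportion z-test shows that even a double-digit gap between groups of 60 respondents does not reach significance at the usual 0.05 level:

    import math

    # Hypothetical B2B survey: two customer segments answering the same question
    n1, x1 = 60, 27   # segment A: 27 of 60 respondents agree (45%)
    n2, x2 = 60, 20   # segment B: 20 of 60 respondents agree (33%)
    p1, p2 = x1 / n1, x2 / n2

    # Two-proportion z-test with a pooled estimate
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    print(f"z = {z:.2f}, p-value = {p_value:.2f}")
    # Prints: z = 1.31, p-value = 0.19 (a 12-point gap, not statistically significant)

With groups this small, a double-digit gap can still be within the noise, which is why the pattern across the full data set matters more than any single pairwise comparison.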

In both of the situations above, the best approach is to look at the data set as a whole and identify the emerging themes – what story is the totality of the data telling?

Recognize research artifacts

Even in the best designed and executed research, there are times when the results of questions within a survey don't align as expected. This can indicate that the topic requires further exploration, or it can simply be an artifact of the research. It is impossible to eliminate every form of bias, confusion, or priming that can occur as a respondent works their way through a survey – especially a long one. The question is: how do you determine which situation it is?

We think the best approach is to apply Occam's razor and look for the most likely driver of the unexpected result. Could the respondents be interpreting the question(s) differently than expected? Did the previous questions prime them to think in a certain way? If it looks like more than an artifact of the research, the next question is whether the unexpected result is simply an interesting finding or whether it would have a meaningful impact on the business decisions at hand. Depending on the answer, it may or may not justify additional exploration to understand what is going on.

A life of its own

These issues become more pronounced as the data and reporting get further away from the original research team – the guardrails disappear. The sponsors and designers of the research understand the tradeoffs they made in the overall design, the insights the research can realistically be expected to provide, what it cannot, and how best to use the data. But when that data is encoded in a report, it tends to take on a life of its own. One way to help control the narrative is to include the methodology and commentary on the use and interpretation of the data when sharing it with a wide audience.

In closing

The best research is part art, part science. The science part involves creating the best design possible within the constraints of budgets, practical realities, and timelines. The art of research is being cognizant of the possibilities and limitations the data presents.

For more of Isurus' thinking on methodologies, see the following posts:

Alternatives to conjoint for identifying optimal feature bundles

The importance of anchoring in pricing research