To make a claim more believable, simply add a chart – that is the key finding of a study by Aner Tal and Brian Wansink of Cornell University. Their research shows that people find data presented in a chart more believable because they associate charts with science. Their findings shed some light on the artificial importance sometimes placed on statistics in quantitative B2B research. When evaluating data from any study – especially in B2B – questions about statistics can trip people up, causing them to lose sight of the broader implications of the data. The dynamics driving Tal and Wansink's findings probably explain this tendency as well – statistical terms sound scientific.
Here are three examples of how focusing too much on statistics can cause us to miss the forest for the trees:
Exaggerates the differences: As a group, Millennials are statistically different from Boomers in their use of technologies such as Twitter. However, "statistically" different does not mean "radically" different. The difference between the attitudes and preferences of Millennials and Boomers can be as low as 10 percentage points (sometimes less), which may be statistically significant but not meaningful from a practical standpoint. For example, if 10% of Boomer and 20% of Millennial decision-makers use social media to communicate with their vendors, the key insight is that a small but meaningful segment of your customers likely follows your Twitter feed today, and that number is likely to grow over time. From an execution standpoint, however, most B2B companies would not (or could not) execute a marketing strategy that took advantage of the slight but statistically significant difference between the groups – they are not different from a business management standpoint.
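To see how easily a modest gap clears the significance bar, here is a minimal sketch of a two-proportion z-test applied to the example above. The sample sizes (400 respondents per group) are hypothetical, chosen only for illustration:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: z statistic for the difference p2 - p1."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)       # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical survey: 10% of 400 Boomer vs. 20% of 400 Millennial
# decision-makers use social media to communicate with vendors.
z = two_proportion_z(0.10, 400, 0.20, 400)
print(round(z, 2))  # ≈ 3.96, well above the 1.96 cutoff for p < .05
```

The test declares the 10-point gap highly significant, yet nothing in that z value tells you whether the gap is large enough to change a marketing strategy – that judgment stays with the analyst.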
Misses macro trends: On a list of attribute ratings, some scores are likely to be statistically different from others. For example, Attribute X is statistically more important than Attribute Y to customers, or Company A does statistically better than Company B on Attribute Z. However, even if there are no statistically significant differences in the data, when Company A consistently performs better than Company B across a range of attributes, from a business management standpoint it is safe to say that something is going on – where there is smoke there is usually fire.
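A simple sign test shows why a consistent pattern matters even when no single comparison is significant. The scenario below – Company A edging out Company B on 9 of 10 attributes – is hypothetical:

```python
import math

def sign_test_p(wins, n):
    """One-sided sign test: probability of at least `wins` successes
    in n fair coin flips (the null of no real difference)."""
    return sum(math.comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# Hypothetical ratings: Company A scores higher than Company B on
# 9 of 10 attributes, with no single gap significant on its own.
p = sign_test_p(9, 10)
print(round(p, 4))  # ≈ 0.0107 — the overall pattern is unlikely by chance
```

If the two companies were truly equal, a 9-of-10 sweep would happen only about 1% of the time – the macro trend carries evidence that the attribute-by-attribute tests miss.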
Creates artificial precision: Statistics were designed to compare relatively objective measurements – the length of nails coming out of a factory, the temperature at a weather station in Greenland, the average price of a home, and so on. Applying these principles to something as subjective as human preferences or attitudes creates a degree of artificial precision. People interpret questions about responsive service, innovation, and similar topics differently, and the same person's perceptions can vary depending on the context of their day. Furthermore, when evaluating complex products and services, B2B decision makers cannot isolate their feelings about one attribute when providing a rating for another. The law of averages normalizes most of these issues, and the general direction and magnitude of ratings questions will reflect market realities. The statistics, however, are not carved in stone.
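A quick simulation illustrates the law-of-averages point: even when each respondent's rating carries heavy subjective noise, the sample averages still recover the true direction. The true scores, noise range, and sample size here are all hypothetical:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical: two attributes with true mean ratings of 7.0 and 6.0
# on a 10-point scale; each respondent adds +/- 2 points of noise.
def sample_mean(true_score, n=300, noise=2.0):
    return sum(true_score + random.uniform(-noise, noise)
               for _ in range(n)) / n

a = sample_mean(7.0)
b = sample_mean(6.0)
print(a > b)  # the averages preserve the true ordering despite the noise
```

Individual responses are unreliable, but the direction and rough magnitude of the averages are trustworthy – which is exactly the level of precision the data can honestly support.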
Despite these drawbacks, statistical analysis should be a part of your analysis plan – it can provide useful insights, help you avoid red herrings, and uncover nuances in the data. However, it is only one of many factors to consider when drawing conclusions and inferences from the data. An accurate and actionable interpretation of any data set requires evaluating the overall trends in the data and incorporating existing knowledge, expertise, and judgment. Only when you combine all of those things does research become a useful tool for making and executing better business decisions.