Joe Radwich
Vice President

Who you are: What you say
08.28.2017

Over the past two decades we've helped many B2B vendors refresh their brand platforms. We've noticed that some B2B vendors struggle to differentiate the themes and characteristics that can serve as the pillars of a brand platform from those that may be critical to the market but do not represent sustainable or unique brand positioning.

To help clients distinguish between the two, we use a simple construct that distills things down to the core distinction.

Although almost a cliché at this point, IBM still stands out as an example of effective brand management and continues to illustrate the difference between brand development and market messaging.

We all know the story of how IBM moved from typewriters, to mainframes, to desktops, to the internet, and beyond. We know that the core of its brand platform is using technology to improve business productivity, and that what drives productivity has evolved over time. After all, it's International Business Machines – not International Business Typewriters.

But did you know that in 2015 IBM bought The Weather Channel's analytics and modeling technology (and rents it back to The Weather Channel)? IBM recognized that short- and long-term weather patterns have the potential to disrupt supply chains, manufacturing, deliveries, even purchases. The Weather Channel's technology enables IBM to incorporate predictions about weather and climate conditions into the forecasting models it builds for its clients. These insights in turn help clients plan for disruptions and improve overall productivity.

IBM can talk about this new capability and how it helps businesses adapt to the challenges brought on by climate change. Doing so takes advantage of the general awareness, increasing urgency, and broad media coverage of the implications of a changing climate.

However, while IBM may take advantage of the current visibility of, and interest in, adapting to a changing climate, IBM will never incorporate predicting the weather into its brand platform. Weather analytics are just a tool that IBM uses. Yesterday it was typewriters. Today it's predictive analytics that forecast the impact of climate change on business operations. Five years from now there will be other issues and new tools to talk about. IBM will talk about those new tools. But its brand will be the same – it will help clients be more productive.

Unfortunately, examples like this give a false sense of how easy it is to determine what makes sense as a brand platform theme and what should be used as a point-in-time messaging theme. In sectors closer to the commodity end of the continuum, there is often less distinction between the vendor and the product. This makes defining a brand platform more challenging, but no less important. There are also some themes that have the potential to be a brand platform for one business but not another. Sustainability provides a good example of this duality.

The simple Who you are: What you say construct provides a starting point for evaluating which category a potential brand characteristic falls into. The following provides a brief example.

Hospitals factor HIPAA compliance considerations into almost every decision they make. As a result, companies that sell technology solutions to hospitals might consider making HIPAA compliance part of their brand platform.

Using the construct, they would ask:

  • Would helping hospitals stay HIPAA compliant continue to be true even if we changed our offerings?
  • Is HIPAA compliance core to what we do, or a byproduct of it?
  • Would it still have value if staying HIPAA compliant became less challenging?
  • Will HIPAA compliance pose fewer challenges ten years from now?

The answers to these questions will vary by vendor and circumstance. However, for most technology vendors, HIPAA compliance is important to deliver and to message, but it is not something that belongs in a brand platform.

Refreshing a brand, or creating a new one, requires a much more rigorous analysis than the simple questions in this construct. However, a brand refresh can also put individuals and teams into analysis paralysis. The construct's questions can help teams get unstuck when they feel overwhelmed and provide a guiding principle that keeps everything in perspective.

Joe Radwich
Vice President

Little Data: When B2B marketers don’t have access to big data
04.10.2017

While B2B marketers wait for their organizations to adopt big data tools, they can leverage a range of cost-effective data sources to inform decisions that move their companies forward.

The following are some of the low-cost or free data sources (little data) that B2B marketers, product managers and strategists can use to evaluate market opportunities, competitors, customer needs and overall market trends.

  • Hoovers/D&B
  • US Census
  • Publicly available reports/articles
  • Mining internal data
  • Interviewing sales reps and account managers
  • Deeper read of customer survey data
  • Social media and job sites
  • Conducting win/loss interviews
  • Reference class comparisons

Hoovers/D&B (or other list sources)

Hoovers' (www.hoovers.com) self-service prospect-list-building tools can help determine the number of businesses in a segment. Once you set your parameters (e.g. hospitals with at least 500 employees), Hoovers provides counts of the businesses in that segment overall and in sub-segments (e.g. hospitals with 500-1,000 employees). You can refine your search, start over, etc. to get more data. You don't pay unless you purchase a list.

US Census Bureau

The Census provides data similar to Hoovers', but with the advantage of historical data, which lets you see whether a segment is growing or shrinking (www.census.gov/econ/). One downside is that the Census' search tools are clunky.

Publicly available reports/articles (desk research)

B2B marketers need to understand the segments they serve, or plan to enter. Insights into market trends, product use, and market dynamics can be gleaned by reviewing publicly available reports and articles. Government agencies and trade associations produce reports on broad economic trends and share data from market surveys they conduct. Each industry also has its own trade publications that discuss overall trends and challenges.

Analyst reports profile specific markets and product categories. Most analyst firms charge for full access, but there are often nuggets of information in the free synopsis. In addition, their data are often cited in other publicly available sources such as articles and vendor marketing materials (e.g. "we are in the Magic Quadrant").

Many B2B companies issue press releases when they acquire a new customer, make an acquisition, form a partnership, etc., providing insights into the direction a competitor is headed.

Having a reference librarian or researcher can make gathering information from these sources much more effective and efficient – simple Google searches can waste a lot of time on a wild goose chase.

Mining internal data

Sometimes B2B marketers don't know what data they have available because it is spread across the organization in different systems and databases. Conducting an information audit will identify the data available for analysis. Taking it a step further, combining the disparate data into one database or spreadsheet can yield surprising insights through simple cross-tabs or pivot tables. If you have more advanced analysis and reporting tools, all the better. Some companies have staff and tools that make pulling the data together easy. In others, B2B marketers must slog through merging, cutting and pasting data files.
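
As a minimal sketch of what that combining step can look like, here is a short Python/pandas example; the file names and column names are hypothetical placeholders, not a prescribed toolset:

    import pandas as pd

    # Hypothetical exports from two internal systems; the file and
    # column names are illustrative assumptions.
    customers = pd.read_csv("crm_customers.csv")  # customer_id, segment, region
    orders = pd.read_csv("erp_orders.csv")        # customer_id, order_date, revenue

    # Combine the disparate data into one table keyed on a shared identifier.
    combined = orders.merge(customers, on="customer_id", how="left")

    # A simple pivot table: total revenue by segment and region.
    pivot = combined.pivot_table(
        values="revenue",
        index="segment",
        columns="region",
        aggfunc="sum",
    )
    print(pivot)

Even a rough cut like this can surface patterns, such as a segment that over-indexes in one region, that no single system would show on its own.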

Interviewing sales & account reps

Sales and account reps have more direct contact with customers and prospects than anyone else in your company. You can use informal one-on-one conversations, structured round-table discussions or simple online survey tools to explore the requests they receive from customers, competitors they see in the market, solutions in place, etc. Their feedback may be biased towards short-term sales, but it can still provide insights into overall market trends.

Deeper read of customer survey data

VOC and NPS surveys provide a great deal of data beyond the overall score, which often gets all the attention. For example, reading through the full set of open-ended comments provides context and texture around customer needs. Combining other customer data with NPS data lets you look at trends and differences across different types of customers (e.g. product segment, sales volume, tenure as a customer, etc.). Manually creating a spreadsheet that combines the data is cumbersome, but often faster than trying to get it combined in your ERP or CRM system.
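
A minimal sketch of this kind of cut, assuming a hypothetical export that pairs each rating with customer attributes (the column names are invented for illustration):

    import pandas as pd

    # Hypothetical export: one row per response, with customer attributes.
    df = pd.read_csv("nps_with_attributes.csv")  # score (0-10), product_segment, tenure_years

    def nps(scores):
        """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
        return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

    # NPS by product segment instead of one overall number.
    print(df.groupby("product_segment")["score"].apply(nps))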

Social media and job sites

LinkedIn, Salesforce's data.com, and Hoovers provide insights into a competitor's number of employees and revenue. You can see how many staff are devoted to sales and marketing activities, where sales reps are located, etc. Each source will report slightly different numbers, but by looking at more than one you can triangulate on a usable estimate.
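
A toy illustration of the triangulation; the sources are the ones named above, but the counts are invented:

    import statistics

    # Hypothetical employee-count estimates for one competitor from three sources.
    estimates = {"LinkedIn": 240, "Hoovers": 205, "data.com": 260}

    # The median is a simple, outlier-resistant way to settle on one number.
    print(statistics.median(estimates.values()))  # -> 240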

Job sites such as Indeed.com and the hiring section of a competitor's website can indicate whether the company is growing and where it is focused (e.g. is it hiring more marketing people than expected, or a certain type of engineer?).

A search of SlideShare can uncover sales and investor presentations and similar materials that show how a competitor is presenting itself to the market.

Conducting win/loss interviews

Conducting interviews with recent wins and losses provides insights into the market’s decision process, vendor evaluation criteria, and how you compare to competitors. Win/loss interviews are best conducted by a third-party partner. If using an external partner isn’t in the budget, the interviews should be conducted by someone not connected to the sales process.

Reference class comparisons

The goals of data-driven decisions are to predict the future and narrow uncertainty. Reference class comparisons (an analytical approach rather than a data source) can help frame likely future outcomes. The past doesn't necessarily predict the future, but it's a good place to start. Future product introductions, acquisitions, entries into new markets, etc. are likely to unfold much as previous efforts did. And if you predict that a new effort will differ significantly from the past, the comparison forces you to identify the data that supports your assumptions.

Tying it together

The secret sauce of big data is that it combines disparate data to form a conclusion based on the relationships that exist. Big data uses algorithms, models, and machine learning to create an outcome that is greater than the sum of its parts. B2B marketers do this daily using little data, basic tools and their experience and judgement – the original big data machine.

Collecting and combining the data sources is time consuming and tedious. It also doesn’t answer all the questions – every decision is based on incomplete data. However, it’s worth the effort: It provides insights that will help B2B marketers evaluate market opportunities, competitors, customer needs and overall market trends.

Joe Radwich
Vice President

Does your VOC program encourage purchase behavior?
02.07.2017

Your NPS or customer satisfaction survey can increase the frequency and number of purchases a customer makes over time.

Research by Sterling Bone of Utah State's Huntsman School of Business indicates that starting customer feedback surveys with questions about what you do well increases NPS and satisfaction scores in the short term, and can increase the frequency and number of purchases a customer makes with you over the long run.

The boost in NPS or satisfaction is expected – asking people questions like “What was the best part of your experience?” puts them in a more positive frame of mind than asking them to remember how you've fallen short. And ample existing research shows that people in a more positive mood give higher ratings on surveys.

That this positive nudge influences purchase behavior months after the survey comes as a surprise. We usually think about how experiences drive the ratings on surveys, not how surveys may influence behaviors. The researchers have two hypotheses for this effect: asking “what went well?” generates a positive feedback loop by surfacing good memories for the customer; and people unconsciously try to avoid cognitive dissonance – if we said something positive about a product, we don't want to seem inconsistent by not continuing to purchase it.

The findings raise the question: Should you adjust your voice-of-the-customer or NPS program to take advantage of this phenomenon? National brands such as Subway and JetBlue already incorporate these ideas into their customer feedback programs. There are four broad justifications for making the change.

  • Traditional research philosophies reject this type of manipulation and strive to minimize any impact the artifice of a survey has on the results: we don't want to bias the data. The reality is that most research is biased (more on that later). If asking about positive experiences primes some customers to be happier with a product or service, then the opposite is probably also true: a survey that asks customers to focus on problems may prime them to be less happy than they actually are. More research is required to answer that conclusively, but it would be surprising if the phenomenon only works in one direction.
  • There is bias in every data set – the mistake is not recognizing its nature. If you know what it is, you can use judgement to account for it as you make inferences, and set strategy, based on that data. If your VOC program shifts from a problem-finding focus to a positives focus, expect an initial increase in NPS and satisfaction due to the positive bias. Scores will stabilize as the new protocol is repeated over time. The easy mistake would be to forget that the one-time jump in the metrics is due primarily to a change in the survey instrument. But if you account for that jump and reset your baselines, a positive bias in your surveys isn't a problem in and of itself.
  • Some strategists believe it is more effective for companies to focus resources on enhancing what they do well than on potential areas of improvement. The underlying idea is that whatever a company does well is likely its biggest competitive differentiator, and maintaining those strengths should be a priority. Proponents of this approach also point out that identifying and fixing problems does little to attract new customers. If your company embraces this philosophy, focusing your voice-of-the-customer programs on the positive will provide the data you need to strengthen what you already do well.
  • For most companies, voice-of-the-customer programs are a cost center – as much as they help companies in the long term, it is difficult to tie the insights gained to a direct financial benefit. This explains why many VOC programs and other customer surveys are scaled back when budgets get tight. If Dr. Bone's research stands up to further validation, using positive questions to influence purchase behavior may help justify the cost of voice-of-the-customer programs. A/B testing the survey types and correlating them with long-term customer purchase data would provide the insights needed to show the financial impact of your program.

So what is Isurus’ point of view? To start with, we agree with the broader implication that customer surveys are touch points that influence customer opinions. Customer surveys can…

  • Show that you care about customer opinions
  • Feel relevant to customers, or conversely make it seem you don’t understand their needs
  • Be either a pleasant or a tedious experience for customers
  • Show your hand about future intentions
  • Feel like a burden to customers if they get too many
  • Lose credibility if customers never see any changes based on their feedback

As with any customer touch point, you should manage it to ensure a positive customer experience. Customers should believe the survey was worth the effort they put forth, and that you respect their time and opinions.

The decision to use a positive orientation in your VOC program depends on your competitive situation. If all is going well in terms of growth, profitability, etc. a positive orientation towards your VOC program will likely provide some benefit. However, if growth has slowed, customers are leaving, a competitor is making inroads into your market, etc., you need to know what has gone wrong. Your historic strengths may not be as differentiating as they once were and emphasizing them may contribute to your decline.

For more information on the research, see “Mere Measurement ‘Plus’: How Solicitation of Open-Ended Positive Feedback Influences Customer Purchase Behavior,” by Sterling A. Bone et al., Journal of Marketing Research, 2016.

Joe Radwich
Vice President

Win/Loss Analysis – Triangulating on the truth
05.13.2016

Understanding why deals are won or lost can seem like a game of telephone or “he said, she said”: the buyer gives one side of the story, Sales has another, and Marketing brings its own point of view. Who is right? Everyone's partly right and partly wrong. In our experience, the best process combines all available information streams to triangulate on the truth.

Companies often implement win/loss programs when they encounter a rough patch: sales have flattened or declined, or a new competitor emerges with unexpected success. These dynamics create internal insecurities, finger pointing, and at worst, eroded trust between functional teams. Sales feels Marketing is out of touch with the front lines, Product Management believes Sales uses price as an excuse, and so on. To settle the dispute, a third party like Isurus Market Research is brought in to provide an objective, outside perspective. With no vested interest in the outcome, we listen to wins and losses with an unbiased ear (e.g. we can tell whether comments about price reflect actual decision drivers or are a red herring). People talk more openly with a third party than with someone who wants to sell them something. This enables a third party, like Isurus, to elicit insights that are unavailable to internal teams.

While results from win/loss interviews provide significant value, we encourage clients to use the insights judiciously. No win/loss program or research firm can provide the be-all and end-all of reasons for a client's wins and losses. There simply is no single point of truth. The process is analogous to a criminal investigation.

In an investigation, detectives take statements from multiple witnesses, and often from the same witness multiple times. Rarely do the stories match perfectly. There are inconsistencies across witnesses and from the same witness over different conversations. Lawyers and investigators evaluate the circumstances and possible motivations. Then, as a team, they combine these various information streams to develop the most accurate picture possible of what happened. In the end it's the collective knowledge that makes the difference – not any single interview or perceived circumstance.

Effective win/loss programs have the same dynamics and follow the same process. The sales reps have had multiple conversations with the lost prospect, some during the sales process, some after the decision was made. The product team has circumstantial evidence on how its products fit the specifics of the RFP. Marketing has insights into competitive advantages and disadvantages. Outside research firms like Isurus provide insights into motivations, perceptions and additional circumstantial evidence. Taken collectively, these provide the most accurate picture of why the prospect made the decision it did.

This is a mix of good and bad news in terms of setting expectations for a win/loss program.

The bad news first: there are no silver bullets. Often, by the time organizations engage a third party for a win/loss program, they're frustrated and looking for a single, definitive thing they can do to improve their sales outcomes. Unfortunately, the solution is more complicated than that. If a single factor made a major difference, most companies would have figured it out long before looking for outside help. In addition, getting the most out of a win/loss program takes work. The functional teams need to spend time together dissecting lost deals. This includes thinking through all of the information available and figuring out how to square the circle when Sales heard one thing and the win/loss consultant heard another.

Now the good news: the effort will bear fruit. When functional teams regularly work together to evaluate why they win or lose deals, they start to see patterns. They can then make systematic changes that lead to long-term success, which is far better than reacting to the one-off circumstances of any individual lost deal. The process also brings the functional teams closer to the end customer and helps them develop a shared understanding of the value the company brings to the market.

The takeaway: Relying on a partner for a win/loss program will get you part of the way there. But the most successful programs get more than simple buy-in from the functional teams; they unite them into an investigative team. So before you evaluate potential win/loss vendors, evaluate your internal teams and how you will work together to triangulate on the truth.

Joe Radwich
Vice President

Are Rising NPS Scores a Red Flag?
03.28.2016

Reports that track NPS across a range of sectors recently noted that scores in the enterprise software sector have increased across the board – most vendors have seen an uptick. While good news, this raises some questions about the accuracy of NPS scores.

Glass Half Full – Rising NPS Scores Reflect Overall Improvements

Bain created the NPS approach to help companies better understand how well they meet customer needs and to provide a structure for addressing areas in need of improvement. The approach has gained widespread use since being popularized in the book The Ultimate Question. Like other management approaches that made material improvements to the way businesses operate (Lean, Six Sigma, Agile), NPS may have cultivated a genuine shift in executive recognition of the importance of listening, and responding, to customers. If so, this explains the industry-wide increase in NPS scores: by responding to the insights gained in NPS programs, businesses are turning Detractors into Promoters.

We’d like this to be true, but have some concerns.

Glass Half Empty – Increases May Reflect Data Collection Issues

The systematic rise in NPS scores may indicate a methodological problem with data collection – ironically, one that NPS was designed to address. Back in 2009, Bain and Fred Reichheld identified two failures of typical customer satisfaction programs:

  • The average customer satisfaction survey had become an arduous experience for customers due to its length and the minutia it covered. This resulted in low response rates and a bias towards happy customers who were motivated to take the survey.
  • Many customer satisfaction programs had become rote: An annual program that executives reviewed once a year and then put on a shelf, with the results never making it to frontline employees.

The NPS approach addressed these problems by proposing a very short survey (3-5 questions) and delivering the results in real time (or close to it) to front-line employees so they could make the needed changes.

Unfortunately, NPS may be a victim of its own success. The Ultimate Question did indeed become the ultimate question – every sector asks it. And with the rise of easy-to-use, inexpensive online survey platforms, the average person gets NPS surveys weekly (if not daily) from a range of companies they interact with personally and professionally: their banks, hotels, enterprise software vendors, doctors, auto repair shops, energy providers, restaurants, etc. Add in the other online surveys they receive, and people are now bombarded. As a result, overall response rates to online surveys continue to decline. No single survey causes the problem, but in a classic tragedy of the commons, the abundance of surveys has lessened their value for everyone.

With decreasing response rates, the same challenges that plagued traditional customer satisfaction surveys creep in – happier customers are more likely to take the survey, increasing the ratio of Promoters and raising NPS scores.

Online survey platforms make data collection easier than ever, so companies move forward with the survey side of the NPS equation (the easy part) without operationalizing how they will use the data (the hard and, according to Bain, more important part). In many businesses, NPS is run the way customer satisfaction surveys used to be – it's just a metric tracked quarterly or annually, reviewed by management, and never operationalized on the front lines.

NPS remains a useful metric, but may have lost some of what differentiated it from the previous ultimate questions – loyalty, affinity, future purchase intent, satisfaction, etc.

Regaining the Value of NPS

If NPS programs are at risk of providing false positives and therefore losing value as a management tool, what can be done to reinvigorate them? We see two potential solutions.

Find the detractors: When NPS scores are going up but response rates are trending down, it's possible detractors are dropping out of the respondent pool. Companies can adjust their data collection efforts to ensure a more representative mix of customer experiences, by adding a telephone survey and by having managers, product marketers and executives reach out to random customers as a reality check.

Change the NPS calculation: In the six years since NPS was developed, the dynamics around recommendations may have evolved. Social media and online commerce may be lowering the threshold required for the average person to provide a recommendation. Almost every online purchase asks you to provide a rating or review, LinkedIn daily suggests colleagues you can recommend, retweets and Likes are binary endorsements, etc. The constant rating of experiences may be leading to de facto grade inflation. If that's the case, changing the NPS criteria for detractors to 0-7 or 0-8 instead of 0-6 may provide a more realistic view of the health of the customer relationship.
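
To make the arithmetic concrete, here is a minimal Python sketch, with made-up ratings, of how the standard calculation shifts when the detractor ceiling is raised:

    def nps(scores, detractor_max=6):
        """NPS = % promoters (9-10) minus % detractors (0 through detractor_max)."""
        promoters = sum(1 for s in scores if s >= 9) / len(scores)
        detractors = sum(1 for s in scores if s <= detractor_max) / len(scores)
        return 100 * (promoters - detractors)

    responses = [10, 9, 9, 8, 8, 7, 7, 7, 6, 5]  # invented ratings

    print(nps(responses))                   # standard 0-6 detractors -> 10.0
    print(nps(responses, detractor_max=7))  # raised ceiling 0-7 -> -20.0

The same respondent pool swings from a positive to a negative score, which is the point: where you draw the detractor line materially changes the health signal you read.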

In the past six years NPS programs have changed the way companies think about, and measure, their relationships with customers, largely for the better. But as with any tool or approach, it shouldn’t be followed blindly. All good techniques evolve with changes in conditions and circumstances and it may be time to consider refinements to NPS programs.

Joe Radwich
Vice President

Use Research to Add Foxes to Your Hedgehogs
02.22.2016

A year of political polling, talking heads and pundits has given a bad name to forecasting and forecasters – especially the segment known as hedgehogs. But for all their flaws, hedgehogs can guide their organizations as well as, and perhaps better than, their counterparts – the foxes. In truth, successful organizations have a mix of both.

The research and literature on the science and art of forecasting divides the world into hedgehogs and foxes. The labels and definitions come from an essay by the philosopher Isaiah Berlin.

  • Hedgehogs know one big thing. They hold a specific worldview through which they interpret conditions and situations. In the extreme, hedgehogs believe all things are guided by one big underlying principle (capitalism, communism, the consumerization of technology) to the point of dogma. Even more moderate hedgehogs tend to have biased perceptions, which makes their record as forecasters no better than chance. Hedgehogs come in all political, philosophical and business shapes and sizes.
  • Foxes know many little things. They tend to withhold judgement and approach the world from a neutral standpoint, or at least a more neutral one relative to hedgehogs. Because they aren't predisposed to view the world through a specific rubric, they see a wider set of dynamics shaping situations and conditions. Analyses of forecasting success (getting it right) show that foxes do a better job than hedgehogs.

This oversimplification of a simplification makes it seem as if foxes would make better managers and leaders than hedgehogs. That isn't necessarily true. The benefit of being a hedgehog or a fox depends on the context and the objectives at hand.

The traits that make a hedgehog less effective as a forecaster help them be effective leaders. Building internal momentum for a new product, entering a new market, reaching a revenue goal or running a startup requires passion and a degree of unbridled optimism and bravado. A hedgehog's worldview helps them create a clear, easy-to-communicate narrative of what is happening and what needs to be done. This helps build enthusiasm and commitment among the troops. The fox's narrative, which to a degree says “it's complicated, it depends on a number of things, and here is the most likely outcome, but not a definite one,” isn't always all that compelling – even if it presents a more accurate description of the world.

The risk, of course, is that the hedgehog's approach can blind them, and their organization, to competitive and market threats. Many market failures stem in some part from doggedly pursuing a course at odds with market dynamics. This risk is acute when conditions or markets are in transition. Startups can overestimate the readiness of the market and the paradigm-shifting power of their new product or service. Companies that have been successful doing the same thing for decades dismiss new competitors and new ways of doing things. Compounding the problem, foxes can turn into hedgehogs over time. Internal teams start to view the world from the perspective of their product rather than the other way around. They drink the proverbial Kool-Aid. This is human nature: when you live and breathe your product or service every day, it shapes how you understand your customers and markets.

The best leaders and management teams have a mix of traits from both camps – they possess a clear narrative vision informed by a thorough understanding of the dynamics of their markets. The problem is that individuals and teams with these traits are hard to come by. There are only so many Jobs, Bezos, Welches, and Buffetts to go around.

If you worry that your organization has tilted too far into the hedgehog camp, market research can bring foxes, or at least a fox's perspective, back into your strategies. Done well, secondary and primary research provides an unbiased understanding of the dynamics underlying your market: your customers' and prospects' needs and priorities, and their opinions and perceptions of their options (including your product or service). With this you get the best of both approaches: the hedgehog's perspective can rally the troops; the fox's perspective can point them in the right direction.

Joe Radwich
Vice President

How to Think Like George Washington and Abraham Lincoln
02.03.2016

In honor of the upcoming Presidents' Day, here's a look at one of the most respected traits Washington and Lincoln shared, one that can provide guidance to everyone from CEOs to product managers and marketing teams.

They saw the world the way it was, not the way they wanted it to be, or thought it ought to be.

This trait is universally viewed as a key to their ability to develop effective strategies at two of the most critical points in our country's history. It gave them a clear and objective view of the situations they faced and a realistic estimate of the strengths and weaknesses of their positions.

This may sound easy, but for many of us, our desires, biases and enthusiasm for our products, companies and teams color our perspectives and prevent us from seeing things the way they actually are. Here are two techniques for viewing our markets, and our positions within them, more like Washington and Lincoln did, and one from Benjamin Franklin, who, while not a president, had plenty of sage advice.

1. Identify your biases: Most of us are biased in the areas that matter most to our success. A simple way to identify these biases and blind spots is to make a list of the conditions that must be true for your product or strategy to be successful. Examples could include that the market considers a problem a priority to solve relative to other challenges, that the market is dissatisfied with existing solutions, or that your sales reps will set 10 appointments a week. Once you have the list of conditions, objectively review the assumptions and evidence underlying each condition. The areas where you may be over- or under-estimating threats and opportunities will become apparent.

2. Make reference class comparisons: Individuals and teams tend to be inherently optimistic about their plans and strategies. We expect things to go well, and closer to the best-case scenario than to the worst. Reality usually falls somewhere in between. Most new endeavors will encounter the same challenges as previous internal efforts and/or as businesses that have faced similar situations. When planning product introductions, acquisitions, and sales & marketing campaigns, look back at previous internal efforts, and at case studies and examples outside the organization, to see how those efforts panned out, and use that as the starting point for your forecasts. All things equal, your new product, campaign, etc. will likely follow a similar path. If you think your efforts will go better or faster, make a list of the reasons you believe they will differ from the norm. Then use Tip 1 to make sure you don't have any blind spots or faulty biases.

3. Use Franklin comparisons: When most of us look at the strengths of our products or favorite sports teams, we tend to forget that many competitors share those same strengths or have different strengths that compensate. To compare your offering to competitors (direct or indirect), list on the left side of a piece of paper what you think your strengths and weaknesses are. On the right side, list what the market thinks your competitor's strengths and weaknesses are, not what you think they are. When generating the competitor list, Tip 1 can help eliminate your biases. Once the two lists are complete, compare them and cross out any points of parity and offsetting conditions, e.g., the competitor's brand equity offsets your better functionality. This exercise helps identify the mindsets and perceptions your sales & marketing efforts will encounter on the ground; a toy version of the crossing-out step is sketched below.
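
For those who prefer a worksheet to paper and pencil, here is a minimal Python sketch of the crossing-out step. The strengths listed are invented placeholders, and weighing offsetting conditions still requires human judgement:

    # Toy Franklin comparison; all entries are invented placeholders.
    our_strengths = {"ease of use", "customer support", "price", "integrations"}
    their_strengths = {"brand equity", "customer support", "price", "global reach"}

    # Cross out points of parity: strengths both sides share.
    parity = our_strengths & their_strengths
    print("Cross out (parity):", parity)                             # customer support, price
    print("Our remaining edge:", our_strengths - their_strengths)    # ease of use, integrations
    print("Their remaining edge:", their_strengths - our_strengths)  # brand equity, global reach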

These three techniques will not make you as brilliant a leader as Washington or Lincoln, but they will help you see your market as it is, not as you want it to be. To paraphrase Franklin, they will make you wise and help you develop strategies that avoid common pitfalls and set realistic expectations.

Joe Radwich
Vice President

Design Thinking in Research
01.21.2016

Although it’s been around for decades, Design Thinking is enjoying a burst of heightened awareness as recent articles and books advocate the approach for everything from reaching corporate objectives, to developing advertising and value propositions, to achieving your personal New Year’s resolutions.

As a research firm we applaud this reawakening to the value of design thinking – its principles have always been a central part of thoughtful primary research designs.

Design thinking traces its origins back to the late sixties and early seventies, when scientists and engineers began talking about solution thinking. Commercial market research as we know it today developed around that same time. Whether the two disciplines influenced each other or are an example of convergent evolution, the results are the same – both design thinking and formal market research share a philosophical approach to understanding customers and markets.

The five broad steps in design thinking are:

  • Empathize: Gain an understanding of the user, their world and their needs
  • Define: Define the real, underlying problem(s), not necessarily the one on the surface
  • Ideate: Generate a number of different solutions that could address the problem
  • Prototype: Create a prototype(s) of the solution(s) that you feel best addresses the problem
  • Test: Get reactions to the prototype(s).

Well-designed research follows similar principles. The catch phrases and analogies used in market research textbooks, manuals, presentations, etc. echo design thinking.

  • The problem the client comes to you with is not the problem
  • The customer doesn’t want a drill; they want a hole
  • Move from a product/engineering orientation to a market orientation
  • Taking the outside view
  • Provide the voice of the customer

Analogs of the broad components of design thinking are built into large multiphase studies, dynamic Agile research engagements, and are even present in standalone studies.

  • Multiphase research: These studies typically start with exploratory qualitative research to understand the customer’s day-to-day processes, needs, wants and challenges. The development team takes what they learn and develops/refines the solution for the market’s problems. The solution is then evaluated with more qualitative research or a quantitative survey. Further research is conducted as needed.
  • Agile research: Agile research is a phased approach that breaks the design into multiple, smaller research components within a compressed schedule.
  • Standalone study: In a single study, discussion guides and surveys include an exploration of how the customers do things today and the challenges they face before testing reactions to a solution, ad, etc.

Although design thinking principles appear obvious and easy to follow, they can be hard to apply in the real world when an internal team has developed a potential solution it feels strongly about. The development process and accompanying research can end up focused on the solution rather than the customer: teams compare solution features and focus questions on what the customer thinks of the product. This can result in a false positive: in a side-by-side comparison the solution looks superior to competitors, and customers at the trade show seemed excited about it, but once introduced, sales fall short of expectations.

Steve Jobs and Apple provide one of the best-known examples of design thinking – and research. It is a commonly believed myth that Apple never does any research. It does; it just focuses research on customer needs rather than on reactions to product ideas. Jobs believed successful products require a deep understanding of the customer's world. We agree, and bringing this “outside in” perspective is the value research provides. Some of our clients are very internally focused or have a strong engineering background. At the beginning of a study they question whether we have the technical background to fully understand their products. We tell them: we don't have to; we're there to help them understand their customers.

Regardless of the challenge—developing a new advertising campaign or improving your retirement planning—the principles used in research and design thinking can help you come to a solution that addresses root causes and drivers rather than surface appearances.

Joe Radwich
Vice President

Better Forecasting with Historical Data and Judgment
08.24.2015

A recent article on forecasting presents historical data and judgment as an either-or choice. We disagree. In our view, the art of forecasting requires both, along with an understanding of how much weight to place on each depending on the circumstances.

The article outlined research conducted by Dr. Matthias Seifert and his team at the IE Business School in Spain. They evaluated the value of historical data versus judgement in forecasts for volatile sectors such as fashion and entertainment. These fast-moving markets provide a compressed microcosm for studying how demand unfolds at a slower pace in other markets – similar to how geneticists study fruit flies to understand the transmission of genes in humans. Their data indicate that in volatile markets, historical data provide limited value for predicting the success of new products. For example, forecasts based on a musical artist's last album do a poor job of predicting the success of the next one. The forecasts become more accurate when based on predictor variables such as the amount of marketing behind the first single and the other artists releasing music at the same time.

This may seem obvious, but that is due to the simplicity of the example. The same dynamic exists in more complex markets and situations; it's just harder to see. The business literature is strewn with examples of companies that based their forecasts and strategies on the status quo rather than on variables that predicted a change in demand, and paid the price (Kodak, BlackBerry and Microsoft, to name a few). The obvious variables to consider include the current competitive set, existing products, and marketing activity. Innovators and disruptors such as Steve Jobs and Elon Musk take the process a step further and evaluate factors such as products and competitors that may enter the market, and what people are trying to accomplish rather than the specific products they use. Seeing the broader picture comes naturally to people like Jobs and Musk. For the rest of us, the book Winning the Long Game: How Strategic Leaders Shape the Future by Steven Krupp and Paul J. H. Schoemaker offers pointers and practices for looking beyond the status quo.

One of the primary challenges with identifying predictor variables is that they aren't always evident. As a researcher, Dr. Seifert had the luxury of comparing past actions and circumstances with outcomes. The second challenge is that it is easy to get carried away identifying tenuous predictor variables and then assigning them more power than they likely have, in order to create a plausible scenario that fits the outcome we are seeking. Reference-class comparisons and historical data provide a disciplined way to rein in this tendency: systematically evaluate historical trends and what other organizations have experienced, and ask, why do we expect things to be different? This exercise puts your hypotheses and variables in perspective and clarifies the degree to which they are likely to make the future different from the past.

At Isurus we include predictor variables and historical data when designing forecasting surveys. Predictor variables include brand awareness, information sources, price, etc.; pain points, desires and motivations fall into this category as well. It's a cliché, but people don't want to buy a drill – they want a hole. Historical data and reference class comparisons aren't perfect predictors of the future, but in most cases they land within a standard deviation or two of it: if consumers or businesses have always purchased on price, they are likely to continue to do so in the future.

We think the big takeaway is that the best forecasts use both historical data and judgement. To ensure you systematically include both, ask these three questions when creating your next forecast:

  • What would history predict the future would be?
  • What factors got the market to its current state? (Level of marketing, competitors, technology, features and functionality, etc.)
  • What might be changing, or what might we proactively change, that would make the future different from the past? (New delivery model, combination of features, etc.)

Joe Radwich
Vice President

Spurious Correlations: Shark attacks and sales at all-you-can-eat buffets
07.23.2015

Did you know that the Total Revenue Generated by Arcades correlates with the Number of Computer Science Doctorates awarded in the United States? Makes sense, right?

Hold on before you start using this fact to impress people at cocktail parties. It comes from Tyler Vigen's website Spurious Correlations (also available as a book on Amazon). Vigen mines data and plays with the X and Y axes to create ridiculous but fun correlations, such as the link between margarine consumption and divorce rates in Maine, and the link between drownings in pools and movies starring Nicolas Cage.

As silly as these correlations are, they illustrate how our minds automatically look for relationships between events and create explanations for them. It's easy to ignore the base rate and craft a plausible story that people who like video games like computers, or that the use of margarine in the 1960s was tied to the weakening of traditional family values. This tendency is built into our genes and has helped our species survive for millennia – it's better to see and overreact to false positives than to miss a real threat. Unfortunately, in modern times it leads to things such as confirmation bias, which makes us see trends in the data that support the story we want to tell.
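
A quick illustration of how easily such correlations arise: any two series that drift in the same direction over the same period will show a strong Pearson correlation, causal link or not. The numbers below are invented:

    import numpy as np

    # Two unrelated, invented series that both trend upward over ten years.
    years = np.arange(2005, 2015)
    arcade_revenue = 1.2 * years + np.random.normal(0, 0.5, len(years))
    cs_doctorates = 0.8 * years + np.random.normal(0, 0.5, len(years))

    # The shared trend alone produces an impressive-looking correlation.
    r = np.corrcoef(arcade_revenue, cs_doctorates)[0, 1]
    print(f"Pearson r = {r:.2f}")  # typically > 0.9, despite no causal link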

As marketers and strategists we must be vigilant to ensure we see the data for what it is, and not what we want it to be. We also need to think about how others will interpret the data we present in charts and reports when we aren’t there to explain them. Charts that exaggerate differences for the point of discussion can easily be misconstrued the further they are distributed across the organization.

So be careful with your data – you don't want to be the one in your organization who identifies the correlation between shark attacks and sales at all-you-can-eat buffets.