Growth & Strategy

Competitive Analysis: Why You Keep Studying the Wrong Competitors

In the summer of 2007, BlackBerry controlled nearly fifty percent of the U.S. smartphone market. The company had 12 million subscribers, its stock was trading near all-time highs, and its co-CEO Jim Balsillie was so confident in BlackBerry's position that when asked about Apple's iPhone launch in June, he told the Toronto Star, "I haven't seen one." His co-CEO, Mike Lazaridis, watched Steve Jobs demonstrate the device at Macworld and fixated on what he knew about network engineering: "It's going to collapse the network." BlackBerry's COO Larry Conlee assessed the competitive threat and concluded the iPhone had "rapid battery drain and a lousy keyboard."

They were doing competitive analysis. And every single conclusion they reached was correct.

The iPhone did drain batteries faster. The touchscreen keyboard was objectively slower than BlackBerry's physical one. The initial device couldn't even run third-party apps. By every metric BlackBerry's team evaluated, the iPhone was an inferior product for their core customer. Enterprise IT departments weren't going to abandon encrypted BlackBerry servers for a consumer toy that couldn't even copy and paste text.

But here's what their competitive analysis never surfaced: the real threat wasn't the iPhone. It was what the iPhone implied. Apple wasn't building a better BlackBerry. It was building a platform — a pocket-sized computer that would attract hundreds of thousands of developers who would build the applications that would redefine what a phone was for. And when Google launched Android the following year as a free, open-source operating system that any manufacturer could use, the competitive ground shifted in a direction BlackBerry's analysis couldn't even frame. BlackBerry was analyzing handset features. The market was moving to ecosystems.

When Lazaridis complained to the New York Times about the iPhone's keyboard — "I couldn't type on it and I still can't type on it, and a lot of my friends can't type on it. It's hard to type on a piece of glass" — he was making an accurate observation about a metric that was about to become irrelevant. By 2013, BlackBerry's market share had collapsed from nearly fifty percent to less than three percent. The company wrote down $4.7 billion on unsold inventory of the BlackBerry 10, a device that arrived six years too late to a market that had moved on without it.

The analysts at BlackBerry were not stupid. They were thorough, well-resourced, and deeply knowledgeable about their industry. And that is precisely what made their competitive analysis so dangerous. They studied the competition through the lens of their existing advantages: security, physical keyboards, enterprise integration. The data confirmed exactly what they already believed. The iPhone was inferior. Android was fragmented. Neither could match BlackBerry's enterprise features. All true. All irrelevant. The entire competitive framework was a machine for confirming assumptions that happened to be correct on the metrics that happened not to matter.

That's not a BlackBerry problem. That's a competitive analysis problem. And if you're doing competitor research right now, there's a good chance the same machinery is running inside your spreadsheet.

What Competitive Analysis Actually Is (and What It Pretends to Be)

Competitive analysis is the systematic evaluation of your competitors' strengths, weaknesses, strategies, and market positions to inform your own strategic decisions. In theory, it's a tool for seeing the terrain clearly. In practice, it's one of the most reliable engines of self-deception in business.

The reason is structural, not personal. Every competitive analysis begins with a framing decision: Who are our competitors? Which dimensions matter? What data should we collect? These framing decisions feel neutral. They feel like preparation. But they are the decision. The moment you define who belongs in the competitive set and which attributes to evaluate, you have already constrained the analysis to confirm a particular view of the world. If BlackBerry defines the competitive set as "enterprise smartphone manufacturers" and the evaluation dimensions as "security, email integration, keyboard quality," the iPhone loses every comparison. The analysis isn't wrong. The frame is.

Michael Porter's Five Forces model (the most widely taught competitive analysis framework in business schools since it was introduced in 1979) instructs companies to evaluate five categories of competitive pressure: the intensity of rivalry among existing competitors, the threat of new entrants, the threat of substitutes, the bargaining power of suppliers, and the bargaining power of buyers. The framework is elegant, and it's useful for understanding the current state of a competitive environment. Its blind spot is what Andy Grove, Intel's legendary CEO, called the "10X force": a change so dramatic that it doesn't just shift the balance of Porter's forces but renders the entire framework temporarily useless. When the 10X force arrives, the companies doing the most rigorous analysis of the existing competitive terrain are often the last ones to see it, because their analytical rigor is anchored to a map that no longer describes the territory.

This isn't speculation. It's a pattern.

The Three Biases That Corrupt Every Competitor Comparison

Competitive analysis goes wrong in predictable ways, and nearly all of them trace back to three cognitive biases that operate below the level of conscious awareness.

Confirmation Bias: The Data You Choose to Collect

Confirmation bias is the tendency to seek, interpret, and remember information that confirms what you already believe. Daniel Kahneman called it "the most pervasive and the most potentially catastrophic of the cognitive biases." In competitive intelligence, it doesn't announce itself as bias. It announces itself as thoroughness.

Here's how it works: A product team at a mid-stage SaaS company decides to evaluate three competitors. They build a feature matrix with dozens of rows, covering everything from integrations to pricing tiers to mobile functionality. They populate the matrix with data from competitor websites, review sites, and sales intelligence tools. The matrix is comprehensive, rigorously sourced, and completely misleading, because the features they chose to evaluate are the features they already build and the dimensions where they already win. The competitor's advantage in onboarding speed, or community engagement, or developer ecosystem doesn't appear in the matrix. Not because the team deliberately excluded it. Because the frame of the analysis made those dimensions invisible.

Research from Harvard Business School's Max Bazerman found that firms implementing debiasing strategies saw an average increase of seven percent in return on assets. That number suggests the scale of what confirmation bias costs: not through dramatic failures, but through the quiet, ongoing distortion of strategic decisions based on data sets that feel complete but aren't.

A European tire manufacturer called Eurotires illustrates the pattern. When a rival, Eastires, began gaining market share, Eurotires' analysis attributed the gains to short-term pricing strategies rather than improved product quality. Internal analysts focused on negative reviews of Eastires' products and dismissed positive customer feedback as anomalous. The conclusion, that the competitor's gains were unsustainable, felt data-driven. It was, in fact, a curated selection of data points that happened to support the conclusion the team had already reached before the analysis began. Eurotires lost market leadership while studying data that told them they were winning.

The Halo Effect: When Reputation Replaces Analysis

Phil Rosenzweig, a professor at the International Institute for Management Development in Switzerland, spent years studying how people evaluate company performance. His conclusion, published in The Halo Effect, is that nearly all assessments of corporate success are contaminated by a single cognitive distortion: when a company is performing well, observers attribute that performance to visionary leadership, brilliant strategy, and strong culture. When performance declines, the same leadership is suddenly described as arrogant, the same strategy as reckless, and the same culture as toxic.

The halo effect poisons competitive analysis in both directions.

When you analyze a market leader, the halo inflates every dimension of their performance. Their product isn't just good, it's the standard. Their marketing isn't just effective, it's the playbook. Their culture isn't just functional, it's the model. The result is that competitive analyses of market leaders consistently overestimate the coherence and intentionality of the leader's strategy. You're not seeing the company as it is. You're seeing a story about why they're winning, and that story shapes what you notice, what you measure, and what you conclude.

When you analyze a struggling competitor, the halo works in reverse. Everything about them seems broken: leadership, product, positioning. The result is systematic underestimation of the threat they pose. Eurotires analyzing Eastires. Blockbuster analyzing Netflix. Borders bookstores analyzing Amazon in the early 2000s, when the online retailer was still losing money and had never turned an annual profit. The inverse halo turned a genuinely dangerous competitive threat into a punchline.

Rosenzweig's most important insight is that the halo effect is not sloppy thinking. It is the default mode of evaluation. Without deliberate intervention, every competitor assessment you produce will be contaminated by it: your analysis of the market leader will be too generous, your analysis of the underdog too dismissive, and your analysis of yourself somewhere in between, calibrated less to evidence than to whatever story your recent performance is telling.

The Spotlight Effect: Overestimating How Much Competitors Think About You

In 2000, psychologist Thomas Gilovich and his colleagues published a landmark study in the Journal of Personality and Social Psychology demonstrating what they called the spotlight effect: people systematically overestimate the degree to which others notice and evaluate their actions and appearance. In one experiment, participants forced to wear an embarrassing T-shirt estimated that roughly half of the people in the room had noticed the shirt. The actual number was closer to twenty-five percent. People anchor on their own vivid experience, the self-consciousness of wearing the shirt, and fail to adjust for the fact that other people have their own concerns, their own agendas, and their own embarrassing T-shirts to worry about.

The spotlight effect translates directly into competitive strategy. Founders systematically overestimate how much competitors are thinking about them, and that overestimation distorts decisions.

When a competitor launches a feature that overlaps with your roadmap, the instinct is to interpret it as a response: they saw our traction in that segment and moved to counter it. When a competitor adjusts pricing, the instinct is to interpret it as a strategic move in your direction: they're trying to undercut us. In reality, the competitor is almost certainly responding to their own internal pressures (their board, their burn rate, their churn data, their product team's backlog) and your company barely registered in their decision.

The cost of this misperception is reactive strategy. Teams that overestimate competitor attention spend disproportionate energy on defensive moves (matching features, countering pricing, responding to launches) instead of building from their own thesis. They play the game their competitor is playing instead of the one they could win. The spotlight effect turns competitive analysis from a strategic tool into a reactive loop, where every competitor action demands a response because the team assumes every competitor action was, in some way, aimed at them.

The Graveyard of Companies That Analyzed the Wrong Threat

The most expensive failures in competitive analysis don't come from ignoring competitors. They come from studying the wrong ones with extraordinary rigor.

Kodak is the textbook case. In 1975, Kodak engineer Steve Sasson built the world's first digital camera: a toaster-sized device that captured images at 0.01 megapixels and stored them on a cassette tape. When Sasson presented the prototype to management, the response was a mixture of curiosity and unease. The technology threatened Kodak's most profitable business: film. Rather than viewing digital photography as an opportunity, leadership classified it as a threat to be managed. They buried the technology, protected the existing business, and continued conducting competitive analyses focused on Fujifilm, Agfa, and the other companies competing in the chemical film market.

For twenty years, Kodak's competitive analysis was thorough, sophisticated, and aimed at precisely the wrong terrain. They tracked Fujifilm's pricing, monitored Agfa's distribution, evaluated every incremental improvement in film chemistry. Meanwhile, the existential threat, digital imaging, was advancing on a trajectory that Kodak itself had initiated. By the time the company acknowledged that digital was the future and not a niche, the market had already been claimed by companies that had never manufactured a single roll of film. Kodak filed for bankruptcy in 2012, holding more than a thousand digital imaging patents it had been too late to commercialize.

Sears followed an identical pattern in retail. For decades, Sears conducted exhaustive competitive analyses of JCPenney, Kmart, and the other department stores it viewed as its peer set. The analyses were sophisticated: market share data, customer demographics, pricing comparisons, store-level performance metrics. While Sears was studying department stores, Amazon was building infrastructure. While Sears tracked JCPenney's quarterly comps, Amazon was investing billions in logistics, data, and a customer experience that had nothing to do with physical retail. By the time Sears expanded its competitive frame to include e-commerce, its competitive position was already unrecoverable. The company filed for bankruptcy in 2018.

Borders did the same, analyzing Barnes & Noble while Amazon built the Kindle. Blockbuster analyzed Hollywood Video while Netflix built a recommendation engine and a streaming platform. In every case, the competitive analysis was rigorous, well-resourced, and aimed at the competitors that were easiest to see, because they competed on the same dimensions, in the same channels, for the same customers. The disruptive threats competed on different dimensions entirely, and the existing analytical framework had no way to surface them.

Andy Grove recognized this pattern earlier than most. In his 1996 book Only the Paranoid Survive, Grove described what he called "strategic inflection points": moments when a 10X change in one of six competitive forces (competition, technology, customers, suppliers, complementors, or regulation) completely reshapes the business terrain. Grove's central argument was that traditional competitive analysis fails at precisely these moments, because the data is necessarily about the past while inflection points are about the future. "You need to be able to argue with the data," Grove wrote, "when your experience and judgment suggest the emergence of a force that may be too small to show up in the analysis but has the potential to grow so big as to change the rules of the game."

Grove learned this firsthand. In the mid-1980s, Intel was a memory chip company being systematically destroyed by Japanese manufacturers who had driven Intel's DRAM market share from 82.9 percent in 1974 to 1.3 percent in 1984. Profits collapsed from $198 million to less than $2 million in a single year. Intel's competitive analysis of the memory market was impeccable, and completely useless, because the strategic answer wasn't to compete better in memory. It was to leave memory entirely and bet the company on microprocessors. Grove's famous question to Gordon Moore ("If we got kicked out and the board brought in a new CEO, what do you think he would do?") led to a pivot that required firing 8,000 employees, spending $180 million on restructuring, and three years of painful transition. It also transformed Intel into the most valuable semiconductor company in the world.

The lesson isn't that competitive analysis is useless. The lesson is that the more rigorous your analysis of the current competitive terrain, the harder it is to see the terrain that's forming just outside your analytical frame.

Try This: The Debiased Competitive Analysis

Most competitive analyses are confirmation machines: they start with what you believe and end with data that supports it. The following protocol is designed to disrupt that cycle at each stage where bias typically enters.

Step 1: Kill Your Competitive Set. Before you analyze anyone, write down the three to five companies you consider your primary competitors. Now set that list aside. It's contaminated. It reflects your current framing of the market, which is precisely what needs to be tested. Instead, ask three questions: (a) If our company didn't exist, what would our customers use instead? Count not just direct substitutes but the non-obvious alternatives, including doing nothing. (b) What company outside our industry could enter our market in eighteen months if they chose to? (c) What is the job our customer is hiring us to do, and who else, in any industry, in any form, is competing for that job? The answers to these questions will generate a competitive set that looks different from the one you started with. That difference is the analytical gap your existing analysis was missing.

Step 2: Assign a Red Team. Borrow from military intelligence. A red team's job is to argue the opposing case, not because they believe it, but because the exercise forces information into the room that consensus would otherwise suppress. For competitive analysis, the red team's assignment is specific: take each competitor, including the non-obvious ones from Step 1, and build the strongest possible case that this competitor will dominate your market within three years. What would have to be true? What are they doing that you're underestimating? What structural advantage do they have that your analysis is dismissing? Research on devil's advocacy in managerial decision-making shows that structured dissent significantly improves decision quality. A study cited by McKinsey found that high-quality debate, where opposing viewpoints were genuinely argued rather than performatively acknowledged, led to decisions that were 2.3 times more likely to be successful.

Step 3: Invert the Halo. For every competitor you've classified as weak, write a one-page brief explaining why they might actually be stronger than you think. For every competitor you've classified as strong, write a one-page brief explaining what could cause them to collapse. The goal isn't accuracy, it's destabilization. The halo effect creates a false sense of certainty about competitor quality, and the only way to disrupt it is to force yourself to argue the opposite case with the same rigor you applied to the original assessment. If you can't make a credible argument that your "weak" competitor is dangerous, you haven't analyzed them, you've dismissed them. That's the Borders-analyzing-Barnes-&-Noble-while-Amazon-builds-the-Kindle problem, and it doesn't announce itself until it's too late.

Step 4: Run the Pre-Mortem on Your Framework. Gary Klein's pre-mortem asks teams to imagine a decision has failed. Adapt it for competitive analysis: "It's two years from now, and a competitor we never seriously evaluated has taken thirty percent of our market. Who was it, and why didn't our analysis catch them?" This question forces the team to think about the boundaries of their analytical frame rather than the data within it. Research by Mitchell, Russo, and Pennington found that prospective hindsight, imagining an outcome has already occurred, increases the ability to identify potential problems by roughly thirty percent. Applied to competitive analysis, the pre-mortem doesn't improve the analysis of existing competitors. It surfaces the competitors you forgot to analyze.

Step 5: Schedule the Destruction. Competitive analysis isn't an event. It's a practice. The biases described in this post don't get fixed once and stay fixed. Confirmation bias reasserts itself every time you collect new data. The halo effect reasserts itself every time a competitor reports earnings. The spotlight effect reasserts itself every time a competitor launches a feature. Schedule a quarterly review where the explicit agenda is: What has changed in our competitive frame? Not what has changed in our competitors' products or pricing; that's incremental. What has changed about who competes with us, on what dimensions, for which customers? That question is the one that catches strategic inflection points before they arrive.


BlackBerry's analysts were right about the iPhone's battery life. They were right about the keyboard. They were right about every single metric they chose to evaluate. And they lost everything: not because their data was bad, but because their frame was wrong, and every piece of accurate data they collected made them more confident in a picture that was about to become obsolete.

The danger of competitive analysis isn't that you'll get it wrong. It's that you'll get it right, on the dimensions that don't matter, about the competitors that aren't the real threat, using a framework that was built for the terrain that existed six months ago. Survivorship bias hides the companies that did everything right and still lost. Blue ocean strategy argues for making the competition irrelevant entirely. This post is about the step that comes before both: seeing the competition clearly enough to know which game you're actually playing.

Andy Grove spent three years and $180 million learning that the competitive analysis Intel had been running for a decade (thorough, data-driven, and focused on Japanese memory manufacturers) was a map of a country that no longer existed. The pivot to microprocessors wasn't the result of better competitive analysis. It was the result of admitting that the existing analysis, no matter how rigorous, was answering the wrong question.

Your competitive analysis is probably answering the wrong question too. Not because you're careless. Because you're careful, and care, without the right frame, is just a more sophisticated way of confirming what you already believe.


FAQ

What is competitive analysis and why does it often fail? Competitive analysis is the systematic evaluation of competitors' strengths, weaknesses, strategies, and market positions to inform your own strategic decisions. It often fails not because the analysis is poorly executed, but because the framing decisions that precede it (who counts as a competitor, which dimensions to evaluate, what data to collect) are contaminated by confirmation bias, the halo effect, and the spotlight effect. These biases cause teams to study the competitors they can already see, on the dimensions where they already win, and miss the threats that exist outside their analytical frame. BlackBerry's analysis of the iPhone was technically accurate on every metric it measured; it simply measured the wrong metrics.

How does confirmation bias affect competitive intelligence? Confirmation bias causes analysts to seek, interpret, and remember information that confirms existing beliefs about competitors. In practice, this means competitive analyses tend to evaluate competitors on dimensions where the analyzing company already excels, dismiss positive signals about rivals as anomalous, and overweight negative signals about rivals as representative. Research by Harvard Business School's Max Bazerman found that firms implementing debiasing strategies saw an average seven percent increase in return on assets, suggesting the scale of what unaddressed confirmation bias costs in strategic decision-making.

What is the halo effect in business and how does it distort competitor evaluation? The halo effect, as described by Phil Rosenzweig in his book of the same name, is the tendency to let a company's overall performance color the assessment of every individual attribute. When a competitor is performing well, analysts attribute that success to brilliant strategy, strong culture, and visionary leadership. When a competitor is struggling, the same attributes are reframed as weaknesses. This creates systematic overestimation of market leaders and systematic underestimation of underdogs, which is precisely the pattern that allowed Amazon, Netflix, and Android to be dismissed by incumbents whose competitive analyses were rigorous but halo-contaminated.

What is a strategic inflection point and how does it relate to competitive analysis? A strategic inflection point, as defined by Intel CEO Andy Grove in Only the Paranoid Survive, is a moment when a 10X change in competition, technology, customers, suppliers, complementors, or regulation completely transforms the playing field. Traditional competitive analysis fails at these moments because it relies on historical data about existing competitors, while inflection points introduce entirely new competitive dynamics. Grove's central insight is that leaders must learn to argue with their own data when experience suggests an emerging force that is too small to appear in current analysis but has the potential to rewrite the rules of the industry.

How can I build a competitive analysis that accounts for cognitive bias? Start by killing your existing competitive set and rebuilding it from scratch using three questions: What would customers use if we didn't exist? Who outside our industry could enter our market? What job are customers hiring us for, and who else competes for that job? Then assign a red team to build the strongest case for each competitor's dominance, invert the halo by arguing the opposite of your current assessment of each rival, run a pre-mortem on your analytical framework itself, and schedule quarterly reviews focused not on competitor actions but on whether your competitive frame still describes the actual terrain. The goal is not better analysis of the competitors you already see; it's surfacing the competitors you don't.

Works Cited

  • Gilovich, T., Medvec, V. H., & Savitsky, K. (2000). "The Spotlight Effect in Social Judgment: An Egocentric Bias in Estimates of the Salience of One's Own Actions and Appearance." Journal of Personality and Social Psychology, 78(2), 211–222. https://doi.org/10.1037/0022-3514.78.2.211
  • Rosenzweig, P. (2007). The Halo Effect: ... and the Eight Other Business Delusions That Deceive Managers. Free Press.
  • Grove, A. S. (1996). Only the Paranoid Survive: How to Exploit the Crisis Points That Challenge Every Company. Currency Doubleday.
  • Porter, M. E. (1979). "How Competitive Forces Shape Strategy." Harvard Business Review, 57(2), 137–145.
  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  • Bazerman, M. H., & Moore, D. A. (2013). Judgment in Managerial Decision Making (8th ed.). Wiley.
  • McNish, J., & Silcoff, S. (2015). Losing the Signal: The Untold Story Behind the Extraordinary Rise and Spectacular Fall of BlackBerry. Flatiron Books.
  • Lucas, H. C., & Goh, J. M. (2009). "Disruptive Technology: How Kodak Missed the Digital Photography Revolution." The Journal of Strategic Information Systems, 18(1), 46–55. https://doi.org/10.1016/j.jsis.2009.01.002
  • Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making, 2(1), 25–38. https://doi.org/10.1002/bdm.3960020103
  • Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review, September 2007. https://hbr.org/2007/09/performing-a-project-premortem
  • Morewedge, C. K., et al. (2015). "Debiasing Decisions: Improved Decision Making With a Single Training Intervention." Policy Insights from the Behavioral and Brain Sciences, 2(1), 129–140. https://doi.org/10.1177/2372732215600886

Reading won't build your business.

The strategies in this post work — but only if you use them. Inside The Launch Pad, you get the frameworks, the feedback, and the accountability to actually execute.

Build Your Exit