In 1981, a Kodak executive named Vince Barabba did something unusual for a company sitting on top of the world. He told the truth.
Barabba ran market intelligence for Eastman Kodak, and one of the company's largest retail photo finishers had come to him with a question: should we be worried about digital photography? The request triggered an extensive research effort. Barabba's team analyzed adoption curves, equipment costs, image quality trajectories, and component interoperability. They modeled the rate at which digital would improve and the rate at which consumer behavior would shift. The study was thorough, rigorous, and expensive.
The conclusions were clear. Digital photography had the potential to replace Kodak's entire film-based business. But the company had roughly ten years to prepare for the transition. The technology wasn't ready yet. The window was open. The threat was real, distant, and survivable if Kodak moved.
The study landed on the desks of Kodak's senior leadership. They read it. They understood it. And then they did almost nothing with it.
This wasn't ignorance. Kodak had already seen the technology firsthand. Six years earlier, in 1975, a young engineer named Steve Sasson had built the world's first digital camera in a Kodak lab. When Sasson demonstrated it to management, the response, as he later recounted to the New York Times, was: "That's cute, but don't tell anyone about it." In 1986, Kodak developed the first megapixel sensor. In 1989, when CEO Colby Chandler retired, the board chose Kay Whitmore over Phil Samper. Samper had championed digital. Whitmore promised to keep Kodak "closer to its core businesses in film and photographic chemicals." In 1996, Kodak poured $500 million into a hybrid film-digital product called Advantix Preview. It flopped. George Fisher, who took over as CEO in 1993, later acknowledged the core problem: Kodak "regarded digital photography as the enemy, an evil juggernaut that would kill the chemical-based film and paper business."
What makes Kodak's story relevant to every founder reading this comes down to a single uncomfortable truth. Kodak didn't fail at analysis. The Barabba study was a textbook strategic assessment. It identified the right strengths (dominant market position, massive distribution, deep expertise in imaging). It identified the right weakness (total dependency on chemical-based film). It identified the right opportunity (a decade-long window to lead the digital transition). And it identified the right threat (the eventual obsolescence of its core product). By any reasonable standard, Kodak ran a SWOT analysis and got every quadrant correct.
Then the biases ate it alive.
The SWOT analysis is the most widely used strategic planning framework in business. Search volume alone tells the story: over 110,000 people search for it every month. It is taught in every MBA program, used in every corporate strategy session, and recommended in every startup playbook. And in its standard form, it is almost perfectly designed to confirm whatever the people in the room already believe. The problem isn't the framework. The problem is the brain running the framework. Each of the four quadrants is vulnerable to a specific set of cognitive biases that distort the inputs before the analysis even begins. Understanding which biases corrupt which quadrants is the difference between a SWOT that reveals the truth and a SWOT that buries it under a layer of comfortable agreement.
Where Did SWOT Come From?
The origin story matters because it reveals something the framework's users have mostly forgotten.
Between 1960 and 1970, a team of researchers at the Stanford Research Institute set out to answer a deceptively simple question: why does corporate planning fail? The project was funded by Fortune 500 companies that were spending heavily on strategic planning and getting poor results. Robert Franklin Stewart directed the research group. The team included Marion Dosher, Dr. Otis Benepe, Birger Lie, and a management consultant named Albert Humphrey.
Over the course of the decade, the team interviewed representatives from over 1,100 organizations. More than 5,000 executives completed a 250-item questionnaire. The findings pointed to a consistent problem: a 35 percent gap between organizational potential and actual execution. Companies weren't failing because they lacked strategy. They were failing because the process of creating strategy was corrupted by the same dynamics it was supposed to illuminate.
Humphrey's team developed a framework to address this gap. They originally called it SOFT analysis: Satisfactory (what's good now), Opportunity (what could be good later), Fault (what's bad now), and Threat (what could be bad later). In 1964, Humphrey presented SOFT analysis at a seminar on long-range planning at the Dolder Grand hotel in Zurich. Two attendees, Urick and Orr, proposed changing "Fault" to "Weakness." The acronym became SWOT, and the framework spread.
What got lost in the spread was the context. Humphrey's team hadn't built SWOT as a standalone exercise. It was embedded in a larger system called the Team Action Model, which included participative planning, management commitment protocols, and what amounted to structured disagreement. The framework was designed to be used within a process that actively counteracted groupthink. By the time SWOT reached the average conference room, the process had been stripped away. What remained was a two-by-two grid and a set of prompts that, without the surrounding architecture, function as an invitation for cognitive biases to organize themselves neatly under professional-looking headers.
The Four Quadrants, The Four Corruptions
Each quadrant of a SWOT analysis is vulnerable to a different family of biases. The corruption isn't random. It's predictable, and it follows the same pattern in boardrooms, startup war rooms, and MBA classrooms.
Strengths: The Halo Effect and the Endowment Effect
When a team lists its strengths, two biases immediately go to work. The halo effect, first identified by psychologist Edward Thorndike in 1920, causes a single positive attribute to color the perception of everything else. A company with strong brand recognition will list "great product" as a strength even when the product is mediocre, because the glow of the brand extends to adjacent categories. A startup with a charismatic founder will list "strong leadership" even when the founder has never managed more than three people. The halo doesn't require evidence. It just needs one bright spot to illuminate the rest.
The endowment effect compounds this. Richard Thaler, Daniel Kahneman, and Jack Knetsch demonstrated in a landmark 1990 experiment at Cornell that people value things they own roughly twice as much as things they don't. Students given coffee mugs demanded an average of $5.25 to sell them. Students without mugs were willing to pay only $2.25 to $2.75 to buy them. Same mug. Double the price, simply because of ownership.
In a SWOT session, the endowment effect means that every internal capability, every existing product, every current relationship gets inflated simply because it's yours. The napkin version: your strengths column isn't a list of what you're good at. It's a list of what you own, wearing a costume.
Weaknesses: Ego Protection and Blind Spots
The weaknesses quadrant should be where the hardest truths land. Instead, it is where organizational ego performs its most sophisticated defense. The biases here are subtler because they operate through omission rather than distortion. You don't see teams writing false weaknesses. You see them writing small, safe, already-known weaknesses that function as a kind of strategic throat-clearing: "We need to improve our onboarding." "Our documentation could be stronger." "We're understaffed in engineering."
These are weaknesses in the way that "I work too hard" is a weakness in a job interview. They are true enough to feel honest and small enough to feel manageable. The real weaknesses, the structural ones that threaten the business, are precisely the ones that ego-protective cognition is designed to keep out of conscious awareness. Psychologist Daniel Goleman's work on self-awareness has shown that blind spots are not areas of ignorance. They are areas of active avoidance. The brain doesn't fail to see the weakness. It succeeds in not looking at it.
At Nokia, this mechanism operated at industrial scale. Researchers Timo Vuori of Aalto University and Quy Huy of INSEAD studied Nokia's collapse from 2005 to 2010 and found that middle managers, afraid of punitive reactions from senior leadership, systematically suppressed negative information. They gave optimistic reports. They compared Nokia's future products to competitors' past products to make the numbers look favorable. Top management, deprived of accurate data, developed what Vuori and Huy described as "an overly optimistic perception of their organization's technological capabilities." The weakness existed. The people closest to it knew about it. And the organizational dynamics of a SWOT-style assessment ensured it never reached the quadrant where it belonged.
Opportunities: Optimism Bias and the Availability Heuristic
The opportunities quadrant is where the optimism bias does its best work. Tali Sharot's research at University College London has shown that the brain updates beliefs nearly 50 percent more efficiently for good news than for bad news. When your team brainstorms opportunities, every suggestion arrives pre-amplified. The brain is literally better at processing the upside than the downside, which means the opportunities list will always be longer, more detailed, and more enthusiastic than the threats list, regardless of the actual environment.
The availability heuristic adds fuel. Whatever opportunity was most recently discussed in a podcast, featured in a case study, or demonstrated by a competitor will feel like the biggest opportunity, because it's the easiest to recall. This is how every SWOT session in 2023 listed "AI integration" as an opportunity whether the company was a SaaS platform or a dog-walking service. Not because AI was universally relevant. Because it was universally available to memory.
Threats: Normalcy Bias and the Ostrich Effect
The threats quadrant is where the most dangerous bias lives, because it operates by shrinking the very category it's supposed to fill. Normalcy bias is the tendency to believe that because something has never happened before, it won't happen now. It is the reason Kodak's leadership could read Barabba's study, understand its conclusions, and still believe the film business would endure. It is the reason Blockbuster CEO John Antioco could sit across from Netflix founders Reed Hastings and Marc Randolph in September 2000, listen to their pitch, and struggle not to laugh. Netflix had $36 million in revenue. Blockbuster had $5 billion and 4,300 stores. The normalcy bias said: the present is the future, just more of it.
The ostrich effect, documented by researchers Niklas Karlsson, George Loewenstein, and Duane Seppi, describes the tendency to avoid monitoring negative information. In their research, investors checked their portfolios significantly less frequently during market downturns. They didn't want to see the bad news. In a SWOT session, the ostrich effect means that the people most qualified to identify threats are often the least willing to look for them, because looking for threats means imagining your own failure, and the brain would very much prefer not to do that.
How Nokia's "SWOT" Missed the Signal That Was Already Inside the Building
The Nokia story deserves its own section because it demonstrates every quadrant failure operating simultaneously.
By 2005, Nokia's top management knew Apple was developing a smartphone. They had assembled intelligence on iOS. They knew Google was building Android. CEO Olli-Pekka Kallasvuo stated publicly in 2008 that Nokia aimed to "act less like a traditional manufacturer, and more like an Internet company," and named Apple, Google, and Microsoft as Nokia's new competitors. The threat was not hidden. It was stated in press releases.
But Vuori and Huy's research, published in Administrative Science Quarterly, revealed the internal reality. Middle managers assessed competitors through a biased lens, consistently comparing Nokia's future products against competitors' past products. This made Nokia always appear to be ahead. The halo effect of Nokia's market dominance (40 percent global market share in 2007) colored the strengths assessment. The endowment effect inflated the value of Symbian, Nokia's operating system, because it was theirs. Ego protection suppressed reports about Symbian's technical limitations. The optimism bias amplified the opportunity to build touch-screen phones quickly, even when engineers warned the timelines were unrealistic. And normalcy bias neutralized the threat assessment. Apple and Google, the reasoning went, were "inexperienced in the phone business." The past would continue.
Nokia lost the smartphone race not because its strategic assessment was missing. It lost because every quadrant of that assessment was being filtered through biases that the assessment process itself was designed to surface but had no mechanism to counteract.
The second napkin version: a SWOT analysis without debiasing is a group of people writing down their assumptions and calling it strategy.
Try This: The Debiased SWOT Protocol
The standard SWOT is broken not because the categories are wrong but because the process invites the biases rather than challenging them. Here is a version that fights back.
Start with threats, not strengths. Every conventional SWOT begins with strengths, which primes the room with optimism and makes the subsequent quadrants feel less urgent. Flip the order. Begin with threats. This exploits a basic order effect: the first topic discussed receives the most cognitive energy and the least fatigue-driven shortcutting. Threats deserve that energy.
Use anonymous written input before any group discussion. Research on groupthink shows that when strategy sessions use anonymous input, the number of unique opinions expressed roughly doubles compared to open discussion. Give every participant ten minutes to fill out all four quadrants independently, in writing, before a single word is spoken aloud. Collect the responses. Read them without attribution. This neutralizes the hierarchy effects that silenced Nokia's middle managers.
Run a pre-mortem on each quadrant. Gary Klein's pre-mortem technique, which Deborah Mitchell, Jay Russo, and Nancy Pennington's research showed increases the ability to identify reasons for failure by 30 percent, can be adapted quadrant by quadrant. For strengths: "Imagine a competitor just proved this strength is actually a weakness. How did they do it?" For weaknesses: "Imagine this weakness just killed the company. What happened?" For opportunities: "Imagine we pursued this opportunity and it destroyed value. Why?" For threats: "Imagine we ignored this threat and it was the one that ended us. What did we miss?" The pre-mortem works because it shifts the brain from evaluating possibility (where optimism bias dominates) to explaining an outcome that has already occurred (where the bias is weakest).
Assign a Red Team. The concept originates from a practice Pope Sixtus V formalized in 1587, when the Vatican appointed an advocatus diaboli to argue against candidates for sainthood. The Prussian military adopted the principle for war games, using red pieces for the opposing force. In a debiased SWOT, the Red Team's job is to attack every item in every quadrant. Strengths are challenged: "Is this actually a strength, or is the halo effect inflating a single positive attribute?" Weaknesses are deepened: "Is this the real weakness, or is this the safe weakness you listed to avoid naming the real one?" The Red Team doesn't need to be right. It needs to create the friction that the standard process eliminates.
Require evidence thresholds. No item enters any quadrant without a supporting data point that is independent of the team's own experience. "We have great customer service" requires a specific NPS score, a retention metric, or third-party review data. "AI is an opportunity" requires a specific use case with a projected ROI. This single rule eliminates roughly half the items in a typical SWOT and replaces them with entries that have survived at least one encounter with reality.
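If your team collects SWOT input through a shared doc or a lightweight script, most of these rules can be enforced before anyone speaks. Here is a minimal sketch in Python of what that intake might look like; the entry fields, quadrant ordering, prompts, and function names are assumptions of this illustration, not part of Humphrey's framework or any standard tool.

```python
# Minimal sketch of the debiased SWOT intake rules described above.
# Everything here (SwotEntry, validate_entry, PREMORTEM_PROMPTS) is
# illustrative naming, not a published implementation.
from dataclasses import dataclass
from typing import List

# Threats come first, by design: the first quadrant discussed gets the most energy.
QUADRANTS = ["threats", "weaknesses", "opportunities", "strengths"]

PREMORTEM_PROMPTS = {
    "strengths": "Imagine a competitor just proved this strength is a weakness. How?",
    "weaknesses": "Imagine this weakness just killed the company. What happened?",
    "opportunities": "Imagine pursuing this opportunity destroyed value. Why?",
    "threats": "Imagine we ignored this threat and it ended us. What did we miss?",
}

@dataclass
class SwotEntry:
    quadrant: str          # one of QUADRANTS
    claim: str             # the item itself, submitted anonymously
    evidence: str          # data point independent of the team's own experience
    premortem_answer: str  # response to the quadrant's pre-mortem prompt

def validate_entry(entry: SwotEntry) -> List[str]:
    """Return a list of objections; an empty list means the entry may enter the grid."""
    objections = []
    if entry.quadrant not in QUADRANTS:
        objections.append(f"Unknown quadrant: {entry.quadrant}")
    if not entry.evidence.strip():
        objections.append("No independent evidence: rejected under the evidence threshold.")
    if not entry.premortem_answer.strip():
        objections.append(f"Pre-mortem unanswered: {PREMORTEM_PROMPTS.get(entry.quadrant, '')}")
    return objections

def build_grid(entries: List[SwotEntry]) -> dict:
    """Group surviving entries by quadrant, threats first, with no attribution kept."""
    grid = {q: [] for q in QUADRANTS}
    for entry in entries:
        if not validate_entry(entry):
            grid[entry.quadrant].append({"claim": entry.claim, "evidence": entry.evidence})
    return grid

if __name__ == "__main__":
    entries = [
        SwotEntry("strengths", "We have great customer service",
                  evidence="", premortem_answer="Churn would spike before we noticed."),
        SwotEntry("threats", "A rival ships a free tier next quarter",
                  evidence="Competitor's public roadmap announcement",
                  premortem_answer="We assumed customers were price-loyal."),
    ]
    for e in entries:
        for objection in validate_entry(e):
            print(f"[{e.quadrant}] {e.claim!r}: {objection}")
    print(build_grid(entries))
```

The tooling doesn't matter. What matters is that "no evidence, no entry" and "answer the pre-mortem before you submit" become checks that run before the group discussion starts, which is exactly when the biases do their damage.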
Kodak's Vince Barabba got the analysis right. Every quadrant was accurate. The strengths were real, the weaknesses were real, the opportunities were real, and the threats were existential. Three decades later, the company filed for bankruptcy. Not because the framework failed, but because the framework was never designed to overcome the thing that actually kills strategies: the predictable, measurable, neurologically grounded biases that corrupt every input before the grid is even drawn.
The four-quadrant grid is fine. The categories are useful. But a SWOT analysis without debiasing is a mirror that only shows you what you want to see. When the process lets the brain's default settings run unchecked, the confirmation bias that shapes which evidence reaches the room goes unchallenged, the first principles thinking required to question the assumptions underneath each quadrant never happens, and the competitive advantage that exists only if you see your position clearly enough to defend it quietly erodes.
Albert Humphrey's original framework included the architecture to fight back. Somewhere between Zurich in 1964 and your last strategy offsite, that architecture got lost. Now you know where to find it.
FAQ
What is a SWOT analysis and how do you use it?
A SWOT analysis is a strategic planning framework that organizes information into four categories: Strengths (internal advantages), Weaknesses (internal disadvantages), Opportunities (external favorable conditions), and Threats (external unfavorable conditions). It was developed from research at the Stanford Research Institute between 1960 and 1970, led by Albert Humphrey and Robert Stewart. To use it effectively, teams should gather data on each quadrant, but the standard process is vulnerable to cognitive biases in every category. A debiased version that includes anonymous input, pre-mortems, and evidence thresholds produces significantly more accurate results than the conventional brainstorming approach.
What cognitive biases affect SWOT analysis?
Each quadrant is vulnerable to specific biases. Strengths are inflated by the halo effect (one positive trait colors everything) and the endowment effect (people overvalue what they own by roughly 2x, per Kahneman, Knetsch, and Thaler's 1990 research). Weaknesses are minimized by ego-protective cognition and organizational blind spots. Opportunities are amplified by the optimism bias (the brain processes good news nearly 50 percent more efficiently than bad news) and the availability heuristic (recent or vivid opportunities feel larger than they are). Threats are diminished by normalcy bias (the assumption that the present will continue) and the ostrich effect (the tendency to avoid monitoring negative information).
How do you debias a SWOT analysis?
Five structural changes improve accuracy. First, start with threats instead of strengths to allocate maximum cognitive energy to the most dangerous quadrant. Second, collect anonymous written input before group discussion, which research shows roughly doubles the number of unique perspectives expressed. Third, run Gary Klein's pre-mortem technique on each quadrant by asking teams to imagine each item has already caused a failure and explain why. Fourth, assign a Red Team to challenge every entry. Fifth, require independent evidence for every item, eliminating entries that cannot survive contact with data outside the team's own experience.
What is the origin of SWOT analysis?
SWOT analysis originated from a research project at the Stanford Research Institute from 1960 to 1970, funded by Fortune 500 companies seeking to understand why corporate planning was failing. The team, including Albert Humphrey, Robert Stewart, Marion Dosher, Dr. Otis Benepe, and Birger Lie, interviewed over 1,100 organizations and surveyed more than 5,000 executives. They originally called the framework SOFT analysis (Satisfactory, Opportunity, Fault, Threat). In 1964, at a seminar at the Dolder Grand hotel in Zurich, attendees Urick and Orr suggested changing "Fault" to "Weakness," creating the SWOT acronym. Heinz Weihrich later developed the matrix format that became standard by the 1980s.
Why did Kodak fail despite identifying the digital threat?
Kodak's 1981 market intelligence study, led by Vince Barabba, accurately identified digital photography as an existential threat and gave the company a ten-year window to prepare. But cognitive biases at the leadership level prevented action. The endowment effect caused executives to overvalue their chemical film business. Normalcy bias led them to believe the film market would persist. Ego protection suppressed internal advocacy for digital. When CEO Colby Chandler retired in 1989, the board chose the candidate who promised to stay close to film over the one championing digital transformation. Kodak filed for bankruptcy in 2012, more than thirty years after it had the right analysis and the right timeline.
Works Cited
- Barabba, V. (2011). The Decision Loom: A Design for Interactive Decision-Making in Organizations. Triarchy Press.
- Humphrey, A. S. (2005). "SWOT Analysis for Management Consulting." SRI Alumni Association Newsletter, December 2005.
- Vuori, T. O., & Huy, Q. N. (2016). "Distributed Attention and Shared Emotions in the Innovation Process: How Nokia Lost the Smartphone Battle." Administrative Science Quarterly, 61(1), 9-51. https://doi.org/10.1177/0001839215606951
- Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1990). "Experimental Tests of the Endowment Effect and the Coase Theorem." Journal of Political Economy, 98(6), 1325-1348. https://doi.org/10.1086/261737
- Thorndike, E. L. (1920). "A Constant Error in Psychological Ratings." Journal of Applied Psychology, 4(1), 25-29. https://doi.org/10.1037/h0071663
- Sharot, T., Korn, C. W., & Dolan, R. J. (2011). "How Unrealistic Optimism Is Maintained in the Face of Reality." Nature Neuroscience, 14(11), 1475-1479. https://doi.org/10.1038/nn.2949
- Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review, September 2007. https://hbr.org/2007/09/performing-a-project-premortem
- Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making, 2(1), 25-38. https://doi.org/10.1002/bdm.3960020103
- Karlsson, N., Loewenstein, G., & Seppi, D. (2009). "The Ostrich Effect: Selective Attention to Information." Journal of Risk and Uncertainty, 38(2), 95-115. https://doi.org/10.1007/s11166-009-9060-6
- Mui, C. (2012). "How Kodak Failed." Forbes. https://www.forbes.com/sites/chunkamui/2012/01/18/how-kodak-failed/