In 2008, David Gilboa left his Prada glasses on a plane while backpacking through Thailand. He was between jobs, about to start his MBA at Wharton, and the replacement quote came back at $700. More than the iPhone he'd just bought. For a product that was, materially speaking, a few grams of acetate and two polycarbonate lenses.
He spent his entire first semester at Wharton squinting at whiteboards and complaining to anyone who would listen. One of the people who listened was Neil Blumenthal. Before Wharton, Blumenthal had spent five years as the director of VisionSpring, a nonprofit that distributed affordable eyeglasses in developing countries. He knew something that most people outside the manufacturing supply chain didn't: a pair of glasses that retailed for $300 in the United States cost between $5 and $15 to produce. Not a simplified version. The same materials, often from the same factories.
The gap between $15 and $300 had nothing to do with quality. It came down to a single company. Luxottica controlled the brands (Ray-Ban, Oakley, Persol, Oliver Peoples), the retail chains (LensCrafters, Sunglass Hut, Pearle Vision, Target Optical), and even the vision insurance plan (EyeMed) that decided which frames your benefits covered. They owned the product, the store, and the gatekeeper. First principles thinking is the practice of decomposing a problem to its most fundamental truths and reasoning upward from there, rather than accepting the existing structure as a starting point. Blumenthal and Gilboa, along with co-founders Andrew Hunt and Jeffrey Raider, didn't ask "how do we compete with Luxottica?" That would have been reasoning by analogy, copying the existing industry's structure and trying to do it slightly better. They asked a different question: what does a pair of glasses actually need to cost? That willingness to challenge assumptions about what already exists is the same instinct that separates second-movers who see what incumbents miss from first-movers who accept the industry's framing. The answer was $95, including prescription lenses. They called the company Warby Parker, launched in February 2010, and hit their entire first-year sales target in three weeks.
Why Does Your Brain Default to Copying What Already Exists?
You do it constantly and you almost never notice. You're building a pricing page, so you look at what competitors charge. You're designing an onboarding flow, so you model it on apps you've used. Launch strategy? You follow the playbook someone else published. This is reasoning by analogy, and your brain loves it for a simple reason: it's cheap.
The cognitive scientist Dedre Gentner formalized this in 1983 with what she called structure-mapping theory. When the brain encounters an unfamiliar problem, it searches for a familiar domain that shares a similar relational structure and maps the solution from one to the other. You don't need to understand why the solution works. You just need to recognize that the situations look alike. Gentner showed that the brain preferentially maps relational structure over surface features, which makes analogy powerful when the deep structure genuinely matches and dangerous when it only looks like it does.
Daniel Kahneman's framework explains the economics. Analogical reasoning runs on System 1, the fast, automatic, low-effort processing mode. The brain generates candidate analogies without conscious effort, the way a search engine returns results before you finish typing. This is the same default that makes brainstorming sessions converge on familiar ideas rather than producing genuine novelty. First principles reasoning demands System 2: slow, deliberate, metabolically expensive. You have to suppress the default analogical response, hold multiple abstract elements in working memory simultaneously, and reason through terrain where no prior solution provides scaffolding.
The brain consumes roughly 20 percent of the body's total metabolic energy despite being about 2 percent of its mass. System 2 thinking burns through that budget faster. Given a choice between "find something that looks similar and copy it" and "decompose this into its fundamental components and build a solution from scratch," the brain will choose the copy every time unless you deliberately override it.
Most businesses look like slightly modified versions of other businesses. That's not a failure of imagination. The brain's energy conservation system is doing exactly what it evolved to do: find the cheapest adequate solution and move on.
The Neural Signature of a Breakthrough
In 2004, Mark Jung-Beeman and John Kounios ran one of the most elegant experiments in the neuroscience of thinking. They gave participants compound remote associate problems (given three words like "pine," "crab," and "sauce," find one word that connects all three; answer: "apple") and tracked their brains using fMRI in one experiment and EEG in another. After each solution, participants reported whether they experienced a sudden "Aha!" moment or arrived at the answer through deliberate, step-by-step analysis.
The insight solutions showed a distinctive neural signature. A burst of high-frequency gamma-band activity, roughly 40 Hz, erupted over the right anterior superior temporal gyrus about 0.3 seconds before participants consciously experienced the solution. The answer arrived in the brain before the person knew they had it. And it came from a specific region, the right aSTG, that specializes in making connections between distantly related concepts. The same area activates when you understand a joke or grasp a metaphor.
In a follow-up study, Kounios and Beeman discovered something even more striking about what happened before the insight. A preparatory brain state predicted whether a given problem would be solved with insight or with analysis. In the moments before insight solutions, participants showed reduced activity in the visual cortex. They were literally looking inward, suppressing external input so the brain could search its own associative networks for non-obvious connections. The insight wasn't a random lightning bolt. It was a prepared mind reaching a threshold.
Adam Green and colleagues found the complementary piece. Using fMRI, they showed that the left frontopolar cortex works harder as analogies become more distant, spanning larger semantic gaps between domains. Activity in this region tracked parametrically with semantic distance: the further apart the concepts being connected, the more the frontopolar cortex had to work. Creative leaps across domains aren't free. They have a measurable neural cost, and that cost is paid by some of the most metabolically demanding tissue in the brain.
First principles thinking isn't a single cognitive operation. It's a sequence: suppress the brain's default pattern-completion machinery, hold the decomposed elements in working memory, and let the associative networks search for connections that the conventional framing would have blocked. The reason it feels hard is that it is hard. You're asking the most expensive neural hardware you own to override the cheapest.
The Thumbtack Box That Nobody Could See
In 1945, the psychologist Karl Duncker published an experiment that still appears in every cognitive psychology textbook. He gave participants a candle, a box of thumbtacks, and a book of matches, and asked them to attach the candle to the wall so it could burn without dripping wax on the table.
Most people tried to tack the candle directly to the wall. Some tried to melt it onto the wall with a match. Almost nobody saw the solution: empty the thumbtack box, tack the box to the wall as a shelf, and place the candle inside it.
Duncker called this functional fixedness. The thumbtack box was presented as a container for tacks, so participants could only see it as a container for tacks. Its conventional function blocked them from reconceiving it as raw material for a different purpose. When Duncker ran the same experiment with the tacks sitting outside the box (the box presented empty), participants were twice as likely to solve it. Removing the object's functional context freed participants to see it as what it actually was: a small, flat, tackable platform.
Sam Glucksberg, a Princeton psychologist, ran a version of the experiment in 1962 with a twist: he offered some participants cash rewards for solving it faster. The incentivized group performed worse, not better. Money narrowed their attention, reinforcing the fixedness rather than breaking it. Daniel Pink cited this result prominently in Drive. It became a staple of creativity research.
Then it failed to replicate.
In 2013, Joachim Ramm, Sigve Tjotta, and Gaute Torsvik ran Glucksberg's incentive experiment again and found that monetary rewards left performance unaltered. No improvement, no harm. The dramatic finding that money kills creativity on this specific task didn't hold up. The functional fixedness itself is robust. The claim about incentives making it worse is not.
The developmental research adds an unexpected angle. Tim German and Merideth Defeyter published a study in Psychonomic Bulletin & Review in 2000 showing that five-year-olds are immune to functional fixedness. Show a five-year-old a box being used as a container, and they'll repurpose it without hesitation. Six- and seven-year-olds start showing the effect. Functional fixedness is not a flaw in the hardware. It's a learned optimization. The brain learns that things have canonical functions because, most of the time, respecting those functions is efficient. The cost is that it becomes progressively harder to see objects, and business models, and pricing structures, and industry norms, as decomposable raw material.
Blumenthal saw a thumbtack box. He'd spent five years in developing countries watching the same lenses and the same acetate get assembled into glasses that cost a few dollars. When he arrived at Wharton and discovered those same materials were selling for $300 in Manhattan, he wasn't seeing a luxury product. He was seeing a markup with a brand name on it. Everyone else in the industry saw "eyewear," trapped by the same curse of knowledge that makes creators overvalue their own perspective. He saw the raw inputs.
When Does First Principles Thinking Make Things Worse?
The survivorship problem is obvious once you see it. We tell stories about Warby Parker and other breakout companies because they succeeded. We don't hear about the thousands of entrepreneurs who also reasoned from first principles, built something novel, and failed because their foundational assumptions were incomplete, or the market didn't care about a theoretically optimal solution, or execution was the bottleneck rather than conceptual insight.
A case study from Commoncog illustrates the failure mode precisely. A company with an engineering office in Vietnam analyzed Singapore's point-of-sale systems market from first principles and concluded there was no money in it: low margins, high competition, commoditized hardware. Their reasoning was internally flawless. But they had missed a single fact: Singaporean government productivity grants were subsidizing adoption for companies on an approved list, making the market highly profitable for anyone who knew about the subsidy. Their first-principles analysis was worse than analogical reasoning would have been, because analogical reasoning (looking at what competitors were doing and asking "why are they all doing this?") would have surfaced the subsidy immediately.
First principles reasoning is only as good as the principles you start from. Miss one critical input and the entire chain of deduction produces a confident, well-reasoned, wrong conclusion.
Gary Klein, a research psychologist who spent the late 1980s studying firefighters and military personnel for the U.S. Army Research Institute, found something that complicates the entire first-principles narrative, and connects directly to the OODA loop and satisficing frameworks used to break analysis paralysis. Expert decision-makers in time-pressured environments don't decompose problems into fundamental truths. They recognize situation types based on pattern matching with prior experience, mentally simulate a course of action, and execute the first adequate option they find. Klein's recognition-primed decision model showed that experienced fire commanders make life-or-death calls in seconds by recognizing that this fire behaves like the one on Oak Street three years ago. They don't decompose "what is combustion?" They act on accumulated pattern recognition, and in domains with reliable, repeating feedback, that pattern recognition is remarkably accurate.
Philip Tetlock's nearly twenty-year forecasting study, tracking 284 experts making 28,000 predictions, identified a related failure pattern. Tetlock borrowed Isaiah Berlin's distinction between hedgehogs (who organize the world through one big idea) and foxes (who draw from many frameworks and remain comfortable with ambiguity). Foxes substantially outperformed hedgehogs, and the gap was widest on long-term predictions inside the hedgehogs' own areas of expertise. The connection to first principles thinking is direct: a first-principles thinker who falls in love with the elegance of their own decomposition, who stops testing their conclusions against evidence, falls into the same confirmation bias that blinded Kodak and Blockbuster. They become a hedgehog. Confident. Internally consistent. And no more accurate than a dart-throwing chimpanzee.
The honest framework: first principles thinking works best in domains that are systematically inefficient, where incumbents have stopped questioning their own assumptions, and where the thinker has access to the actual foundational inputs. It fails when the domain has reliable patterns that experience can decode, when critical information is hidden, or when the thinker mistakes their decomposition for reality and stops checking.
Blumenthal had an unfair advantage that most first-principles stories conveniently omit. He didn't just theorize about manufacturing costs from a whiteboard. He had five years of hands-on experience watching glasses get made for dollars in factories overseas. His first principles weren't abstract. They were empirical. That distinction matters more than any framework.
Try This: The Assumption Decomposition
Most businesses are built on inherited assumptions that nobody examines because nobody remembers choosing them. This exercise takes thirty minutes and consistently surfaces at least one assumption that is costing you money or opportunity.
- Pick one element of your business model and write down why it works the way it does. Your pricing structure. Your sales process. Your delivery method. Your hiring criteria. Write out the reasoning, the full chain from "we do it this way because..." Keep going until you hit the bottom. You'll know you've reached a genuine first principle when the answer is a physical, economic, or mathematical constraint rather than "because that's how the industry does it."
- Identify every "because that's how it's done" in your chain. These are the analogical links, the places where you copied a conclusion from your industry, your competitor, or your last company without examining the foundation underneath it. Circle each one. They are assumptions, not truths.
- For each circled assumption, ask: what would I do if this weren't true? If customers didn't need a free trial. If delivery didn't require physical inventory. If your sales cycle didn't require three meetings. You're not brainstorming fantasy scenarios. You're testing whether the assumption reflects a genuine constraint or an inherited habit. The Warby Parker question wasn't "how do we compete with Luxottica?" It was "what does a pair of glasses actually need to cost?"
- Run the Klein check. For each assumption you're considering overturning, ask whether you're operating in a domain with reliable, repeating patterns (where expert intuition and analogy are likely trustworthy) or a domain that is novel, rapidly changing, or systematically inefficient (where first principles decomposition is more likely to surface something the incumbents missed). Firefighting is a good domain for pattern recognition. An industry where one company controls the product, the retail, and the insurance approval is a good domain for first principles.
- Test one assumption this week. Don't overhaul your business model on a whiteboard exercise. Pick the assumption that would be cheapest and fastest to test if it turned out to be wrong, and design a small experiment. The test isn't "was my first-principles analysis elegant?" The test is "did customers, revenue, or operations actually change when I removed this assumption?"
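The steps above can be compressed into a toy decision rule: an assumption is a candidate for a cheap experiment when it bottoms out in "that's how it's done" rather than a genuine constraint, and the Klein check says the domain's patterns aren't trustworthy. This is an illustrative sketch only; every name in it (`Assumption`, `worth_testing`) is hypothetical, not a tool from this chapter.

```python
from dataclasses import dataclass

# A genuine first principle bottoms out in one of these constraint kinds.
CONSTRAINT_KINDS = {"physical", "economic", "mathematical"}

@dataclass
class Assumption:
    claim: str               # e.g. "glasses must retail near $300"
    basis: str               # the reasoning chain behind the claim
    kind: str                # a member of CONSTRAINT_KINDS, or "inherited"
    reliable_patterns: bool  # the Klein check: does the domain repeat reliably?

def worth_testing(a: Assumption) -> bool:
    """Flag assumptions that are inherited rather than genuine constraints,
    in domains where pattern recognition is least trustworthy (novel,
    rapidly changing, or systematically inefficient)."""
    return a.kind not in CONSTRAINT_KINDS and not a.reliable_patterns

pricing = Assumption(
    claim="glasses must retail near $300",
    basis="because that's how the industry does it",
    kind="inherited",
    reliable_patterns=False,  # one firm controls product, retail, insurance
)
optics = Assumption(
    claim="lenses must refract light to the prescription",
    basis="physics of vision correction",
    kind="physical",
    reliable_patterns=True,
)

print(worth_testing(pricing))  # True: cheap experiment candidate
print(worth_testing(optics))   # False: a genuine constraint, leave it alone
```

The point of the rule is the asymmetry: only assumptions that fail both checks earn an experiment, which keeps the exercise from turning into an excuse to overhaul everything at once.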
Warby Parker's founders told a Wharton pricing professor about their plan to sell glasses online for $95. The professor told them it was impossible, that it would never work. The professor was reasoning by analogy. Glasses cost $300 because glasses cost $300. The retail markup, the brand licensing, the vertically integrated monopoly that controlled every link from factory to face: these weren't features of the product. They were features of the industry's structure. Blumenthal knew the actual cost of the product because he'd held it in his hands in a factory overseas. He could see the raw inputs because he'd already stripped away the functional fixedness that made everyone else see "eyewear" instead of "acetate and polycarbonate."
Chapter 1 of Wired explains the prediction engine that makes this kind of blindness possible: the system that builds models of the world, generates expectations before experience arrives, and then treats those expectations as reality until something forces an update. That chapter starts with a patient who can't recognize his own mother's face, and it reveals something about your brain that will change how you think about every assumption you've never thought to question.
FAQ
What is first principles thinking? First principles thinking is a reasoning method that decomposes a problem into its most fundamental truths, things that cannot be reduced further, and builds solutions upward from those truths rather than reasoning by analogy with existing solutions. The concept traces to Aristotle's Metaphysics, where he defined a first principle as "the first basis from which a thing is known." In practice, it means questioning inherited assumptions until you reach physical, economic, or mathematical constraints, then reasoning from those constraints rather than from industry convention.
Why is first principles thinking so hard? Analogical reasoning runs on the brain's fast, automatic System 1 processing, which requires minimal cognitive effort. First principles thinking requires System 2: slow, deliberate, metabolically expensive processing that demands sustained working memory, suppression of default pattern-completion, and abstract reasoning in the anterior prefrontal cortex. Neuroscience research shows that creative insights involve gamma-band activity in the right anterior superior temporal gyrus and increased frontopolar cortex activation that scales with the semantic distance being bridged. The brain defaults to analogy because it is dramatically cheaper.
When should you NOT use first principles thinking? In domains with reliable, repeating patterns and rapid feedback, expert pattern recognition outperforms first-principles decomposition. Gary Klein's research on firefighters showed that experienced commanders make accurate life-or-death decisions in seconds through recognition-primed decision-making, not by decomposing problems from scratch. Philip Tetlock's nearly 20-year study of 284 experts found that thinkers who organize the world through one big framework (hedgehogs) make worse predictions than those who draw from many frameworks (foxes). First principles thinking is most valuable in novel domains, systematically inefficient industries, or situations where existing analogies are misleading.
What is functional fixedness and how does it relate to first principles? Functional fixedness is a cognitive bias discovered by Karl Duncker in 1945 where people can only see an object in terms of its conventional function. In his classic candle problem, participants failed to see a thumbtack box as a potential shelf because it was presented as a container. Research by German and Defeyter (2000) showed that five-year-olds are immune to this effect, suggesting functional fixedness is a learned optimization, not a hardware limitation. First principles thinking is the deliberate override of functional fixedness applied to business problems: seeing industry norms, pricing structures, and business models as decomposable raw material rather than fixed categories.
Works Cited
- Gentner, D. (1983). "Structure-Mapping: A Theoretical Framework for Analogy." Cognitive Science, 7(2), 155-170.
- Jung-Beeman, M., Bowden, E. M., Haberman, J., Frymiare, J. L., Arambel-Liu, S., Greenblatt, R., Reber, P. J., & Kounios, J. (2004). "Neural Activity When People Solve Verbal Problems with Insight." PLoS Biology, 2(4), e97.
- Kounios, J., Frymiare, J. L., Bowden, E. M., Fleck, J. I., Subramaniam, K., Parrish, T. B., & Jung-Beeman, M. (2006). "The Prepared Mind: Neural Activity Prior to Problem Presentation Predicts Subsequent Solution by Sudden Insight." Psychological Science, 17(10), 882-890.
- Kounios, J., & Beeman, M. (2009). "The Aha! Moment: The Cognitive Neuroscience of Insight." Current Directions in Psychological Science, 18(4), 210-216.
- Green, A. E., Kraemer, D. J. M., Fugelsang, J. A., Gray, J. R., & Dunbar, K. N. (2010). "Connecting Long Distance: Semantic Distance in Analogical Reasoning Modulates Frontopolar Cortex Activity." Cerebral Cortex, 20(1), 70-76.
- Duncker, K. (1945). "On Problem-Solving." Psychological Monographs, 58(5), Whole No. 270.
- German, T. P., & Defeyter, M. A. (2000). "Immunity to Functional Fixedness in Young Children." Psychonomic Bulletin & Review, 7(4), 707-712.
- Ramm, J., Tjotta, S., & Torsvik, G. (2013). "Incentives and Creativity in Groups." CESifo Working Paper No. 4374.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
- Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press.
- Tetlock, P. E. (2005). Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press.