
The Planning Fallacy: Why Every Project Takes Longer Than You Think

In 1957, the government of New South Wales announced an international design competition for a new performing arts center on Bennelong Point in Sydney Harbour. Two hundred and thirty-three entries arrived from thirty-two countries. The winner was a thirty-eight-year-old Danish architect named Jørn Utzon, whose submission was little more than a set of preliminary sketches: bold, sweeping shell-like forms that looked unlike anything ever built.

The official estimate: four years of construction, seven million Australian dollars.

The government was so eager to begin that they authorized demolition of the existing tram depot in 1958 and broke ground in March 1959, before Utzon had completed his structural drawings. The reasoning was political. Wait too long and public enthusiasm might cool. Wait too long and funding might evaporate. Better to start now and solve the engineering problems as they arose.

The engineering problems did not cooperate. The signature roof shells that had won the competition existed as beautiful curves on paper, but no one had figured out how to actually build them. Three years of structural analysis produced no viable solution. Utzon eventually arrived at a design based on sections of a single sphere, but the delay had already cascaded through the entire project. Costs mounted. Timelines stretched. The government changed the blueprint requirements from two theatres to four. Arguments between Utzon and government officials escalated until, in 1966, the architect resigned and left Australia with his family. He never returned to see his building completed.

The Sydney Opera House opened on October 20, 1973. Sixteen years after the competition. The final cost: 102 million Australian dollars, more than fourteen times the original estimate. The Guinness Book of World Records would later cite it as the largest proportional cost increase for a building project ever recorded.

If you've ever quoted a client three weeks and delivered in seven, estimated a product launch for Q2 and shipped in Q4, or told your co-founder that the MVP would take two months and found yourself still debugging four months later, you've experienced the same phenomenon, on a smaller scale. And if you assumed the problem was poor planning, bad luck, or insufficient effort, you were wrong. The problem is a cognitive bias so reliable that it has a name, a fifty-year research history, and a single correction technique that most people have never heard of.

What Is the Planning Fallacy?

In 1979, Daniel Kahneman and Amos Tversky published a paper called "Intuitive Prediction: Biases and Corrective Procedures" that introduced the term planning fallacy. The definition was precise: a systematic tendency to underestimate the time, costs, and risks of future actions while simultaneously overestimating their benefits. Not occasionally. Not under certain conditions. Systematically, meaning the error goes in the same direction nearly every time, and knowing about the error doesn't fix it.

The theoretical engine behind the planning fallacy is what Kahneman and Tversky called the inside view versus the outside view.

The inside view is what happens naturally when you think about your project. You picture the specific steps. You imagine the sequence unfolding. You mentally simulate the work: first we'll build this, then we'll integrate that, then we'll test, then we'll launch. The inside view feels detailed and therefore feels accurate. You're thinking about your project, with your team, in your specific circumstances.

The outside view asks a different question entirely: what happened when other people tried something like this? Not your project specifically, but the category your project belongs to. How long did similar products take to build? What percentage of comparable launches shipped on time? What was the average cost overrun for projects of this type and scale?

The inside view generates a narrative. The outside view generates a statistic. And the consistent finding, across decades of research, is that the statistic is more accurate than the narrative, and that people almost never use it.

Kahneman experienced this himself. In the late 1970s, he was part of a team developing a new curriculum and textbook for teaching decision-making in Israeli high schools. After about a year of productive work, he asked each team member to independently estimate how long the project would take to complete. The estimates clustered around two years. Feeling good about their progress, Kahneman then turned to Seymour Fox, a curriculum development expert on the team, and asked a different question: of teams similar to ours who have attempted projects like this, how long did they take?

Fox went quiet. He hadn't thought about it that way before. After reflection, he reported that roughly forty percent of comparable teams never finished at all. The teams that did finish had taken seven to ten years. And he rated Kahneman's team as slightly below average in terms of resources and skills.

The room absorbed this. They had their inside view (two years, good progress, talented team) and their outside view (seven to ten years if they finished, forty percent chance they wouldn't). Every person in the room understood the implications of the data Fox had just provided. They continued with the project anyway. The textbook was completed eight years later.

This is the planning fallacy's signature move. The data doesn't help. Knowing about the bias doesn't correct the bias. Understanding that you're taking the inside view doesn't switch you to the outside view. The inside view feels like thinking. The outside view feels like giving up.

The Numbers Don't Lie (But You Will)

In 1994, psychologists Roger Buehler, Dale Griffin, and Michael Ross designed the experiment that made the planning fallacy undeniable in individual behavior.

They asked university students to predict when they would complete their senior thesis projects. The researchers didn't just ask for a single estimate. They asked for three: the date by which students thought there was a fifty percent chance they'd be done, a seventy-five percent chance, and a ninety-nine percent chance. That last number was, in effect, the students' worst-case scenario, the date by which they were virtually certain the work would be finished.

The results were stark. Only thirteen percent of students finished by their fifty percent estimate. Only nineteen percent finished by their seventy-five percent estimate. And the ninety-nine percent probability level, the date that should have captured all but the most extreme outliers, caught fewer than half: only forty-five percent of students finished by their own worst-case deadline. More than half blew through it.

The researchers found three mechanisms driving the pattern. First, people underestimate their own completion times but not other people's. When asked to predict how long a classmate would take, students were significantly more accurate. The inside view is self-specific. You can see other people's projects clearly because you're automatically taking the outside view on them.

Second, when generating predictions, people focus on plan-based scenarios rather than past experience. They imagine how the work will unfold going forward rather than remembering how similar work actually unfolded in the past. The mental simulation feels informative. It feels like analysis. But it's fiction, a story about a future that hasn't happened, constructed by a brain that systematically edits out the delays, interruptions, and complications that characterized every previous project.

Third, when people do recall past experiences, they attribute the delays to exceptional circumstances. The last project took longer because the requirements changed. Or because a team member left. Or because the API was poorly documented. Each delay had a specific, identifiable cause, and therefore, the reasoning goes, has no bearing on the current project, which won't have those specific causes. This is the inside view defending itself. Every delay is unique, so no delay is predictable, so the optimistic estimate survives contact with contradictory evidence.

Bent Flyvbjerg, a Danish economic geographer at Oxford, took the planning fallacy from the psychology lab to the infrastructure site. Starting in the early 2000s, he began building what would become the largest database of megaproject performance ever assembled: more than sixteen thousand projects spanning 136 countries, worth hundreds of billions of dollars.

His findings were blunt. Nine out of ten megaprojects exceed their budgets. The average cost overrun for rail projects is 44.7 percent. For bridges and tunnels, thirty-four percent. For dams, ninety-six percent. IT projects are among the worst offenders, with some of the most extreme cost overruns of any project category. Flyvbjerg called it the Iron Law of Megaprojects: "Over budget, over time, under benefits, over and over again."

The pattern is so consistent that Flyvbjerg concluded it couldn't be explained by bad luck or incompetence alone. Something systematic was at work. That something was the inside view, operating at institutional scale: thousands of project managers, each one convinced that their project was different, each one generating a plan-based estimate that ignored the statistical base rate of what actually happens when humans build things.

Why Founders Are the Worst Offenders

If the planning fallacy affects everyone, founders get it worst. The same psychological traits that make someone willing to start a company (optimism, conviction, tolerance for uncertainty) are precisely the traits that amplify the bias.

Start with optimism bias, the broader tendency to overestimate positive outcomes and underestimate negative ones. Research consistently shows that entrepreneurs score higher on optimism measures than the general population. This isn't incidental to founding a company. It's a prerequisite. No one starts a startup while giving full weight to the base rate of failure. The act of founding is, in a sense, an act of dismissing the outside view. If you took the outside view seriously (ninety percent of startups fail, most products take twice as long as estimated, most markets are smaller than founders believe), you'd never start.

But the optimism that gets you started is the same optimism that corrupts your timeline estimates. When a founder says "we'll have the MVP in eight weeks," the brain that generated that number is the same brain that believed the company would work in the first place. It's not estimating. It's narrating. It's constructing a story about how the next eight weeks will unfold, and the story has the same optimistic protagonist as the founding narrative.

Then add emotional investment. Kahneman and Tversky's inside view is amplified when the project isn't just something you're managing; it's something you conceived, believe in, and have staked your identity on. The sunk cost mechanism compounds this: the more time and money you've poured into a plan, the harder it becomes to revise the plan's assumptions, including the assumption about when it will be done. Admitting the timeline was wrong feels like admitting the vision was wrong. So the timeline persists, and when it breaks, the founder attributes the delay to circumstances rather than to the estimate itself.

Finally, consider the social pressure. Founders make timeline commitments to investors, customers, partners, and team members. Each commitment creates an anchor, a stated expectation that makes revision feel like failure rather than recalibration. The investor who was told "Q3 launch" doesn't want to hear "actually Q1 next year." The team that was told "two-week sprint" doesn't want to hear "we need four." So founders learn to make aggressive estimates and then work backward from the commitment, which means they're not planning anymore. They're wishing.

The result is what any founder recognizes: the perpetual state of being "almost done." The launch is always two weeks away. The feature is always ninety percent complete. The fundraise is always "just a few more conversations." Each of these feels true in the moment. The inside view is vivid and specific and persuasive. And each of these has been true in the moment for the last three months.

Reference Class Forecasting: The One Technique That Works

In the early 2000s, Bent Flyvbjerg and his colleague Carsten Glenting developed a method specifically designed to counteract the planning fallacy. They called it reference class forecasting, and it was based directly on Kahneman and Tversky's distinction between the inside and outside view.

The method has three steps. They are deceptively simple.

Step one: Identify a reference class. Find a set of past projects that are genuinely similar to yours. Not identical — no project is identical. Similar in type, scale, and complexity. If you're building a mobile app with a four-person team, your reference class is other mobile apps built by small teams. If you're launching a SaaS product in a competitive market, your reference class is other SaaS launches in competitive markets. The key is specificity without uniqueness. Your project is a member of a class. Find the class.

Step two: Establish the distribution. For the reference class you've identified, gather data on actual outcomes. How long did those projects take? What did they cost? What was the typical overrun? You don't need a single number. You need a distribution, a range showing where most outcomes cluster and how far the tails extend. If you can find data showing that similar apps took between four and nine months, with a median of six and an average of seven, that distribution is more information than any amount of inside-view planning will produce.

Step three: Position your project within the distribution. Based on the specific characteristics of your project, where does it fall? If your team is more experienced than average, you might position yourself slightly below the median. If you're using unfamiliar technology, slightly above. The critical discipline is that you're adjusting from the base rate, not generating a number from scratch. The outside view is the starting point. The inside view is the adjustment. Most people do the opposite: they start with the inside view and never consult the base rate at all.
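The three steps reduce to a small calculation. The sketch below uses hypothetical reference-class data (the durations and the adjustment factor are illustrative, not from any study cited here) to show the order of operations: distribution first, adjustment second.

```python
import statistics

# Step one/two: a hypothetical reference class of similar projects,
# with actual completion times in months (illustrative numbers).
reference_class = [4.5, 5.0, 6.0, 6.0, 7.0, 8.5, 9.0]

median = statistics.median(reference_class)  # the typical outcome
low, high = min(reference_class), max(reference_class)

# Step three: position your project within the distribution.
# Adjust FROM the base rate, not from scratch. A multiplier below 1.0
# encodes "more experienced team than average"; above 1.0 encodes
# "unfamiliar technology" or similar added risk.
adjustment = 1.1  # assumption: slightly above median, new tech stack
estimate = median * adjustment

print(f"Reference class: {low}-{high} months, median {median}")
print(f"Adjusted estimate: {estimate:.1f} months")
```

The discipline is in the last two lines: the inside view appears only as a modest multiplier on the base rate, never as the starting number.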

Flyvbjerg's data showed that reference class forecasting dramatically improved accuracy. The UK Department for Transport adopted it in 2004 for all large transport projects after Flyvbjerg demonstrated that conventional forecasting methods were essentially uncorrelated with actual outcomes. The method doesn't eliminate overruns. It shrinks them. It replaces the comfortable fiction of the inside view with the uncomfortable statistics of what actually happens, and uncomfortable statistics make better budgets than comfortable fictions.

The power of the technique isn't mathematical. It's psychological. Reference class forecasting forces the outside view into a process that would otherwise exclude it entirely. Left to their own devices, planners will always generate an inside-view estimate. The three-step method doesn't ask them to stop doing that. It asks them to check their estimate against reality before committing to it.

Try This: The Base Rate Budget

You don't need a database of sixteen thousand megaprojects. You need the discipline to ask one question before committing to any timeline: what happened last time?

Here's a protocol you can run before your next product launch, feature build, fundraise, or hire.

  1. Write down your inside-view estimate first. Don't censor it. Whatever your gut says ("six weeks," "Q3," "end of month"), write it down. This is your baseline, and you'll need it for comparison.

  2. Identify your reference class. List three to five past projects of similar type and scope. These can be your own past projects or projects from other companies you have data on. If you're estimating a feature build, look at the last five features your team shipped. If you're estimating a fundraise, ask three founders who've raised a similar round how long it took. The reference class doesn't have to be perfect. It has to be real.

  3. Record the actual outcomes. For each project in your reference class, note how long it actually took, not how long it was supposed to take. If you estimated four weeks and it took seven, the data point is seven. If you can't find actual completion data, that itself is information: you've been operating without an outside view entirely.

  4. Calculate the median and the range. What was the typical actual outcome? What was the best case? What was the worst? Your inside-view estimate is almost certainly closer to the best case than the median. The median is your new estimate. The range tells you how much uncertainty to budget for.

  5. Apply the multiplier. If the median actual outcome in your reference class is 1.8x the original estimate, apply that multiplier to your inside-view estimate. Six weeks becomes roughly eleven. "Q3" becomes "probably Q4." This feels pessimistic. That feeling is the inside view objecting to the outside view. The feeling is not evidence. The data is evidence.

  6. Communicate the range, not the point estimate. Tell your investors, your team, and your customers a range: "Based on similar past projects, this will likely take eight to fourteen weeks, with a most likely completion around week eleven." Ranges feel less certain. They are less certain. They are also more honest, and honest estimates build more trust than optimistic estimates that miss.
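The six steps above can be sketched in a few lines. All numbers here are hypothetical examples for illustration, not data from the studies cited in this article.

```python
import statistics

# Step 1: the uncensored inside-view estimate, in weeks.
inside_view = 6.0

# Steps 2-3: a reference class of past projects, recorded as
# (original estimate, actual duration) pairs, in weeks.
reference_class = [(4, 7), (6, 10), (3, 6), (8, 13), (5, 11)]

# Step 4: per-project overrun multipliers, then the typical one.
multipliers = [actual / est for est, actual in reference_class]
median_multiplier = statistics.median(multipliers)

# Step 5: apply the median multiplier to the inside-view estimate.
calibrated = inside_view * median_multiplier

# Step 6: communicate a range, not a point estimate. The spread of
# past multipliers bounds the range.
low = inside_view * min(multipliers)
high = inside_view * max(multipliers)

print(f"Inside view: {inside_view:.0f} weeks")
print(f"Calibrated:  {calibrated:.1f} weeks (range {low:.1f}-{high:.1f})")
```

With these example numbers, the median past overrun is 1.75x, so a six-week gut estimate becomes roughly ten and a half weeks, inside a ten-to-thirteen-week range. The exact figures matter less than the habit: the gut number is an input, never the answer.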

Run this protocol once and you'll be startled by the gap between your inside-view estimate and the base rate. Run it consistently and you'll notice something strange: your inside-view estimates start adjusting on their own. The outside view, once consulted regularly, begins to infiltrate the narrative. Your gut still generates a number. But the number gets closer to reality, because the gut has been trained on better data.

This is the same principle behind first principles thinking, stripping away assumptions and building from verified foundations. The planning fallacy is, at its core, a failure to verify the foundation. The foundation of every timeline estimate is the assumption that this project will go roughly as planned. Reference class forecasting tests that assumption against the only evidence that matters: what actually happened when other people assumed the same thing.


The Sydney Opera House was supposed to take four years. It took sixteen. Kahneman's textbook was supposed to take two years. It took eight. Buehler's students were ninety-nine percent certain they'd meet their deadline. Fewer than half did. Flyvbjerg's database of sixteen thousand projects shows that nine out of ten exceed their budgets, and the pattern holds across every industry, every country, and every decade he's studied.

The planning fallacy isn't a failure of effort or intelligence. It's a feature of how the human brain constructs predictions: by simulating a specific future rather than consulting the statistical past. The inside view feels like thinking. The outside view feels like surrendering. And that asymmetry is why the bias survives contact with the data, survives awareness of itself, and survives the memory of every previous project that took longer than you thought it would.

You cannot think your way out of the inside view. But you can build a process that forces the outside view into the room before commitments are made. Reference class forecasting isn't pessimism. It's the one form of optimism that's earned — the kind that's calibrated to what actually happens, not what you hope will happen.

The next time you catch yourself saying "it should only take a few weeks," stop. Ask: what happened last time? And if the answer makes you uncomfortable, that discomfort is the outside view arriving. Don't fight it. It's the most useful feeling you'll have all quarter.

The neuroscience of why your brain defaults to the inside view (why plan-based thinking feels more real than base-rate thinking, and why the narrative always defeats the statistic until you build structural defenses against it) is one of the core mechanisms covered in Wired. If you've ever watched a timeline disintegrate in real time while everyone involved kept saying "we're almost there," that chapter explains what was happening in every brain in the room.


FAQ

What is the planning fallacy and why does it affect founders? The planning fallacy is the systematic tendency to underestimate the time, costs, and risks of future actions while overestimating their benefits. It was identified by Daniel Kahneman and Amos Tversky in 1979. Founders are especially vulnerable because the same traits that drive entrepreneurship (optimism, conviction, emotional investment) are precisely the traits that amplify the bias. The act of founding a company is, in a sense, an act of dismissing the outside view. That same dismissal corrupts every timeline estimate the founder produces.

What is the difference between the inside view and the outside view? The inside view focuses on the specific details of your project: the steps, the plan, the unique circumstances. It generates a narrative about how the work will unfold. The outside view asks what happened when other people tried something similar and generates a statistic based on actual outcomes. Research consistently shows the outside view is more accurate, but people almost never use it because the inside view feels more detailed and therefore more trustworthy.

What is reference class forecasting and how do I use it? Reference class forecasting is a three-step technique developed by Bent Flyvbjerg based on Kahneman and Tversky's work. First, identify a reference class of similar past projects. Second, gather data on their actual outcomes to establish a distribution. Third, position your project within that distribution based on its specific characteristics. The critical discipline is starting from the base rate and adjusting, rather than generating an estimate from scratch using only the inside view.

How far off are most project estimates? Flyvbjerg's database of over sixteen thousand megaprojects shows that nine out of ten exceed their budgets. Rail projects average 44.7 percent cost overruns. IT projects that exceed fifty percent overrun average a staggering 447 percent. In Buehler, Griffin, and Ross's study, fewer than half of university students completed their thesis by the deadline they'd assigned a ninety-nine percent probability. The pattern is consistent across industries, countries, and decades.

How does the planning fallacy relate to sunk costs and analysis paralysis? The planning fallacy often feeds directly into sunk cost reasoning. When a project takes longer than estimated, founders have already invested more time and money than planned, which makes it psychologically harder to abandon or revise the approach. The planning fallacy creates the overrun; sunk cost bias locks you into it. Meanwhile, analysis paralysis can arise when founders overcorrect, becoming so aware of estimation errors that they struggle to commit to any timeline at all. The solution isn't to stop planning. It's to plan using the outside view.

Works Cited

  • Kahneman, D., & Tversky, A. (1979). "Intuitive Prediction: Biases and Corrective Procedures." TIMS Studies in Management Science, 12, 313–327.
  • Buehler, R., Griffin, D., & Ross, M. (1994). "Exploring the 'Planning Fallacy': Why People Underestimate Their Task Completion Times." Journal of Personality and Social Psychology, 67(3), 366–381. https://doi.org/10.1037/0022-3514.67.3.366
  • Flyvbjerg, B., Skamris Holm, M., & Buhl, S. (2003). "How Common and How Large Are Cost Overruns in Transport Infrastructure Projects?" Transport Reviews, 23(1), 71–88. https://doi.org/10.1080/01441640309904
  • Flyvbjerg, B. (2006). "From Nobel Prize to Project Management: Getting Risks Right." Project Management Journal, 37(3), 5–15.
  • Flyvbjerg, B., & Gardner, D. (2023). How Big Things Get Done: The Surprising Factors That Determine the Fate of Every Project, from Home Renovations to Space Exploration and Everything In Between. Currency.
  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  • "Sydney Opera House." Wikipedia. https://en.wikipedia.org/wiki/Sydney_Opera_House

Reading won't build your business.

The strategies in this post work — but only if you use them. Inside The Launch Pad, you get the frameworks, the feedback, and the accountability to actually execute.

Build Your Exit