Decision-Making & Psychology

The Dunning-Kruger Effect Is Ruining Your Hiring (And Your Self-Assessment)

On January 6, 1995, a man named McArthur Wheeler robbed two banks in the Pittsburgh area in broad daylight. No mask. No disguise. He walked into the Swissvale branch of Mellon Bank, pointed a gun at the teller, collected $5,200, and left. Then he drove to Fidelity Savings Bank in Brighton Heights and did it again. The security cameras captured his face clearly both times.

Wheeler wasn't reckless. He had a plan. Before the robberies, he'd rubbed lemon juice all over his face. Lemon juice works as invisible ink on paper, so Wheeler reasoned it would make his face invisible to cameras. He'd even tested the theory by smearing juice on his face and taking a Polaroid. The photo came back blank, likely because the camera was misaimed or the film was flawed, but it was confirmation enough. The theory was sound.

When police arrested him on April 20, after his surveillance photo aired on Pittsburgh's Crime Stoppers, Wheeler was genuinely stunned. "But I wore the lemon juice," he told the officers. He wasn't performing confusion. He meant it. The Dunning-Kruger effect is the cognitive bias where people with the least ability in a given domain are the most likely to overestimate their competence, and the least equipped to recognize they're doing it. Wheeler couldn't evaluate his plan because the skills required to see its flaws were the same skills he lacked entirely.

A psychologist at Cornell named David Dunning first encountered Wheeler's story in the 1996 World Almanac, then tracked down a longer account in the Pittsburgh Post-Gazette. It struck him not as a funny crime blotter item but as a research question. What if the inability to recognize incompetence wasn't a personality flaw but a structural feature of how human cognition works? What if the same deficit that makes you bad at something is the deficit that prevents you from knowing you're bad at it?

Four years later, Dunning and his graduate student Justin Kruger published the answer.

What Is the Dunning-Kruger Effect?

Kruger and Dunning ran four studies on Cornell undergraduates, testing self-assessment accuracy across three domains: humor (rating how funny jokes were compared to expert comedian ratings), logical reasoning (a 20-item test), and English grammar. The results were consistent across all three.

Participants who scored in the bottom quartile, averaging around the 12th percentile, estimated their own performance at the 62nd percentile. A fifty-point gap between where they were and where they thought they were. Meanwhile, top-quartile performers, scoring around the 87th percentile, placed themselves around the 70th to 75th percentile. They underestimated, but only by about fifteen points.

The asymmetry was the finding. Low performers didn't just overestimate. They overestimated by a margin that dwarfed the underestimation at the top. Kruger and Dunning called it the "dual burden": the skills you need to produce correct answers are the same skills you need to recognize correct answers. If you lack them, you're doubly cursed. You get it wrong, and you can't tell you got it wrong.

The paper, published in the Journal of Personality and Social Psychology in December 1999, became one of the most cited studies in the history of psychology. It also became one of the most misunderstood.

Did the Dunning-Kruger Effect Actually Replicate?

If you've seen the Dunning-Kruger graph online, you've seen a smooth curve that climbs "Mount Stupid," plunges into the "Valley of Despair," then gradually ascends the "Slope of Enlightenment." That graph has no basis in any published research. It's internet mythology, invented sometime after 2010 and attributed backwards to a study that never produced it.

The real controversy is more interesting.

In 2017, a team led by Ed Nuhfer at California State University did something that should have made the psychology world uncomfortable. They took completely random data. No humans. No test scores. No self-assessments. Just fake numbers generated by a computer. Then they plotted those random numbers using the same graphical method Kruger and Dunning used.

The bottom quarter of the random data "overestimated" by 37.5 percentage points. The pattern looked almost identical to the original Dunning-Kruger graph. With no psychology involved at all.

The reason is mathematical, not psychological. If you divide any dataset into quartiles, the bottom group will average around the 12.5th percentile. If their self-assessments are random (centered near 50), you get automatic "overestimation" at the bottom and "underestimation" at the top. It's what regression to the mean looks like when you plot it a certain way.
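The artifact is easy to reproduce yourself. Here is a minimal sketch (my own illustration, not the Nuhfer team's code): generate two completely independent random variables, call one "actual score" and the other "self-assessment", convert both to percentile ranks, and bin by quartile of actual score. The bottom quartile "overestimates" by roughly 37.5 points with no psychology involved.

```python
import random

random.seed(42)

N = 10_000
# Two independent random variables: neither knows anything about the other.
actual = [random.random() for _ in range(N)]
self_est = [random.random() for _ in range(N)]

def percentile_ranks(xs):
    """Map each value to its percentile rank (0-100) within the sample."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = 100.0 * r / (len(xs) - 1)
    return ranks

actual_pct = percentile_ranks(actual)
self_pct = percentile_ranks(self_est)

# Bin by quartile of ACTUAL performance, as the classic plot does,
# and measure "overestimation" as self-assessment minus actual rank.
quartile_gaps = [[] for _ in range(4)]
for a, s in zip(actual_pct, self_pct):
    q = min(int(a // 25), 3)
    quartile_gaps[q].append(s - a)

for q, gaps in enumerate(quartile_gaps):
    print(f"Q{q + 1}: mean gap = {sum(gaps) / len(gaps):+.1f} percentile points")
```

The bottom quartile sits near the 12.5th percentile by construction, the random self-assessments center near 50, so the gap is about +37.5 at the bottom and about -37.5 at the top. That is the "Dunning-Kruger" shape, conjured from pure noise.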

Nuhfer then ran the study with real participants, 1,154 people completing science literacy self-assessments. He found that nearly 80 percent of low-scoring participants had relatively accurate self-assessments. The unskilled weren't systematically unaware. Most of them knew.

In 2020, Gilles Gignac and Marcin Zajenkowski pushed further with 929 participants and found that the relationship between actual ability and self-assessed ability was approximately linear, with a modest positive correlation of about 0.30. The degree to which people misjudged their ability was roughly equal across the entire skill spectrum. Their paper's title called the effect "mostly a statistical artefact."

So is the Dunning-Kruger effect dead?

Not exactly. The core empirical finding, that low performers tend to overestimate more than high performers underestimate, does replicate when researchers use independent measures that avoid the statistical confounds. It shows up in medical education, in chess ratings among amateurs, and in detecting fake news. What hasn't survived is the magnitude. The classic graph massively overstates the asymmetry. And the original causal explanation, that incompetence specifically robs self-insight through a metacognitive deficit, remains the most contested part. Many researchers accept the pattern but think simpler explanations cover it: the better-than-average effect (most people rate themselves above average regardless of skill) combined with regression to the mean.

The pop-culture version of the Dunning-Kruger effect, the version that became a meme, vastly overstates the science. It joins a long list of psychological findings that don't hold up the way people think they do. The real version is subtler, smaller, and more interesting.

Why Does Overconfidence Kill Companies?

In 1988, Arnold Cooper, Carolyn Woo, and William Dunkelberg surveyed 2,994 entrepreneurs and asked them to estimate their odds of success. Eighty-one percent rated their chances at 70 percent or higher. Thirty-three percent said their odds were 100 percent. Not "very high." One hundred percent. Absolute certainty.

The base rate for startup survival, depending on which definition you use, runs between 10 and 40 percent.

The gap between what founders believe and what actually happens isn't random noise. A 2022 meta-analysis of 62 studies on entrepreneurial overconfidence, led by Philipp Kraft and colleagues and published in the Journal of Business Venturing, found that overconfidence works in three distinct channels: overestimation (thinking you're better than you are in absolute terms), overplacement (thinking you're better than your competitors), and overprecision (being too certain about what you know). All three types stimulate people to start ventures. All three types impair performance after the venture has been founded.

That is the paradox: the bias that gets companies off the ground is the same bias that drives them into the ground.

Webvan is what this looks like at scale. Louis Borders, who'd already built Borders bookstores, recruited George Shaheen away from Andersen Consulting to run an online grocery delivery company. Shaheen had grown Andersen's revenue from $1.1 billion to $8.3 billion. He knew how to build. Webvan raised over $396 million in venture capital, then went public in November 1999 and raised another $375 million. Before proving the unit economics worked in a single market, they placed a $1 billion order with Bechtel to build automated warehouses and planned a simultaneous 26-city expansion, the same escalation-of-commitment pattern that drives founders to build before validating. They lost $830 million and filed for bankruptcy in July 2001.

Or Quibi. Jeffrey Katzenberg, former chairman of Walt Disney Studios, and Meg Whitman, former CEO of eBay and Hewlett-Packard. Two of the most successful executives in American business history. They raised $1.75 billion for a short-form mobile video platform, launched it in April 2020, and shut it down six months later. The target audience was already served by YouTube and TikTok. The two legendary executives assumed their pedigree guaranteed product-market fit. It didn't.

Colin Camerer and Dan Lovallo published a study in the American Economic Review in 1999 that explains the mechanism. They found that overconfident individuals disproportionately enter competitive markets, especially when they can self-select into skill-based competitions. Camerer and Lovallo called it "reference group neglect": entrepreneurs focus on their own capabilities and systematically ignore the capabilities of everyone else in the market. Too many enter, and once there, they persist too long. Excess entry plus excess persistence. Two mistakes compounding each other.

What Happens in Your Brain When You're Overconfident?

In 2016, a large-scale fMRI study led by Pascal Molenberghs, published in Social Cognitive and Affective Neuroscience, mapped what happens in the brain during metacognitive judgments. Participants made decisions, then rated their confidence in those decisions, while researchers watched the neural activity.

Two findings stood out. First, feeling confident activated reward areas: the bilateral striatum, hippocampus, and motor regions. Feeling uncertain activated the dorsomedial prefrontal cortex and bilateral orbitofrontal cortex, areas linked to negative affect. Confidence feels good. Uncertainty feels bad. Your brain is not neutral about which state you're in.

Second, and this is the part that matters: when the researchers controlled for actual accuracy, confidence was negatively correlated with metacognitive ability. The more confident you felt, the worse you were at distinguishing when you were right from when you were wrong. Confidence wasn't tracking reality. It was generating its own signal, and that signal felt like knowledge.

A separate study published in the Journal of Neuroscience in 2012 by Stephen Fleming and colleagues found that the rostrolateral prefrontal cortex shows greater activity during metacognitive self-report, and that the strength of this activity predicts metacognitive ability across individuals. The Molenberghs team's data also showed that reduced activation in the orbitofrontal cortex predicted overconfidence about incorrect answers. Overconfidence appears to be, at least partly, a failure in the brain's error-detection system. The alarm that should fire when you're wrong stays quiet.

Through the lens of predictive processing, the framework that runs through the neuroscience of Wired, overconfidence is a precision estimation failure. The brain builds internal models of the world and uses prediction errors, the gap between what it expected and what actually happened, to update those models. But the system also estimates how precise its own models are. If you believe your internal model is high-precision, you suppress the prediction errors that would otherwise teach you that you're wrong. You trust your predictions too much and discount the evidence against them, which is confirmation bias operating at the hardware level. The brain isn't lying to you. It's miscalibrating how much to trust itself.

Smart, experienced founders are often the most vulnerable. Past success inflates the precision estimate. You built something that worked. Your internal model produced accurate predictions in that context. The brain generalizes: the model is good. When the context changes and the model no longer applies, the precision estimate doesn't update fast enough. You're still trusting the old map in new terrain.

The Confidence Gap Nobody Talks About

The Dunning-Kruger effect gets all the attention, but its mirror image might be more costly.

In 1978, Pauline Clance and Suzanne Imes published a study in Psychotherapy: Theory, Research and Practice on 150 high-achieving women, faculty members and graduate students with outstanding academic records, awards, and professional accomplishments. Despite the evidence, these women persisted in believing they weren't really bright and had fooled everyone around them. They attributed their success to luck, charm, or the failure of others to see through them. Clance and Imes called it the impostor phenomenon.

Subsequent research found it wasn't limited to women. It shows up across genders and professions, and it has a specific relationship to the Dunning-Kruger pattern. High performers underestimate their competence partly because they're acutely aware of how much they don't know. They see the full complexity of the domain and assume everyone else finds it just as easy. The more you know, the more you know how much you don't know. That awareness, which is a feature of genuine expertise, gets misread as evidence of inadequacy.

For founders, this creates a systematic problem. The people most qualified to start a company and run it well are the people least likely to believe they're ready. The people least qualified are the most certain they are. Overconfident founders enter the market. Competent ones hesitate. The same dynamic plays out inside teams that punish dissent: the loudest voice wins, not the most accurate one. And the market doesn't select for the best. It selects for the most certain.

Try This: The Calibration Audit

Most entrepreneurs suffer from overplacement (thinking they're better than competitors) and overprecision (being too certain about what they know) more than overestimation. Each type requires a different correction.

  1. Run the pre-mortem. This is the same technique used by decision-makers operating under pressure to surface hidden risks. Before committing to a major decision, write one paragraph answering this question: "It's twelve months from now, and this failed. What went wrong?" The exercise forces your brain to generate prediction errors against its own plan. Research by Deborah Mitchell, Jay Russo, and Nancy Pennington found that prospective hindsight, the technique underlying the pre-mortem, increases the ability to correctly identify reasons for future outcomes by 30 percent. You're not being pessimistic. You're stress-testing the precision estimate.

  2. Score your predictions. For the next thirty days, write down three predictions each week about your business: will this customer sign, will this feature ship on time, will this campaign hit its target. At the end of the month, score yourself. Most people discover they're right about 50 to 60 percent of the time while feeling 80 to 90 percent certain. The gap between your felt certainty and your actual accuracy is your calibration error, and seeing the number changes the number.

  3. Seek the disagreement. Identify one person in your network who has relevant experience and will tell you the truth. Ask them one question: "What am I wrong about?" Not "What do you think of my plan?" which invites validation. The specific phrasing matters because it gives the other person permission to be direct. If they can't name anything, they're either not informed enough or not honest enough to be useful.

  4. Separate the three types. When you catch yourself feeling confident, ask which kind: Am I overestimating my own ability? Am I overestimating my advantage over competitors? Or am I too certain about a specific prediction? The interventions are different. Overestimation requires honest self-testing. Overplacement requires studying your competitors as carefully as you study yourself. Overprecision requires tracking your predictions against outcomes.

  5. Check the impostor side. If you read this post and your reaction is "I'm definitely the one who underestimates," consider that this reaction itself might be evidence of competence. The impostor phenomenon is most prevalent in people who actually have the skills. If you've been hesitating on a decision because you don't feel ready, ask: would you advise a friend with your exact experience and knowledge to wait? Usually the answer is no.
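Step 2 above can live in a spreadsheet, but a tiny script makes the calibration gap concrete. A minimal sketch, with all names and outcomes hypothetical: log each prediction with the certainty you felt when you made it, mark outcomes as they resolve, and compare felt certainty to hit rate.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    confidence: float          # felt certainty when you made the call, 0.0-1.0
    came_true: Optional[bool]  # fill in once the outcome is known

# Hypothetical month of business predictions (illustrative data only).
log = [
    Prediction("Customer X signs by Friday", 0.90, False),
    Prediction("Feature ships on schedule", 0.80, True),
    Prediction("Campaign hits its CTR target", 0.85, False),
    Prediction("Renewal closes this quarter", 0.90, True),
]

scored = [p for p in log if p.came_true is not None]
mean_confidence = sum(p.confidence for p in scored) / len(scored)
hit_rate = sum(1 for p in scored if p.came_true) / len(scored)
calibration_gap = mean_confidence - hit_rate  # positive = overprecise

print(f"Felt certainty:  {mean_confidence:.0%}")
print(f"Actual accuracy: {hit_rate:.0%}")
print(f"Calibration gap: {calibration_gap:+.0%}")
```

With this sample data, felt certainty averages 86 percent against a 50 percent hit rate, the same 30-plus-point gap the post describes. Seeing it as a number is the intervention.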


McArthur Wheeler received a sentence of 24 and a half years. Judge Gary Lancaster delivered it on January 5, 1996, almost exactly one year after the lemon juice robberies. Wheeler's story became a footnote in criminal history and an origin story in psychology, but the lesson most people take from it is the wrong one. They laugh at Wheeler and assume the Dunning-Kruger effect is about stupid people who don't know they're stupid. It's not. The effect, to the degree that it's real, is about the relationship between skill and self-knowledge in all of us. Every founder who's ever been certain a product would sell before testing it, every CEO who's dismissed a competitor they hadn't studied, every entrepreneur who's confused past success with future certainty is running the same miscalibrated computation Wheeler ran. Just with higher stakes and better lemon juice.

The prediction system that drives overconfidence is the same system that drives every decision your brain makes, from what to eat for lunch to whether to bet your savings on a startup. Chapter 2 of Wired explains how that system actually works: the dopamine circuit that fires not when you get a reward, but when you predict one, and why that distinction changes everything about how you evaluate risk, opportunity, and your own competence. The part that will unsettle you most starts with a monkey, a juice dispenser, and the most misunderstood molecule in neuroscience.


FAQ

What is the Dunning-Kruger effect? The Dunning-Kruger effect is the cognitive bias where people with the least skill in a given domain tend to overestimate their competence by the largest margin. In the original 1999 study by Justin Kruger and David Dunning at Cornell, bottom-quartile performers estimated themselves at the 62nd percentile while actually scoring at the 12th. The proposed mechanism is a "dual burden": the skills needed to perform well are the same skills needed to recognize poor performance.

Has the Dunning-Kruger effect been debunked? Partially. The core finding, that low performers overestimate more than high performers underestimate, replicates in cognitive tasks with proper methodology. But a 2017 study by Ed Nuhfer showed that the classic Dunning-Kruger graph can be reproduced with completely random data, and a 2020 study by Gignac and Zajenkowski called the effect "mostly a statistical artefact." The popular internet version with "Mount Stupid" and the "Valley of Despair" has no basis in any published research.

How does overconfidence affect entrepreneurs? A survey of 2,994 entrepreneurs found that 81 percent rated their success odds at 70 percent or higher, and 33 percent said 100 percent, despite actual startup survival rates of 10 to 40 percent. A 2022 meta-analysis of 62 studies found that overconfidence stimulates venture creation but impairs performance after founding. The same bias that helps entrepreneurs start companies hurts them once they're running one.

What is the opposite of the Dunning-Kruger effect? The impostor phenomenon, identified by Pauline Clance and Suzanne Imes in 1978, describes high-achieving individuals who believe they've fooled everyone and don't deserve their success. It's related to the Dunning-Kruger pattern: experts underestimate their competence because they're acutely aware of domain complexity and assume others find it equally easy. The result is a systematic confidence gap where the most qualified people are the least likely to act on their abilities.

Works Cited

  • Kruger, J., & Dunning, D. (1999). "Unskilled and Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-Assessments." Journal of Personality and Social Psychology, 77(6), 1121-1134. https://psycnet.apa.org/record/1999-15054-002

  • Nuhfer, E., Fleisher, S., Cogan, C., Wirth, K., & Gaze, E. (2017). "How Random Noise and a Graphical Convention Subverted Behavioral Scientists' Explanations of Self-Assessment Data." Numeracy, 10(1), Article 4. https://digitalcommons.usf.edu/numeracy/vol10/iss1/art4/

  • Gignac, G. E., & Zajenkowski, M. (2020). "The Dunning-Kruger Effect Is (Mostly) a Statistical Artefact." Intelligence, 80. https://www.sciencedirect.com/science/article/abs/pii/S0160289620300271

  • Cooper, A. C., Woo, C. Y., & Dunkelberg, W. C. (1988). "Entrepreneurs' Perceived Chances for Success." Journal of Business Venturing, 3(2), 97-108.

  • Kraft, P., Gunther, C., Kammerlander, N., & Lampe, J. (2022). "Overconfidence and Entrepreneurship: A Meta-Analysis of Different Types of Overconfidence in the Entrepreneurial Process." Journal of Business Venturing, 37(4). https://www.sciencedirect.com/science/article/abs/pii/S0883902622000192

  • Camerer, C., & Lovallo, D. (1999). "Overconfidence and Excess Entry: An Experimental Approach." American Economic Review, 89(1), 306-318. https://www.aeaweb.org/articles?id=10.1257/aer.89.1.306

  • Molenberghs, P., Trautwein, F. M., et al. (2016). "Neural Correlates of Metacognitive Ability and of Feeling Confident." Social Cognitive and Affective Neuroscience, 11(12), 1942-1951. https://pmc.ncbi.nlm.nih.gov/articles/PMC5141950/

  • Fleming, S. M., Huijgen, J., & Dolan, R. J. (2012). "Prefrontal Contributions to Metacognition in Perceptual Decision Making." Journal of Neuroscience, 32(18), 6117-6125. https://www.jneurosci.org/content/32/18/6117

  • Clance, P. R., & Imes, S. A. (1978). "The Imposter Phenomenon in High Achieving Women: Dynamics and Therapeutic Intervention." Psychotherapy: Theory, Research and Practice, 15(3), 241-247. https://paulineroseclance.com/pdf/ip_high_achieving_women.pdf

  • Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making, 2(1), 25-38.

  • Krueger, J., & Mueller, R. A. (2002). "Unskilled, Unaware, or Both? The Better-Than-Average Heuristic and Statistical Regression Predict Errors in Estimates of Own Performance." Journal of Personality and Social Psychology, 82(2), 180-188.


Reading won't build your business.

The strategies in this post work — but only if you use them. Inside The Launch Pad, you get the frameworks, the feedback, and the accountability to actually execute.

Build Your Exit