On the evening of January 27, 1986, a three-way teleconference connected engineers and managers at Morton Thiokol in Utah, NASA's Marshall Space Flight Center in Alabama, and the Kennedy Space Center in Florida. The subject was whether to launch the space shuttle Challenger the following morning. Temperatures at the launch site were forecast to drop into the low 20s Fahrenheit overnight, and a group of engineers led by Roger Boisjoly had spent the previous six months warning that the rubber O-rings sealing the joints of the solid rocket boosters would lose elasticity in cold weather. Boisjoly had put the warning in writing, in a memo to Morton Thiokol's vice president of engineering, in language that left no room for interpretation: the failure to act would result in "a catastrophe of the highest order — the loss of human life."
On that call, the engineers presented thirteen charts recommending against launch. The conclusion was unambiguous: do not fly. Lawrence Mulloy, NASA's solid rocket booster project manager, rejected the data. "My God, Thiokol, when do you want me to launch, next April?" he said. The comment shifted the entire tone of the conversation.
Morton Thiokol's vice president of space booster programs, Joe Kilminster, asked for a five-minute offline caucus. During that break, senior executive Jerald Mason turned to Bob Lund, the company's vice president of engineering, and told him to "take off your engineering hat and put on your management hat." The engineers were excluded from the final decision. Four executives voted. The count was four to zero in favor of launch. Kilminster returned to the call and told NASA that the data was "inconclusive" and that Morton Thiokol now recommended proceeding. He signed the fax authorizing launch. Allan McDonald, who directed the booster rocket project at the launch site, refused to sign it.
The next morning, the O-ring seal on the right solid rocket booster failed exactly as Boisjoly had predicted, and seventy-three seconds after liftoff Challenger broke apart. Seven crew members died. The information that could have prevented the disaster had been in the room the night before, presented clearly, supported by data, argued passionately. It didn't matter. A small group of managers, insulated from dissent and pressured by schedule, overrode the people who knew the most.
What happened in that room has a clinical name. Irving Janis gave it one in 1972: groupthink. And while the Challenger disaster is the most frequently cited example, the mechanism behind it is operating in your conference room right now, in ways that neuroscience is only beginning to map.
Groupthink is the tendency for cohesive groups to converge on consensus at the expense of critical evaluation. It doesn't require stupidity or bad intentions. It requires exactly the conditions most teams are designed to create: trust, shared purpose, and the desire to move forward together. The neuroscience reveals something worse than a decision-making flaw. It reveals that the human brain is wired to treat agreement as a reward and disagreement as a threat, and that no amount of intelligence or expertise overrides this circuitry automatically. What follows is the science behind that wiring, the surprisingly thin line between healthy team cohesion and catastrophic groupthink, and a protocol for staying on the right side of it.
The Eight Symptoms That Janis Saw (and the One He Missed)
Irving Janis was a research psychologist at Yale who spent years studying how the smartest rooms in American government produced some of its worst decisions. The Bay of Pigs invasion. The escalation of Vietnam. The failure to anticipate Pearl Harbor. His 1972 book, Victims of Groupthink, laid out a framework that has shaped organizational psychology for half a century.
Janis identified eight symptoms: an illusion of invulnerability, collective rationalization, belief in the group's inherent morality, stereotyped views of outsiders, self-censorship, an illusion of unanimity, direct pressure on dissenters, and the emergence of mindguards who shield the group from contradictory information. Read that list and count how many you've seen in a meeting this month.
What Janis didn't have access to, writing in 1972, was a way to look inside the brain while groupthink was happening. He could describe the symptoms, but he couldn't explain the mechanism. Why does a perfectly competent engineer sit silent while a room full of managers overrides his data? Why does an executive at Nokia, sitting on evidence that the iPhone would destroy the company's market position, nod along with a plan everyone privately knows won't work? Your team is lying to you not because they're cowards, but because the neural cost of dissent is real, measurable, and biologically expensive. The neuroscience that's emerged since Janis tells us exactly how expensive.
What Happens Inside Your Brain When You Disagree With the Room?
In 2009, Vasily Klucharev and his colleagues at the F.C. Donders Center for Cognitive Neuroimaging in the Netherlands put people in an fMRI scanner and asked them to rate the attractiveness of faces. Simple task. No stakes, no consequences. After each rating, participants were shown the average rating from a group of peers.
When the participant's rating matched the group's, nothing remarkable happened in the scanner. But when their rating diverged, two things happened simultaneously. The rostral cingulate zone activated, generating what neuroscientists call a prediction error signal. This is the same region that fires when you make a mistake, when you reach for a cup and it's heavier than expected, when you predict one outcome and get another. The brain registered disagreeing with the group as being wrong, in the same neural language it uses for factual errors.
At the same time, activity in the ventral striatum dropped. The ventral striatum is a core node in the brain's reward network, the same circuitry that responds to food, money, and social approval. Disagreeing with the group didn't just trigger an error signal. It withdrew a reward. The brain was simultaneously punishing the person for diverging and withholding the pleasure that comes from fitting in.
Here's the part that should concern anyone who leads a team. The magnitude of these signals predicted what people did next. Participants with the strongest error signal and the sharpest reward drop were the ones who changed their ratings to match the group afterward. The brain wasn't just registering discomfort. It was running a learning algorithm, updating future behavior to avoid the same "mistake." Klucharev described it as addressing "the most fundamental social mistake — that of being too different from others." Conforming isn't a personality flaw. It's reinforcement learning.
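The reinforcement-learning framing can be made concrete with a few lines of code. What follows is a toy delta-rule sketch, not the study's actual computational model: the function name, the learning rate, and the numbers are all illustrative assumptions. It shows the shape of the mechanism, a prediction error dragging a private judgment toward the group average on each exposure.

```python
# Toy model of conformity as reinforcement learning, in the spirit of
# Klucharev et al. (2009). Names and parameters are illustrative
# assumptions, not the study's actual model.

def conformity_update(my_rating: float, group_rating: float,
                      learning_rate: float = 0.3) -> float:
    """Shift a private rating toward the group after a prediction error.

    prediction_error: how far my judgment diverged from the group's.
    The larger the error, the larger the correction on the next
    exposure -- the same delta-rule shape used to model reward learning.
    """
    prediction_error = group_rating - my_rating
    return my_rating + learning_rate * prediction_error

# Repeated exposure to the same group opinion pulls the rating in,
# the pattern Klucharev observed when participants re-rated faces.
rating = 8.0   # my initial attractiveness rating
group = 4.0    # the group's average rating
for _ in range(3):
    rating = conformity_update(rating, group)
    print(round(rating, 2))   # 6.8, then 5.96, then 5.37
```

The point of the sketch is the direction of the update: the brain isn't storing "I disagreed." It's adjusting the judgment itself so the disagreement doesn't recur.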
Four years earlier, Gregory Berns at Emory University had found something even more disturbing. Berns built a conformity experiment around a mental rotation task inside an fMRI scanner: in a design inspired by Solomon Asch's classic conformity studies, participants judged whether rotated shapes were the same or different while confederates sometimes gave unanimously wrong answers. In Asch's original 1951 line-matching study, 75 percent of participants conformed to an obviously wrong group answer at least once. The conventional explanation was social compliance: people knew the right answer but went along to avoid standing out.
Berns's brain scans told a different story. When participants conformed, the activity showed up not in decision-making areas but in the occipital and parietal cortex, the regions responsible for spatial and visual perception. The group's opinion hadn't just changed what participants chose to say. It had changed how their brains processed the visual information. Berns also found that when participants resisted the group, the amygdala activated, the brain's alarm center for threat. Going against the room didn't just feel uncomfortable. The brain processed it as danger.
So here's the picture that emerges. When your team member sits in a meeting and hears five people agree with a direction they believe is wrong, their brain is simultaneously generating an error signal (you're wrong), withdrawing a reward (you don't belong), altering their perception (maybe they're right), and triggering a fear response (this is dangerous). That's four systems working against the single cognitive process of maintaining an independent judgment. The wonder isn't that people conform in groups. The wonder is that anyone ever disagrees at all.
Why Would Evolution Build a Brain That Surrenders to the Crowd?
The instinct is to see groupthink as a bug. A design flaw in the neural hardware. But that framing misses the math of human survival.
For roughly 200,000 years of Homo sapiens history, the social group was the unit of survival. A solitary human on the African savanna was a dead human. Hunting required coordination. Defense required numbers. The individual who disagreed with the group's decision about where to hunt, or about whether a rustling in the grass was a threat, might occasionally have been right, but being right and alone was worse than being wrong and together. Natural selection didn't optimize for accuracy. It optimized for cohesion.
This is the paradox. The same neural machinery that creates disastrous consensus in a boardroom also creates the shared mental models that make high-functioning teams possible. Research on team cognition has consistently shown that groups with a shared understanding of their tasks, roles, and goals outperform groups without one. Cohesive groups coordinate faster, communicate more efficiently, and adapt more fluidly. You don't want a team of lone wolves who never converge. That's not a team. That's a collection of individuals sharing a conference room.
The problem isn't cohesion. It's unexamined cohesion. Two teams can look identical from the outside. In one, members have built a shared model through rigorous debate, stress-testing assumptions and integrating diverse perspectives; they converge because the evidence points there. In the other, members have converged because disagreeing activates a pain circuit, because the first person to speak set an anchor nobody corrected, because the reward of belonging outweighed the cost of speaking up. The first is alignment. The second is groupthink. By the time you can see the difference, the shuttle has already launched.
Janis saw this duality, even without the neural data. Groupthink doesn't emerge in groups with low cohesion. It emerges in groups where members like each other, trust each other, and share a sense of identity. It's a disease of closeness. And the cure can't be distance, because distance destroys the thing that makes teams effective in the first place.
Think of it as a volume knob. Cohesion at zero gives you a dysfunctional group that can't coordinate. Cohesion at ten gives you a group that coordinates beautifully around a decision that kills seven astronauts. The setting you want is somewhere around seven: enough shared understanding to move fast, enough structural friction to catch errors before they become irreversible.
How Do You Get Cohesion Without the Pathology?
If you've read about brainstorming, you already know that the most popular tool for "unlocking team creativity" actually suppresses it. The same structural problem applies to groupthink interventions. The most common solution prescribed by management consultants is to appoint a devil's advocate, someone whose job is to argue the other side. It sounds logical. It's also largely ineffective.
Charlan Nemeth at UC Berkeley has spent decades studying dissent in groups, and her research draws a sharp distinction between authentic dissent and performed dissent. When a team member genuinely believes a different position and argues for it, the group generates more original ideas and examines both sides more thoroughly. Even when the dissenter is wrong, authentic disagreement stimulates divergent thinking across the entire group. But when someone is assigned to play devil's advocate, the cognitive benefits evaporate. The brain distinguishes between a person who believes what they're saying and a person performing a role. Performed dissent becomes theater that everyone sees through, and the group returns to its original consensus as soon as the performance ends.
Nemeth's finding has a neuroscience backbone. Remember Klucharev's prediction error signal? That signal fires in response to genuine social disagreement because the brain models it as real information about the environment. If someone truly believes you're wrong, your brain treats it as evidence worth processing. But if someone is playing a part, the brain discounts it the same way it discounts any signal it's learned to classify as noise. You cannot trick the conformity circuit with roleplay. (This is the same reason confirmation bias resists one-time interventions. The hardware doesn't update from a single exercise.)
So what does work? The research points to structural interventions that change the information environment before the conformity machinery has a chance to engage.
Gary Klein's pre-mortem technique is one of the most empirically supported. Instead of asking "what could go wrong?" the pre-mortem starts with a premise: the project has already failed. Team members independently write down reasons for the failure, then share them. Research on prospective hindsight found that imagining a future event as already having occurred increased the ability to identify reasons for the outcome by 30 percent. The pre-mortem works because it reframes dissent as imagination rather than opposition. You're not telling the group they're wrong. You're describing a future where things went wrong. One triggers the social pain circuit. The other engages the simulation network.
The Pre-Meeting Write-Down from "Your Team Is Lying to You" attacks the problem from a different angle. By having every participant write their position down independently before the meeting starts, you capture assessments before the anchoring effect of the first speaker can distort them. The minority position enters the room before anyone has to endure the neural cost of voicing it live.
And then there's the simplest structural change, one that almost nobody implements: capping decision groups at four or five people. Asch's conformity research showed that social pressure rises sharply as a unanimous majority grows to about three or four peers, then plateaus. Beyond that point, every person you add contributes almost no additional information, only another unanimous voice for a would-be dissenter to face. If your "strategic alignment meeting" has twelve people in it, you haven't built a decision-making body. You've built a conformity engine.
Try This: The Groupthink Audit
Most teams don't recognize groupthink until after a bad outcome. This protocol is designed to surface it before the decision ships.
After your next major team decision, but before implementation, run through these five questions independently. Every participant answers in writing. Answers are collected anonymously.
First: "If this decision fails, what is the most likely reason?" Not what could go wrong in the abstract, but which specific assumption, if wrong, would make this particular decision a disaster. If every answer converges on the same assumption, that assumption hasn't been stress-tested.
Second: "What information would change my mind about this decision?" If a participant can't identify any, they've stopped evaluating and started defending. That's the illusion of invulnerability wearing a rational disguise.
Third: "Did I hold back a concern during the discussion? If so, what was it?" This question only works with anonymity. It surfaces the self-censorship that Janis identified as one of the eight core symptoms. If even one person writes something substantive here, your meeting format has a structural gap.
Fourth: "Who in this room do I think disagreed but didn't say anything?" People are often better at detecting suppressed dissent in others than admitting it in themselves. If three people identify the same quiet person, that's a signal worth investigating.
Fifth: "Was this decision made, or did it happen?" Decisions that are actively made involve a clear moment where alternatives are weighed and one is chosen. Decisions that "happen" emerge from momentum, anchoring, and social proof. If no one can point to the moment the decision was actually made, groupthink was likely doing the deciding.
Collect the answers. Read them aloud without attribution. If the anonymous answers diverge sharply from the spoken consensus, you have your diagnosis. The gap between what people said in the room and what they wrote in private is the exact size of your groupthink problem.
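If you want the collection step to be genuinely anonymous rather than anonymous-in-theory, a few lines of code are enough, and the same collect-independently-then-share shape also works for the pre-mortem and the Pre-Meeting Write-Down. Below is a minimal sketch under stated assumptions: the data format, the run_audit function, and the red-flag thresholds are hypothetical illustrations, not part of the protocol itself.

```python
# Minimal sketch of the Groupthink Audit collection step. The data
# format, function name, and thresholds are illustrative assumptions.
import random
from collections import Counter

QUESTIONS = [
    "If this decision fails, what is the most likely reason?",
    "What information would change my mind about this decision?",
    "Did I hold back a concern during the discussion? If so, what was it?",
    "Who in this room do I think disagreed but didn't say anything?",
    "Was this decision made, or did it happen?",
]

def run_audit(responses: dict) -> None:
    """responses maps participant name -> list of five written answers.

    Names are dropped immediately; answers are shuffled per question so
    they can be read aloud without attribution, as the protocol requires.
    """
    for i, question in enumerate(QUESTIONS):
        answers = [r[i] for r in responses.values()]
        random.shuffle(answers)  # break any ordering that could leak identity
        print(f"\nQ{i + 1}: {question}")
        for a in answers:
            print(f"  - {a}")

    # Two red flags from the diagnostics above: a held-back concern (Q3),
    # or three or more people naming the same silent dissenter (Q4).
    held_back = [r[2] for r in responses.values()
                 if r[2].strip().lower() not in ("", "no")]
    if held_back:
        print(f"\nFLAG: {len(held_back)} participant(s) self-censored in the meeting.")
    named = Counter(r[3].strip().lower() for r in responses.values() if r[3].strip())
    for person, count in named.items():
        if count >= 3:
            print(f"FLAG: {count} people believe '{person}' disagreed silently.")

# Example: three participants, five answers each (order matches QUESTIONS).
run_audit({
    "ana":   ["the Q3 deadline is unrealistic", "a missed milestone", "no", "", "made"],
    "ben":   ["the Q3 deadline is unrealistic", "churn data", "yes -- the deadline", "carla", "it happened"],
    "carla": ["the Q3 deadline is unrealistic", "nothing", "no", "", "it happened"],
})
```

In practice you would collect answers through a form rather than a script someone can shoulder-surf, but the design point survives the implementation: attribution is stripped before anyone reads anything, so the written record can diverge from the spoken consensus without costing anyone the neural price of live dissent.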
Jerald Mason didn't think he was overriding the engineers. He thought he was making a management decision. Mulloy didn't think he was pressuring a contractor. He thought he was challenging weak data. That's the defining feature of the phenomenon: it's invisible from inside the group. The illusion of unanimity feels exactly like actual unanimity. The reward signal that fires when everyone nods feels exactly like the reward signal that fires when you've made the right call.
Roger Boisjoly spent the rest of his life speaking about what happened in that teleconference room. He was shunned by his colleagues at Morton Thiokol. Allan McDonald, who refused to sign the launch authorization, was demoted. The engineers who were right were punished. The managers who were wrong were protected by the group's consensus. Dissent is expensive, agreement is rewarded, and the system optimizes for the signal that feels good rather than the signal that's true.
Your team doesn't need to be deciding whether to launch a spacecraft for this to matter. Every strategy meeting where the most senior person speaks first. Every product review where concerns get waved away. Every hiring decision where everyone "just has a good feeling." The machinery is the same. The only variable is the stakes.
The next chapter of What Everyone Missed unpacks the other side of this equation: the teams that consistently avoid groupthink aren't the ones with the smartest people. They're the ones with the best-designed friction.
FAQ
What is groupthink and why is it dangerous? Groupthink is a mode of thinking that occurs when a cohesive group's desire for unanimity overrides realistic appraisal of alternatives. First described by Irving Janis in 1972, it includes eight symptoms: illusion of invulnerability, collective rationalization, belief in moral superiority, stereotyping of outsiders, self-censorship, illusion of unanimity, pressure on dissenters, and the emergence of mindguards. It's dangerous because it looks and feels identical to genuine team alignment from the inside, which means groups rarely detect it before a decision has been made and implemented.
What does neuroscience reveal about why people conform in groups? Vasily Klucharev's 2009 fMRI research showed that disagreeing with a group triggers two simultaneous neural responses: the rostral cingulate zone generates a prediction error signal (the same signal produced by factual mistakes), and the ventral striatum reduces activity (withdrawing the reward associated with social belonging). Gregory Berns's 2005 study found that conformity pressure actually alters perception itself, changing activity in visual cortex regions rather than just decision-making areas. The brain doesn't simply choose to go along with the group. It rewrites what you see to match what the group believes.
Why doesn't assigning a devil's advocate prevent groupthink? Research by Charlan Nemeth at UC Berkeley found that authentic dissent stimulates divergent thinking, broader information search, and higher-quality decisions, but assigned devil's advocacy does not produce the same cognitive benefits. The brain distinguishes between a person who genuinely believes a different position and a person performing a role. Performed dissent is processed as noise rather than meaningful social information, so it fails to trigger the prediction error recalculation that makes authentic disagreement cognitively valuable.
How can teams prevent groupthink without destroying team cohesion? The research points to structural interventions rather than cultural appeals. Effective approaches include Gary Klein's pre-mortem technique (imagining the project has already failed and independently generating reasons), the Pre-Meeting Write-Down (collecting independent written positions before discussion), and capping decision groups at four or five people. The goal is not to eliminate cohesion but to introduce enough structured friction that critical information surfaces before decisions become irreversible.
What is the relationship between groupthink and confirmation bias? Groupthink and confirmation bias are related but distinct. Confirmation bias operates at the individual level, causing a single brain to seek evidence supporting existing beliefs while filtering contradictory evidence. Groupthink amplifies this to the group level: once a consensus forms, the group collectively rationalizes supporting evidence and dismisses contradictory data. The conformity mechanisms documented by Klucharev and Berns mean that group agreement strengthens individual confirmation bias by triggering reward signals when beliefs align and error signals when they diverge.
Works Cited
- Janis, I. L. (1972). Victims of Groupthink: A Psychological Study of Foreign-Policy Decisions and Fiascoes. Houghton Mifflin.
- Klucharev, V., Hytönen, K., Rijpkema, M., Smidts, A., & Fernández, G. (2009). "Reinforcement Learning Signal Predicts Social Conformity." Neuron, 61(1), 140–151. https://doi.org/10.1016/j.neuron.2008.11.027
- Berns, G. S., Chappelow, J., Zink, C. F., Pagnoni, G., Martin-Skurski, M. E., & Richards, J. (2005). "Neurobiological Correlates of Social Conformity and Independence During Mental Rotation." Biological Psychiatry, 58(3), 245–253. https://doi.org/10.1016/j.biopsych.2005.04.012
- Asch, S. E. (1951). "Effects of Group Pressure Upon the Modification and Distortion of Judgments." In H. Guetzkow (Ed.), Groups, Leadership, and Men (pp. 177–190). Carnegie Press.
- Nemeth, C. J., Brown, K., & Rogers, J. (2001). "Devil's Advocate Versus Authentic Dissent: Stimulating Quantity and Quality." European Journal of Social Psychology, 31(6), 707–720. https://doi.org/10.1002/ejsp.58
- Klein, G. (2007). "Performing a Project Premortem." Harvard Business Review, 85(9), 18–19.
- Mitchell, D. J., Russo, J. E., & Pennington, N. (1989). "Back to the Future: Temporal Perspective in the Explanation of Events." Journal of Behavioral Decision Making, 2(1), 25–38.
- Hermann, A., & Rammal, H. G. (2010). "The Grounding of the 'Flying Bank.'" Management Decision, 48(7), 1048–1062. https://doi.org/10.1108/00251741011068761
- Presidential Commission on the Space Shuttle Challenger Accident (1986). Report of the Presidential Commission on the Space Shuttle Challenger Accident (Rogers Commission Report). Washington, D.C.