
Psychological Safety: Why Your Team Won't Tell You the Truth

In 2012, Google decided to answer a question that had consumed organizational psychologists for decades: what makes a team effective? They called it Project Aristotle, a nod to the philosopher's observation that the whole is greater than the sum of its parts. Julia Rozovsky, a researcher in Google's People Analytics division, led the effort. Her team studied 180 groups across the company (115 engineering teams and 65 sales teams), examining over 250 different attributes. They looked at everything: educational backgrounds, personality types, seniority levels, whether team members socialized outside of work, how often they ate lunch together. They cross-referenced performance data, manager evaluations, and revenue metrics. They expected to find that the best teams were composed of the smartest individuals, or the most experienced, or the most diverse.

They found none of that.

The composition of the team (who was on it) explained almost nothing about performance. What explained performance was how the team worked together. And one factor towered over every other: psychological safety. It was, by a significant margin, the strongest predictor of team performance, more consequential than dependability, structure and clarity, meaning, or impact. The teams where members felt safe to take risks, to admit mistakes, to ask questions that might sound uninformed, to disagree with the most senior person in the room — those were the teams that shipped better products, hit their targets, and retained their people. The teams with brilliant individuals who operated in an atmosphere of judgment and self-protection consistently underperformed teams of less credentialed people who trusted each other.

The term itself came from a Harvard professor named Amy Edmondson, who had defined it thirteen years before Google's study. Her definition: a shared belief that the team is safe for interpersonal risk-taking. Not safe from hard work. Not safe from accountability. Safe from the social consequences of being wrong, being confused, or being the person who raises the uncomfortable question.

Google spent millions of dollars and two years of research to arrive at a conclusion that sounds, in retrospect, almost too simple. The best teams aren't built from the best people. They're built from people who aren't afraid to talk to each other. The question is why that's so rare — and the answer is in the brain.

Your Brain Thinks the Meeting Room Is a Savanna

In 2003, Naomi Eisenberger at UCLA designed an experiment that would rewrite the neuroscience of social behavior. She put thirteen participants into an fMRI scanner and had them play a simple virtual ball-tossing game called Cyberball. The participants believed they were playing with two other people online. For the first few minutes, the game proceeded normally — three people passing a ball back and forth on screen. Then, without warning, the other two players stopped throwing the ball to the participant. Simple exclusion. A ball game with strangers.

The fMRI data showed something the researchers expected but had never been able to demonstrate so cleanly: social exclusion activated the dorsal anterior cingulate cortex and the anterior insula, the same neural regions that process physical pain. Not metaphorical pain. The same circuits that fire when you touch a hot stove or break a bone. The more intensely these regions activated, the more distress participants reported. The brain was using its pain hardware to process the experience of being left out of a computer game with people the participants had never met and would never see again.

Eisenberger's explanation was evolutionary, and it reframes everything about how teams communicate. For roughly 200,000 years of human history, exclusion from the group was a death sentence. There was no surviving alone on the savanna. The brain didn't develop a separate system for tracking social threats because social threats were survival threats. Getting rejected by your tribe and getting attacked by a predator required the same urgency of response, so the brain wired them to the same alarm.

That alarm hasn't been updated. It doesn't know the difference between "the tribe is casting me out" and "I'm about to say something in this product meeting that contradicts what the VP just said." Both register as threat. Both activate the anterior cingulate. Both flood the system with the neurochemistry of danger.

And the damage compounds. When the brain detects a social threat, the amygdala triggers a cascade that suppresses the prefrontal cortex, the region responsible for complex reasoning, creative problem-solving, and strategic thinking. Cortisol floods the system. Blood flow redirects away from the brain's executive centers and toward the muscles, as if you're about to run from something. The organizational anthropologist Judith Glaser described it as a seesaw: cortisol and oxytocin (the hormone associated with trust and connection) operate in opposition. When cortisol is high, oxytocin is suppressed. When the brain perceives threat, the very neurochemistry of trust switches off.

This means that the moment someone in your meeting feels unsafe, the moment they anticipate judgment, dismissal, or embarrassment, two things happen simultaneously. First, their brain suppresses the impulse to speak. Second, even if they do speak, their prefrontal cortex is operating at reduced capacity. The insight they're sitting on, the one your team actually needs, arrives degraded or doesn't arrive at all. You haven't just lost their willingness to contribute. You've lost their cognitive capacity to contribute well.

David Rock's SCARF model, published in 2008, maps five domains of social threat that trigger this response: Status, Certainty, Autonomy, Relatedness, and Fairness. A single meeting can threaten all five. The junior engineer who gets interrupted by the CTO just experienced a status threat. The designer who doesn't know whether her work will be praised or torn apart is experiencing a certainty threat. The product manager whose recommendation was overruled without discussion experienced an autonomy threat. Every one of these registers in the anterior cingulate the same way Eisenberger's Cyberball exclusion did. Every one pushes the brain further into self-protection mode, further from the candor the team needs.

The Hospital Paradox: When More Errors Mean Safer Care

In the mid-1990s, Amy Edmondson was a doctoral candidate at Harvard Business School studying the relationship between teamwork and errors in hospitals. Her hypothesis was clean and intuitive: better teams should make fewer mistakes. She distributed surveys measuring teamwork quality across nursing units at two hospitals, and then tracked medication error rates over six months. The response rate was 55 percent. The data came in. And it was backwards.

The units with the strongest teamwork reported the highest error rates. Not slightly higher. Noticeably higher. The correlation ran in the opposite direction of her prediction. Better teams appeared to be making more mistakes.

Edmondson later described this as a moment of genuine panic. Her dissertation was built on a hypothesis the data had just inverted. She could have buried the finding, reframed the question, or explained it away. Instead, she sat with the discomfort and asked a different question: what if better teams don't make more errors? What if they report more errors?

She went back to the units and started interviewing. On the high-performing units, nurses described a culture where mistakes were discussed openly. Reporting an error was treated as information, not confession. The head nurses responded to reported errors by asking what had happened and how the system could be adjusted to prevent it. On the low-performing units, nurses used different language entirely. "You get put on trial." "Mistakes are held against you." The errors were still happening (medication errors in hospitals are common and well-documented), but the information about them was being suppressed. Nurses on psychologically unsafe units were doing exactly what Eisenberger's brain data would predict: protecting themselves from social threat by staying silent.

To validate this interpretation, Edmondson hired a researcher named Andy Molinsky to observe the unit cultures without knowing any of the error data. Molinsky ranked each unit's openness based solely on what he saw: how people talked to each other, whether questions were welcomed, whether junior staff spoke up to senior clinicians. His rankings were nearly perfectly correlated with the detected error rates. The units where people felt safe to speak were the units where errors were caught, discussed, and corrected before they reached patients. The units where people stayed silent were the units where errors compounded.

This is the finding that should keep every founder awake at night. The psychologically unsafe teams didn't just communicate less. They produced worse outcomes, outcomes that were invisible because the information about them was being suppressed. The errors didn't disappear because no one reported them. They metastasized. The units with lower reported error rates weren't safer. They were more dangerous. They just didn't know it because fear had shut off the warning system.

Edmondson published her landmark paper in 1999 in Administrative Science Quarterly, and the concept of team psychological safety entered the organizational science literature. It would take another thirteen years before Google's Project Aristotle confirmed the same pattern at massive scale: the thing that separates high-performing teams from mediocre ones isn't talent, process, or strategy. It's whether people feel safe enough to say what they actually think.

The Three Silent Killers of Team Candor

The neuroscience and the organizational data converge on three specific mechanisms that destroy psychological safety. They don't require a toxic culture. They don't require a screaming executive. They operate in perfectly well-intentioned teams, in companies led by thoughtful people, every single day.

The First-Speaker Effect

In 2018, Stephan Hartmann and Soroush Rafiee Rad published a computational model of group deliberation that demonstrated something disquieting: the person who speaks first in a discussion has a disproportionate influence on the group's final conclusion, even when every participant is rational, truth-seeking, and free from social hierarchy. The first voice installs a reference point. Every subsequent speaker must expend cognitive effort to deviate from it. Conformity researchers have documented this since Solomon Asch's line-matching experiments in the 1950s, but Hartmann and Rafiee Rad showed that the anchoring effect of the first speaker exceeded the influence of seniority, opinion popularity, and group composition. The format itself, sequential discussion, is biased toward the first voice.
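The mechanism is easy to see in a toy model. The sketch below is not Hartmann and Rafiee Rad's formal model; it's a minimal simulation, assuming each speaker voices a 50/50 blend of their own private estimate and the average of everything said so far. Even with identical, rational agents and no hierarchy, the printed weights fall steeply with speaking position.

```python
import statistics

def deliberate(estimates, conformity=0.5):
    """Sequential discussion: each speaker states a blend of their own
    private estimate and the average of what has already been said.
    The group's conclusion is the mean of all stated opinions."""
    stated = []
    for estimate in estimates:
        if not stated:
            stated.append(estimate)           # first speaker has no anchor
        else:
            anchor = statistics.mean(stated)  # what the room has heard so far
            stated.append(conformity * anchor + (1 - conformity) * estimate)
    return statistics.mean(stated)

def influence_by_position(n_speakers=8, conformity=0.5):
    """How far the group's conclusion moves per unit shift in each
    speaker's private view. The model is linear, so perturbing one
    position at a time recovers each position's exact weight."""
    weights = []
    for i in range(n_speakers):
        estimates = [0.0] * n_speakers
        estimates[i] = 1.0                    # shift only speaker i's view
        weights.append(deliberate(estimates, conformity))
    return weights

if __name__ == "__main__":
    for position, weight in enumerate(influence_by_position(), start=1):
        print(f"speaker {position}: weight {weight:.3f}")
```

Under these assumptions, with eight speakers the first position carries roughly 40 percent of the final conclusion's weight, more than six times the last position's share. That asymmetry is exactly what a reversed speaking order is designed to exploit.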

When the first speaker is the most senior person in the room, the effect compounds. Jeff Bezos understood this well enough to engineer around it. At Amazon, meetings run in reverse order of seniority. The most junior person speaks first. Bezos speaks last. His reasoning, in his own words: "If I speak first, even very strong-willed, highly intelligent, high-judgment participants in that meeting will wonder, 'Well, if Jeff thinks that, maybe I'm not right.'" A founder who built a trillion-dollar company, acknowledging that his own voice corrupts the information system he depends on.

The Punishment Prediction

The brain doesn't need to be punished for speaking up. It only needs to predict that speaking up will be punished. This is the mechanism Edmondson uncovered in her hospital research and that Vuori and Huy documented at Nokia, where middle managers filtered bad news upward not because they had been explicitly told to, but because the culture had cached a prediction about what happened when you didn't.

Culture, in this sense, is not the values on the wall. It's the prediction every person in the organization runs about what happens if they're honest. That prediction updates every time someone watches a colleague get interrupted, dismissed, passed over, or subtly punished for raising an inconvenient truth. The updates don't require dramatic incidents. A founder who checks her phone while someone is presenting a concern has just updated the prediction model for everyone in the room. A leader who responds to a mistake by asking "who did this?" instead of "what happened?" has just taught the team that errors are attached to individuals, not systems. The brain files these micro-moments with the same precision it files any environmental pattern, and the pattern becomes the culture.

The Expertise Trap

Senior leaders and domain experts face a specific version of this problem that is almost invisible to them. When a founder or executive has deep expertise in an area, they tend to respond to questions and concerns with answers rather than curiosity. The response is automatic: if you know the answer, the brain's efficiency circuits push you to provide it. But every time an expert answers instead of asking, they communicate something to the team: the value here is in the answer, not in the question. Over time, this trains the team to bring conclusions rather than concerns, solutions rather than observations, and certainty rather than confusion. The very things a psychologically safe culture needs (half-formed ideas, early warnings, uncomfortable hunches) get filtered out because the team has learned that the currency in the room is expertise, not inquiry.

This is particularly dangerous for founders, who are often the domain experts in their organization. The thing that makes them good at building the product makes them bad at hearing the signal that the product is going wrong.

Try This: The Safety Audit and Reset Protocol

Psychological safety isn't built through slogans, offsites, or a single conversation where the founder says "I want to hear your honest feedback." It's built through repeated behavioral signals that accumulate over time. The brain updates its prediction model based on what it observes, not what it's told. Here's a protocol grounded in the research.

Step 1: Reverse the speaking order. Start every decision-relevant meeting with the most junior person in the room. Not as a performance, but as a structural commitment. The first voice anchors the discussion, and research shows this anchoring effect exceeds the influence of seniority. If the CEO speaks first, every voice that follows is distorted by that anchor. Bezos designed this into Amazon's culture. The research supports it. Do it consistently, not occasionally.

Step 2: Separate assessment from discussion. Before any meeting where a decision needs to be made, send the core question to all participants at least twenty-four hours in advance. Every person writes their position independently, one paragraph, their honest assessment, before entering the room. Collect the written positions before anyone speaks. Read the dissenting views aloud first. This protocol works because writing in isolation lets the brain evaluate without the anterior cingulate's threat circuits firing. The social cost of dissent is removed at the moment the assessment is formed.

Step 3: Respond to bad news with curiosity, not solutions. When someone surfaces a problem, your first response must be a question, not an answer. "What are you seeing?" "What do you think is driving that?" "What would you do if this were entirely your call?" Every time you respond to a concern with a solution, you teach the team that the value is in answers. Every time you respond with genuine inquiry, you teach them that the value is in the signal. This is the hardest step for founders because it requires overriding the expertise trap, the brain's automatic push to demonstrate competence by solving the problem.

Step 4: Normalize error reporting by going first. Edmondson's research is unambiguous: psychological safety flows from the top. The single most powerful behavior a leader can demonstrate is publicly owning a mistake. Not in a polished, retrospective way. In real time. "I was wrong about the timeline I set last sprint." "I made this call and the data shows it was the wrong one." "I missed this signal and I want to understand why." Matt Sakaguchi, a manager in Google's Project Aristotle study, modeled this when he shared his stage 4 cancer diagnosis with his struggling team at an offsite. Not because cancer is a mistake, but because vulnerability from the leader resets the team's prediction model about what is safe to say. You don't need to share medical diagnoses. You need to share errors, openly, before anyone else does.

Step 5: Measure it. Edmondson developed a seven-item survey that directly measures psychological safety. The most diagnostic question: "If you make a mistake on this team, is it held against you?" Administer it anonymously. Track it over time. If the score doesn't move, the behaviors aren't landing, regardless of your intent. The brain responds to observable patterns, not stated intentions. The survey tells you what pattern the team is actually observing.
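For teams that want to automate Step 5, here is a minimal scoring sketch. The item keys and wordings below are paraphrased for illustration, not Edmondson's exact text; the structural points are real: the instrument has seven items, several are negatively worded, and those must be reverse-scored before averaging.

```python
import statistics

# Edmondson-style seven-item survey on a 1-7 agreement scale.
# Item keys and wordings are paraphrased for illustration; items
# flagged reverse=True are negatively worded ("mistakes are held
# against you") and must be flipped before averaging.
ITEMS = [
    ("mistakes_held_against_you", True),
    ("can_raise_problems_and_tough_issues", False),
    ("people_rejected_for_being_different", True),
    ("safe_to_take_risks", False),
    ("hard_to_ask_for_help", True),
    ("no_one_deliberately_undermines", False),
    ("skills_valued_and_used", False),
]

def safety_score(response):
    """Score one anonymous response {item_key: 1..7}. Reversed items
    are flipped (8 - value) so that 7 always means high safety."""
    values = [(8 - response[key]) if reverse else response[key]
              for key, reverse in ITEMS]
    return statistics.mean(values)

def survey_snapshot(responses):
    """Aggregate one survey round. Track the mean over time, but watch
    the spread too: safety is often high for senior people and low
    for everyone else, and the average hides that."""
    scores = [safety_score(r) for r in responses]
    return {
        "n": len(scores),
        "mean": round(statistics.mean(scores), 2),
        "stdev": round(statistics.stdev(scores), 2) if len(scores) > 1 else 0.0,
    }
```

Run it quarterly. The trend line, not any single reading, tells you whether the behaviors in Steps 1 through 4 are landing.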


Google spent two years and millions of dollars studying 180 teams to discover that the most important thing about a team isn't who's on it. It's whether people feel safe enough to be honest. Amy Edmondson found that the hospital units that reported the most mistakes were the ones catching and correcting them before they reached patients, because information about errors only flows when the cost of honesty is lower than the cost of silence. Naomi Eisenberger showed why: the brain processes social threat using the same neural circuits it uses for physical pain. Speaking up in a room where you might be wrong doesn't feel like a career risk. It feels, to the anterior cingulate cortex, like touching a hot stove.

Your team has information you're not hearing. Not because they're incompetent. Not because they don't care. Because the brain that processes "I should say something" and the brain that processes "the last person who said something like this was interrupted and never brought it up again" are running a cost-benefit analysis on hardware that evolved to keep humans inside the group at any cost.

The design of your meetings, your responses, and your own willingness to go first with honesty is either lowering that cost or raising it. There is no neutral. Every interaction either adds to the team's prediction that candor is safe, or confirms the prediction that silence is safer. And unlike a culture statement on a wall, that prediction is updated in real time, by the anterior cingulate, in every meeting you run.

If your team has gone quiet, the problem isn't engagement. It's neuroscience. The fear of failure that keeps individuals from taking risks is the same circuitry that keeps teams from speaking honestly. And the fix isn't a speech about openness. It's a structural redesign of how information moves through your organization, starting with who speaks first, how mistakes are received, and whether the most powerful person in the room has the discipline to talk last.

Chapter 12 of What Everyone Missed goes deeper into the organizational neuroscience of team communication, including why the teams that argue the most outperform the teams that agree the fastest, and what the research says about building cultures where dissent is structurally protected rather than personally heroic.


FAQ

What is psychological safety in a team? Psychological safety is a shared belief among team members that the group is safe for interpersonal risk-taking. Coined by Harvard professor Amy Edmondson in 1999, it means people can speak up with questions, concerns, mistakes, and half-formed ideas without fear of punishment, humiliation, or diminished status. It is not about being comfortable or avoiding accountability. It is about whether the social cost of honesty is low enough that critical information actually reaches the people who need it. Google's Project Aristotle found that psychological safety was the single strongest predictor of team performance, outweighing all other factors across 180 teams.

Why does speaking up in meetings feel so threatening? Neuroscientist Naomi Eisenberger's 2003 fMRI research demonstrated that social exclusion activates the dorsal anterior cingulate cortex and anterior insula, the same brain regions that process physical pain. When someone anticipates being judged, dismissed, or embarrassed for speaking up, the brain's threat detection system triggers a cascade: cortisol floods the system, the amygdala activates, and the prefrontal cortex, responsible for complex reasoning and strategic thinking, is suppressed. The brain evolved to treat social threat as survival-relevant because for most of human history, exclusion from the group meant death. That wiring hasn't been updated for the conference room.

What did Google's Project Aristotle discover about high-performing teams? Starting in 2012, Google studied 180 teams over two years, examining over 250 different attributes. Led by Julia Rozovsky, the research found that team composition (who was on the team) mattered far less than how the team worked together. Psychological safety was the number one factor distinguishing high-performing teams. Teams with high psychological safety consistently outperformed on productivity, innovation, and retention metrics. The four other key dynamics were dependability, structure and clarity, meaning, and impact, but all of them depended on psychological safety as the foundation.

How did Amy Edmondson discover the link between psychological safety and performance? While researching hospital teams as a doctoral student at Harvard, Edmondson expected that the best teams would report the fewest medication errors. She found the opposite: the highest-performing units reported more errors. Further investigation revealed that these teams weren't making more mistakes; they had created an environment where errors could be openly discussed and corrected before reaching patients. Units without psychological safety suppressed error reporting. Nurses described being "put on trial" for admitting mistakes. An independent observer who ranked unit cultures by openness, without seeing any error data, produced rankings nearly perfectly correlated with reported error rates.

What are specific behaviors that build psychological safety? Research points to five structural behaviors rather than general encouragement. First, reverse the speaking order so the most junior person speaks first, eliminating the first-speaker anchoring effect. Second, have team members write independent assessments before meetings to separate evaluation from social pressure. Third, respond to problems with curiosity rather than solutions, asking questions before offering answers. Fourth, publicly own your own mistakes in real time, not retrospectively, to reset the team's prediction about what is safe to say. Fifth, measure psychological safety using Edmondson's validated survey instrument and track changes over time, because the brain responds to observed patterns rather than stated intentions.

How does psychological safety relate to groupthink and teams withholding information? Psychological safety is the mechanism that determines whether a team falls into groupthink or maintains honest communication. When safety is low, the brain's threat circuits make conformity the cheaper option: agreeing costs nothing, while dissenting activates pain circuits. This creates the conditions for groupthink, where teams converge on comfortable consensus rather than accurate assessment. It also explains why teams withhold information: the prediction that honesty will be punished causes filtering, softening, and omission of critical data, the same dynamic that destroyed Nokia's ability to respond to the iPhone despite having the technical information internally.

Can psychological safety coexist with high performance standards? Yes, and Edmondson's research shows they must. Psychological safety is not about lowering the bar. It is about making it safe to be candid about where the team stands relative to the bar. Her hospital data demonstrated this: the units with the highest psychological safety reported more errors and caught more of them before they reached patients. They held a higher standard because the information required to maintain it could flow freely. Edmondson calls the combination of high safety and high standards a "learning zone," the only configuration where teams consistently improve. High standards without safety produce anxiety. Safety without standards produces comfort. Neither produces excellence.

Works Cited

  • Edmondson, A. C. (1999). "Psychological Safety and Learning Behavior in Work Teams." Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999

  • Edmondson, A. C. (2018). The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Wiley.

  • Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). "Does Rejection Hurt? An fMRI Study of Social Exclusion." Science, 302(5643), 290–292. https://doi.org/10.1126/science.1089134

  • Rozovsky, J. (2015). "The Five Keys to a Successful Google Team." re:Work with Google. https://rework.withgoogle.com/blog/five-keys-to-a-successful-google-team/

  • Rock, D. (2008). "SCARF: A Brain-Based Model for Collaborating With and Influencing Others." NeuroLeadership Journal, 1, 44–52.

  • Hartmann, S., & Rafiee Rad, S. (2018). "Voting, Deliberation and Truth." Synthese, 195(3), 1273–1293. https://doi.org/10.1007/s11229-016-1268-9

  • Glaser, J. E. (2014). Conversational Intelligence: How Great Leaders Build Trust and Get Extraordinary Results. Routledge.

  • Vuori, T. O., & Huy, Q. N. (2016). "Distributed Attention and Shared Emotions in the Innovation Process: How Nokia Lost the Smartphone Battle." Administrative Science Quarterly, 61(1), 9–51. https://doi.org/10.1177/0001839215606951


Reading won't build your business.

The strategies in this post work — but only if you use them. Inside The Launch Pad, you get the frameworks, the feedback, and the accountability to actually execute.
