Facilitated by: Carlijn Kriek & Rosanne van Duyvenvoorde
AI adoption is often framed as a technical challenge. This table discussion flipped the lens: what if the real barrier isn't tools, but trust? We explored how organizations can build the psychological safety that makes it possible for people to try, fail, and learn with AI.
Several participants noted the gap between “nice colleagues” and true candour. Engagement surveys often show high scores for atmosphere, but that doesn’t mean people feel safe to speak up. “It’s easy to be friendly on the surface,” one attendee said, “but much harder to tell someone what you didn’t like about their work.”
Without this foundation, introducing AI can heighten anxiety rather than spark curiosity.
The group discussed the paradox of leaders expecting openness without showing it themselves. "Nobody comes to me with their struggles," one leader admitted, before realizing: "That's because I've never been vulnerable first."
But what does “being vulnerable” look like in practice? It doesn’t have to mean oversharing. It means modeling doubt, admitting mistakes, and creating space for others to experiment without fear of backlash.
Fear was a recurring theme. Some avoid AI out of concern for their job security; others feel too close to retirement to bother. As one participant put it: "Change is constant, but every change triggers fight-or-flight in people. AI is just the latest wave."
Psychological safety is what allows people to process that fear together instead of in silence.
Another thread was the stigma of saying out loud: “AI helped me with this.” While many already use tools behind the scenes, few feel safe enough to disclose it. Making transparency normal — whether in a strategy draft or a team email — helps reduce shame and builds collective learning.
“If leaders want adoption, they need to make it safe to try — not just demand results.”
The group agreed: when teams feel safe, experimentation follows naturally; when they don't, adoption stalls. Psychological safety is not just a "nice-to-have" — it's the soil in which innovation takes root.
Psychological safety for AI adoption isn’t about protecting people from mistakes. It’s about encouraging them to make the small ones that lead to big breakthroughs. Or as one participant summed it up: “Safety isn’t the end goal — it’s the condition that lets curiosity flourish.”