The AI gender gap isn’t about capability. It’s about who gets to trust, and who gets trusted.
The trust gap in AI between men and women isn’t closing. It’s holding steady, and in some markets, widening. According to the Edelman Trust Barometer, women globally are five percentage points less likely than men to trust companies working in the AI sector. In the UK, the gap is even starker, at eight percentage points.
At a recent roundtable on the gender AI gap hosted by Media Trust and IMD, one theme surfaced that captured many of the challenges: women who resist using AI because they fear it’s cheating. Not wrong. Not boring. Cheating.
It reflects an internalised moral binary: creativity as authenticity, effort as purity. AI, with all its promise, feels like a shortcut that violates both.
Of course, this tension isn’t new. Gender bias has shaped, and been shaped by, every technological wave before this one, from the way tools are designed to who’s trusted to use them and how.
What struck me most wasn’t just the resistance, but the way it pointed to a deeper disconnect between how tech innovation is unfolding and how trust is earned. And maybe, how uncomfortable it remains to sit with questions that don’t yet have clear answers.
The real gap isn’t capability. It’s permission.
What was clear from the conversation, which included educators, equalities charities and tech platforms, was that women are not behind in ability but ahead in discernment.
They see the complexity and sense the contradictions. But too often, they’re receiving the wrong signals from the people and institutions meant to guide them: don’t cut corners, don’t rely on tools, don’t get it wrong.
Men, by contrast, are often encouraged to experiment. To break things and push boundaries. AI is one of those new boundaries, and it rewards those willing to play this way.
The result? Women withdraw. Not out of disinterest, but out of a perceived misalignment with the story they’ve been told about what “real work” looks like.
That’s not a skill gap. It’s a trust signal we should pay attention to.
This isn’t resistance. It’s calibration.
What I’ve come to realise, especially in the wake of recent research and conversations with Gen Z, is that this generation doesn’t default to trust. It verifies, constantly. Not just facts but tone, intent, alignment.
They don’t want to be told what to think about AI. They want to see how it behaves and judge for themselves.
So when a teenage girl says “AI feels like cheating,” she’s not rejecting the tech. She’s interpreting the cultural framing she’s been given. And she’s noticing where it doesn’t add up.
That’s not resistance. It’s calibration.
What Gen Z is showing us is that trust isn’t a linear variable. It’s lateral, contextual, real-time. And it moves through proximity (amongst peer groups), not proclamation (from authority).
If AI adoption feels emotionally out of reach for girls, the answer isn’t just training. It’s recoding the story.
Not AI as shortcut. But AI as co-author.
This isn’t a youth campaign. It’s a gatekeeper recalibration.
To be clear, this doesn’t just get fixed by marketing to girls. Or by top-down ethics modules. It gets fixed when the people who still hold cultural authority (educators, policymakers, designers, media leads, tech executives) are willing to step into a new kind of exchange.
In the roundtable, these ‘gatekeepers’ emerged as figures still shaping the norms around what counts as real work, credible creativity, and acceptable use of technology. These are the adults who, often unconsciously, reinforce the idea that using AI undermines authenticity. Their influence isn’t inherently negative, but it is structural.
And if permission is going to be reshaped, these are the people who have to be brought into the loop, not just as enablers, but as co-learners. Not as sole holders of legitimacy, but as participants in a loop they don’t fully control.
Reverse mentoring came up again and again as something that, when done well, helps shift permission back into circulation. But it only works if it’s mutual.
Not youth performing wisdom, nor adults performing openness, but a real exchange of perspective that allows both sides to recalibrate what counts, what’s credible and what’s next.
That’s what trust looks like when it’s in motion. Not downloaded but shared.
This isn’t about speed. It’s about synchrony.
This whole space, from the gender AI gap to the future of work to rising dislocation with innovation, won’t be solved by speed or scale.
I recently wrote about narrative fluency: the ability to sense what people are ready to hear, to speak with emotional credibility, and to communicate across fractures without hardening them. It might be one of the mechanisms we’re overlooking.
It starts with telling stories that match how people actually feel, and with listening to the friction before trying to remove it. It means treating permission not just as something granted, but as something exchanged.
That said, it’s not a silver bullet. We should be cautious not to project fluency where there’s still real ambiguity, and acknowledge that people’s relationship with technology, especially something as fast-moving as AI, is shaped as much by uncertainty as by clarity.
We don’t just need to communicate that AI is safe or useful; we need to make it make sense in the lives and language of those expected to use it. And that means holding space for discomfort, contradiction, and re-evaluation.
Because if women and girls believe that using AI makes their work less real, less credible and less theirs, we’ve failed to properly listen to the story they’re already telling themselves.
And if we want to change that, we must rewrite or recode the rules by co-authoring the next chapter with them.
Sat Dayal is Managing Director of Technology at Edelman UK.