AI bias is usually talked about in the context of algorithms: distorted data sets, erroneous results, and stereotypes built into models. But new research suggests there’s another, more subtle problem: who gets to use AI in the first place. According to a recent report from Lean In, women are less likely than men to use AI tools at work, and even when they do, they are less likely to receive recognition or support for it.
The numbers paint a clear picture. Men are more likely to use AI regularly (33% vs. 27%), more likely to have used it at work, and significantly more likely to be encouraged by managers to adopt it. It’s not just about access, but also about perception. Women are more likely to worry about the risks of AI, to question its accuracy, and even to fear being judged for using it, including worrying that it could be viewed as a kind of “fraud.”
Why this is more important than it seems
This gap could widen quickly. AI is fast becoming a core workplace competency, and early adoption often leads to better opportunities. If one group consistently uses it less, or receives less recognition for doing so, that difference can compound into a real career disadvantage. And it doesn’t happen in isolation. Extensive research already shows that women are underrepresented in technology and AI roles, meaning they not only use these tools less but are also less involved in building them.
What’s striking is how familiar it all feels. This is not a new type of bias; it’s an old one appearing in a new place. The same patterns seen in the workplace for decades, including less recognition, less encouragement, and closer scrutiny, are now evident in how AI is adopted and used.
Same bias, new technology?
As AI becomes a core workplace competency, even small gaps like these can translate into missed opportunities, slower career growth, and fewer chances to shape the technology itself. Because if the people using AI aren’t equally represented, the future it builds won’t be either.