Britain’s media regulator Ofcom has made “urgent contact” with xAI, the artificial intelligence company owned by Elon Musk, after reports that its Grok chatbot can be used to generate sexualized images of children and non-consensual explicit images of women.
The intervention follows widespread concern about Grok’s image generation capabilities.
Ofcom confirmed it is investigating whether Grok’s use violates the UK’s Online Safety Act, which makes it illegal to create or share intimate or sexually explicit images, including AI-generated “deepfakes”, without a person’s consent.
An Ofcom spokesman said the regulator was also investigating allegations that Grok had produced “undressed images” of individuals, adding that tech companies had a legal obligation to take appropriate measures to prevent UK users from encountering illegal content and to quickly remove that material once flagged.
X has not publicly responded to Ofcom’s request for clarification. Over the weekend, however, the platform warned users against using Grok to generate illegal material, including child sexual abuse images.
Despite this, Grok’s own acceptable use policy, which expressly prohibits depictions of real people in pornographic ways, appears to have been routinely circumvented. High-profile figures, including Catherine, Princess of Wales, were reportedly among those whose images were manipulated using the AI tool.
The Internet Watch Foundation confirmed that it had received reports from members of the public about images created by Grok. However, it said it had not yet identified any content that exceeded the legal threshold for classification as child sexual abuse material under UK law.
The issue has also attracted attention outside the UK. The European Commission said it was “seriously examining” the matter, while regulators in France, Malaysia and India are reportedly examining whether Grok is violating local laws.
Thomas Regnier, a spokesman for the European Commission, called the content “appalling” and “disgusting” and said there was “no place” for such material in Europe. X was fined €120m (£104m) by EU regulators in December for breaching its obligations under the Digital Services Act.
Criticism from British politicians has intensified. Dame Chi Onwurah, chair of the Science, Innovation and Technology Committee, said the allegations were “deeply disturbing” and argued that existing safeguards were failing to protect the public. She described the online safety law as “woefully inadequate” and called for stronger enforcement powers over social media platforms.
The controversy has also highlighted the human impact of AI misuse. Journalist Samantha Smith told the BBC that seeing AI-generated images of herself in a bikini was “just as hurtful as if someone had posted a really explicit image.”
“It looked like me. It felt like me. And it was dehumanizing,” she said.
The Home Office confirmed it is pushing ahead with legislation to ban “nudification” tools outright, proposing a new criminal offense that would see providers of such technology face prison sentences and significant fines.
As regulators tighten scrutiny, the Grok episode has become a flashpoint in the broader debate about AI accountability, platform responsibility and the limits of free expression in the age of generative technology.