Ofcom investigates Elon Musk’s X over Grok AI sexual deepfakes

UK communications regulator Ofcom has opened a formal investigation into X, the platform owned by Elon Musk, after reports emerged that its AI assistant Grok was being used to produce and circulate sexualised images. The probe, launched in early January, focuses on alleged non consensual intimate images and sexualised depictions of children shared on the service. Ofcom says it will assess whether X removed illegal content promptly and whether the company employed effective age assurance to prevent access by minors. If the regulator finds breaches of UK law the firm could face a penalty of up to 10 percent of worldwide turnover or a fixed fine of up to £18 million, whichever is higher.

Key takeaways

  • Ofcom has begun an investigation into X over reports that Grok was used to generate sexualised images, including alleged child sexual imagery, and to share them on the platform.
  • The regulator will examine whether X removed illegal content quickly and whether it used highly effective age verification to shield children from pornographic material.
  • If found in breach Ofcom may fine X up to 10 percent of global revenue or £18 million, whichever is greater, and can seek court orders to block access in the UK.
  • BBC reporting includes cases where women were digitally undressed, with one person saying more than 100 sexualised images were made of her.
  • Senior politicians and victims called for rapid action, with Technology Secretary Liz Kendall urging Ofcom to complete the probe swiftly.
  • International reactions have already included temporary blocks of Grok in Malaysia and Indonesia while regulators review the tool.
  • X has pointed to a safety post saying users prompting Grok to create illegal content would face the same consequences as uploading illegal material.

Background

Generative AI image tools can synthesize realistic photographs from text prompts, a capability that has spread quickly across social media platforms. Grok, an AI assistant available on X, added image generation functions that critics say were insufficiently controlled when made public. Non consensual intimate images and sexualised depictions of minors are illegal in the UK and trigger statutory duties for platforms under media and communications law. Ofcom enforces those duties for services designated in the UK, giving it powers to investigate, fine and in extreme cases seek a business disruption order to stop access.

The debate has highlighted tensions between rapid feature rollout and content safety controls. Regulators, victims and some lawmakers say safeguards such as content moderation, prompt takedown processes and robust age assurance should be mandatory before release. Industry defenders argue that enforcement should be proportionate and that technical mitigation remains an evolving field. The current investigation follows public complaints, media reporting and examples seen by the BBC that prompted the regulator to act.

Main event

Ofcom announced the probe after receiving reports it described as deeply concerning about Grok being used to create and share undressed images of people and sexualised images of children. The regulator said it will investigate whether X failed to remove illegal content once it became aware of it, and whether X took appropriate steps to prevent users in the UK from seeing such material. Ofcom also said it would examine age assurance measures claimed to block children from accessing pornographic images.

X referred inquiries to a safety statement posted by its Safety account in early January which warned that users who prompt Grok to create illegal content would face the same consequences as if they uploaded illegal content themselves. Elon Musk responded on the platform by accusing the UK government of seeking an excuse for censorship, after a post questioned why other AI services were not under similar scrutiny.

The BBC has reviewed multiple altered images posted on X where women were digitally undressed and placed in sexualised scenes without their consent. One woman reported that more than 100 sexualised images had been generated depicting her. Another person described finding an AI generated bikini image of herself superimposed onto the Auschwitz site, a case raised by a former cabinet minister as evidence of the harms produced.

If X fails to comply with Ofcom requirements the regulator can apply to a UK court for orders, including a business disruption order. Such an order could require internet service providers to block access to the site in the UK, though regulators typically treat blocking as a measure of last resort. Ofcom said the investigation would be a matter of highest priority given the potential risk to children and other victims.

Analysis & implications

The investigation tests how regulators apply existing communications law to emergent AI capabilities. Ofcom’s remit covers content that is illegal in the UK, and its power to issue fines or seek blocking orders signals significant legal risk for platforms that roll out image generation without adequate safeguards. For X, the consequences could include financial penalties and reputational damage that may affect user trust and advertiser relationships.

Technically, stopping AI misuse requires a combination of prompt detection, human review capacity, robust content policy enforcement and effective user identity and age verification. Each approach has trade offs: stronger verification can protect children but raises privacy and onboarding concerns, while overbroad moderation risks suppressing lawful speech. For policymakers, the case amplifies calls to set clearer standards for AI safety and platform accountability.

Internationally, other regulators may take cues from Ofcom’s approach, especially where cross border harms are involved. Temporary blocks in Malaysia and Indonesia show that national responses can vary, producing fragmentation in how AI features are accessed globally. The prospect of fines tied to global revenue also increases the stakes for companies operating in multiple jurisdictions.

Comparison & data

Regulatory power Potential outcome
Fine Up to 10 percent of worldwide revenue or £18 million
Business disruption order Court order to require ISPs to block access in the UK
Safety checks Assessment of takedown speed and age assurance effectiveness

The table summarises the primary enforcement tools Ofcom cited when it announced the probe. The monetary cap of 10 percent of global turnover mirrors penalties used by other regulators for serious breaches, while the £18 million figure represents a statutory alternate fine amount. Ofcom will apply these measures according to its assessment of harm and the platform’s compliance actions.

Reactions & quotes

Government figures welcomed the investigation and urged a swift outcome, framing the matter as urgent for victims and the public. Technology Secretary Liz Kendall said she supported Ofcom completing the probe quickly so victims would not face delays in remedy. The following statement encapsulates the official tone of concern.

It is vital that Ofcom complete this investigation swiftly because the public and most importantly the victims will not accept any delay

Liz Kendall, Technology Secretary

Former minister Peter Kyle described his reaction to cases raised with him as appalling, citing a recent example involving a Holocaust related image that made him feel deeply disturbed. He urged that new AI tools be tested more thoroughly before features are rolled out to users.

The fact that I met a woman who found an image of herself generated outside Auschwitz made me feel sick to my stomach

Peter Kyle, former Technology Secretary

Victims and campaigners stressed that labelling the investigation as censorship risks deflecting from concrete harms. Dr Daisy Dixon, who reported repeated instances of being digitally undressed by Grok outputs, welcomed the probe and urged X to act rather than dispute the complaint.

For Musk and others to call this an excuse for censorship just deflects from systematic violence against women and girls

Dr Daisy Dixon, victim and campaigner

Unconfirmed

  • The full scale of child sexual imagery generated by Grok across X has not been publicly quantified and remains under investigation.
  • It is unconfirmed whether other AI image generators on rival platforms produced equivalent volumes of illegal images in the same period.
  • The timeline for Ofcoms final decision and any subsequent court proceedings has not been set and may vary depending on the evidence gathered.

Bottom line

This investigation places Ofcom at the center of how Britain will regulate AI driven image creation within communications law. For platforms, the case underlines that rapid feature deployment without robust safeguards can expose companies to financial penalties, legal orders and swift regulatory scrutiny. For victims, the inquiry seeks to address harms that can be persistent and wide reaching, including repeated circulation of non consensual images.

Observers should watch three things going forward: how quickly Ofcom completes evidence gathering, the technical measures X implements to prevent recurrence, and whether other regulators take parallel action. The outcome could shape expectations for platform responsibility and inform future rules governing AI content generation.

Sources

  • BBC News (UK public broadcaster, news report)

Leave a Comment