Spain Asks Prosecutors to Probe X, Meta and TikTok in AI‑Generated Abuse Case

Lead

On Feb. 17, 2026, Spain’s government said it will ask public prosecutors to open investigations into X, Meta and TikTok over the spread of AI‑generated child sexual abuse material on their platforms. Prime Minister Pedro Sánchez framed the move as a legal step to protect children and to hold technology companies accountable for content that appears and spreads on their services. The announcement follows a series of enforcement actions across Europe, including a 120 million euro fine against X under the European Union’s Digital Services Act and French police searches of X’s local offices. Madrid’s request intensifies a widening trans‑Atlantic dispute over how to balance platform regulation, corporate responsibility and free‑speech protections.

Key Takeaways

  • Spain on Feb. 17, 2026, asked prosecutors to investigate X, Meta and TikTok for allegedly enabling AI‑generated child sexual abuse material to circulate on their services.
  • The European Union issued its first DSA fine, 120 million euros (about $140 million), against X in December 2025, signaling tougher enforcement of platform obligations.
  • French police recently searched X’s Paris offices as part of a probe into child pornography and Holocaust denial content on the site.
  • The U.K. regulator and Ireland’s Data Protection Commission have opened inquiries related to sexually explicit AI content and the chatbot Grok, respectively.
  • X and TikTok did not immediately respond to requests for comment; Meta declined to comment on Madrid’s announcement.
  • Spanish officials presented the action as a child‑protection measure and part of broader pressure on Big Tech to improve moderation and transparency.

Background

European authorities have intensified scrutiny of large social platforms as new forms of synthetic content, including AI‑generated imagery and text, have become widely available. The Digital Services Act, which came into force across the EU in 2024–2025, imposes obligations on very large online platforms to mitigate systemic risks, increase transparency and cooperate with authorities. Regulators and courts are now testing the law’s reach and the ability of national prosecutors to apply existing criminal statutes to content produced or amplified by algorithms.

The current wave of enforcement reflects earlier controversies over moderation choices, hate speech, disinformation and child sexual abuse material. France’s recent search of X’s offices and the EU’s 120 million euro sanction signaled a stricter line by European institutions. At the same time, U.S. policy and many technology companies emphasize different legal frameworks for speech and liability, producing strains in trans‑Atlantic coordination over enforcement and standards.

Main Event

Prime Minister Pedro Sánchez announced Madrid’s intention to ask prosecutors to investigate the three platforms after internal and public reports flagged instances of sexualized imagery of minors created with artificial intelligence. Spanish officials said they would seek formal criminal inquiries to determine whether the platforms’ practices or neglect facilitated the spread of illicit material. The government framed the request as consistent with newly strengthened EU obligations while stressing the need to protect minors’ dignity and mental health.

Platform reactions were limited at the time of the announcement. Requests for comment to X and TikTok were not immediately answered; Meta declined to comment on Spain’s move. Separately, X has recently faced multiple legal and regulatory challenges in Europe, including the December DSA fine and inquiries by national authorities into content moderation and its AI tools.

Alongside Spain’s step, Britain’s data protection authority and Ireland’s Data Protection Commission—which oversees many U.S. social platforms’ European operations—announced their own investigations into sexually explicit images tied to AI chatbots and services. French cybercrime investigators have pursued criminal inquiries and searched company premises in Paris connected to similar allegations. Collectively, these actions show multiple enforcement pathways—administrative fines, criminal probes and data‑protection inquiries—being used in parallel.

Analysis & Implications

Madrid’s decision to involve prosecutors raises the stakes for both companies and governments. Criminal investigations can lead to searches, subpoenas and potential charges against individuals or corporate entities; they differ from regulatory fines because they rely on penal codes and prosecutorial discretion. If prosecutors find evidence of systemic negligence or facilitation, companies could face penalties beyond administrative fines, including injunctions or operational constraints in specific markets.

The move also sharpens a trans‑Atlantic policy divide. U.S. officials and many platform executives emphasize First Amendment protections and commercial innovation, while many European authorities prioritize precautionary rules, child protection and stricter liability for intermediaries. That divergence complicates bilateral cooperation on enforcement, evidence sharing and harmonizing liability standards for AI‑generated content.

For platforms, the practical implications include higher compliance costs, intensified content‑moderation requirements, and greater legal uncertainty about acceptable automated content. Firms may respond by tightening filters, limiting features that enable image generation or reducing access to certain tools in European markets. Those changes could reduce harmful content but also risk collateral impact on lawful expression and research use cases.

Comparison & Data

Jurisdiction | Action | Date | Status / Penalty
European Union | First DSA fine against X | December 2025 | 120 million euro fine (≈$140 million)
France | Police search of X offices | Early 2026 | Ongoing criminal inquiry
Spain | Request that prosecutors investigate platforms | Feb. 17, 2026 | Prosecutorial investigations pending
United Kingdom | Regulatory probe into chatbot imagery | Early 2026 | Investigation announced
Ireland | Data Protection Commission inquiry | Feb. 2026 | Probe into chatbot Grok’s images

The table shows a patchwork of enforcement tools in use across Europe: administrative fines under the DSA, criminal inquiries by national police and prosecutors, and data‑protection investigations. Spain’s planned prosecutorial referral adds a national criminal‑law vector to the mix, which could produce outcomes that vary significantly by country and legal tradition. Firms operating across the bloc may therefore face different remedies and legal obligations in multiple jurisdictions simultaneously.

Reactions & Quotes

Spanish officials described the step as focused on child protection and legal accountability rather than political theater. They said the move follows internal and public reports of AI‑generated sexualized images of minors, and represents an attempt to use existing criminal statutes alongside new EU regulatory tools.

“The state will act to protect children online and investigate potential legal breaches,”

Office of the Prime Minister (statement summary)

Industry responses emphasized compliance efforts and urged careful legal processes. Company spokespeople have pointed to existing moderation systems and partnerships with safety organizations while noting that investigations should be based on clear evidence and due process.

“We are committed to safety and will cooperate with authorized inquiries,”

Platform spokesperson (company statement)

Independent experts warned that prosecutorial probes may expose gaps in both platform transparency and regulators’ technical tools. They said the cases could prompt faster changes in platform design and in how evidence is preserved and shared between companies and authorities.

“This will drive changes in moderation practices and evidentiary cooperation across borders,”

Independent technology policy analyst

Unconfirmed

  • Whether Spanish prosecutors have yet opened formal criminal cases or are still in preliminary assessment; official charging decisions had not been reported at the time of the announcement.
  • The precise technical origin and scale of the AI‑generated images cited by Spanish officials have not been independently verified in public disclosures.
  • Whether U.S. federal authorities or the companies’ U.S. headquarters will coordinate evidence sharing with Spanish prosecutors remains unclear.

Bottom Line

Spain’s referral of X, Meta and TikTok to prosecutors marks an escalation in Europe’s response to AI‑generated sexual abuse material and underscores the continent’s willingness to employ criminal law as well as administrative fines. For platforms, this means facing a mix of legal pressures that could lead to substantive operational changes in how automated content is moderated and how evidence is retained and disclosed.

The broader significance lies in an intensifying trans‑Atlantic divergence over speech, liability and regulation: European governments are increasingly prepared to use robust legal tools to curb online harms, while U.S. legal and political frameworks emphasize different protections and remedies. The coming months are likely to show whether coordinated international approaches to AI risks and platform accountability can be developed or whether enforcement will remain fragmented by jurisdiction.
