Outrage Over Discord’s UK Persona Age-Test After ID and Data Concerns

Lead: Discord spurred a fierce backlash in February 2026 after a short UK experiment using Persona for age checks raised fresh privacy and trust concerns. The trial—which Discord says ran for less than one month and involved a small number of UK users—was described in an archived FAQ as storing submitted information for up to seven days before deletion. That disclosure, later removed by Discord, amplified user fears because a previous breach of a former age-check partner exposed about 70,000 government IDs. Both Discord and Persona have since said collected ID data tied to verification was deleted.

Key takeaways

  • Discord ran a UK age-verification experiment with Persona for under one month; the company says only a small number of users were included.
  • An archived FAQ claimed Persona would temporarily store submitted information for up to seven days, with ID details blurred except for photo and date of birth.
  • A prior third-party breach had exposed roughly 70,000 Discord users’ government IDs, heightening sensitivity around any ID collection.
  • Researchers found 2,456 Persona frontend files in a publicly accessible folder, prompting scrutiny of the company’s code and data practices.
  • Persona and Discord state that ID data tied to successful verifications was deleted immediately or shortly after confirmation.
  • Critics noted investor ties (Founders Fund is an investor in Persona) and flagged the potential for misuse or government access; Persona denies having any ICE or DHS contracts.
  • Researchers reported possible workarounds to Persona checks and highlighted an exposed domain referencing an “openai-watchlistdb”; OpenAI declined immediate comment.

Background

Online safety rules and national regulations have driven platforms to adopt more aggressive age verification. Australia’s under-16 social media restrictions and the UK’s Online Safety Act (OSA) pushed services such as Discord to develop mechanisms that can reliably distinguish minors from adults. Those laws require not only blocking minors from adult content but also preventing adults intent on contacting minors from doing so, which raises the bar for verification accuracy and evidence handling.

Age-assurance vendors operate across a spectrum of approaches—from nonbiometric signals to photo-based checks and document verification. Financial-services identity vendors like Persona entered this space because their fraud-detection tools provide a ready technical stack for verifying identity at scale. But companies with roots in financial verification often retain records for compliance and audit purposes, which makes data-retention practices a focal point for privacy advocates.

Main event

Discord rolled out a policy change that would default many users to teen experiences until their ages were verified, and announced plans to use a mix of methods including AI-based video selfies and behavioral signals to reduce reliance on document checks. After the announcement, an archived Discord FAQ revealed a UK experiment that named Persona as an age-assurance vendor and said submitted information would be stored for up to seven days before deletion—language Discord later removed.

The removed notice stoked user anger because Discord had recently faced a breach at a former partner that exposed about 70,000 government IDs. Users questioned why Discord would involve a vendor not listed publicly on its partner pages and why the deletion timeline appeared longer than the company’s public assurances that IDs are deleted “quickly” or “immediately” after verification.

Discord responded to requests for comment by saying the test covered only a small subset of UK users, lasted under a month, has concluded, and that Persona is no longer an active vendor. Persona’s CEO Rick Song and COO Christie Kim separately told reporters that verification data for confirmed users was deleted immediately, and that Persona has no active contracts with federal agencies such as DHS or ICE.

Security researchers and independent outlets then inspected Persona’s publicly exposed assets. Reporting and researcher posts identified an uncompressed frontend, totaling 2,456 files, on a publicly accessible server, as well as a domain labeled in reporting as “openai-watchlistdb.withpersona.com” connected to watchlist queries—raising questions about what systems Persona queries during verifications and whether internal or third-party watchlists are used.

Analysis & implications

The episode underscores the trade-offs platforms face: regulators demand robust checks to protect minors, while users expect minimal collection and tight controls on sensitive identifiers. Discord’s reliance on a mix of AI estimates, behavior signals, and selective document checks reflects an attempt to balance those pressures, but any ambiguity about retention or who processes data magnifies risk and erodes trust.

From a compliance perspective, vendors with financial-services pedigrees often retain records for audit and anti-fraud purposes. That retention can be lawful and operationally defensible, yet it increases the attractiveness of the data to attackers. Given Discord’s earlier exposure of 70,000 IDs through a third-party incident, any future retention—even short-term—can trigger legitimate alarm among users and regulators.

Technically, the report of 2,456 publicly accessible frontend files is significant: exposed code can reveal implementation details and potential attack vectors, and it can enable researchers (and bad actors) to discover workarounds. Even when firms claim no government contracts or no linkage to law enforcement, discovery of artifacts labeled for government use or for “watchlist” querying invites intense scrutiny and political concern, especially when investor profiles include figures like Peter Thiel.

Politically, the controversy may chill platform experimentation and push firms to adopt more transparent vendor lists and retention policies. Regulators in the UK and elsewhere are likely to press for clearer public notices about pilots, explicit vendor disclosures, and independent audits of data handling—requirements that could slow deployments but increase public confidence if implemented well.

Comparison & data

  • Previously exposed IDs (third-party breach): ~70,000
  • Persona frontend files found public: 2,456 files
  • Discord–Persona test duration (company statement): less than one month
  • Archived FAQ retention note: up to 7 days

The data above highlights why even short retention windows matter: tens of thousands of IDs were exposed in a prior incident, and the discovery of large numbers of public files can be a separate source of risk. Platforms and vendors should explain not only retention durations but the technical safeguards used during the retention interval.

Reactions & quotes

Discord and Persona offered public reassurances while critics and researchers continued to probe. Below are abbreviated official lines placed in context.

“IDs shared during appeals are deleted quickly—in most cases, immediately after age confirmation.”

Savannah Badalich, Discord (statement to The Verge)

Discord emphasized speedy deletion as routine practice, but the archived FAQ wording and the existence of public frontend files undercut the clarity of that message for many users.

“All the data of verified individuals involved in Discord’s test was deleted immediately upon verification.”

Rick Song, CEO of Persona (statement to Ars Technica)

Persona’s CEO offered a categorical deletion claim for verified users; researchers and critics have pressed for independently verifiable audits rather than company assurances alone.

“We are not partnered with federal agencies including DHS or ICE.”

Christie Kim, COO of Persona (company email)

Persona’s COO directly denied agency partnerships and said the company would make engagements public if they proceed; skeptics nonetheless called for clearer public disclosures about investor roles and governance.

Unconfirmed

  • Extent of UK user coverage: Discord says the test involved a small number of users, but the precise count and selection criteria have not been published publicly.
  • Nature of the exposed “openai-watchlistdb” domain and any underlying data sharing: reporting identified the domain but full technical details and scope of data queries remain unverified.
  • Whether any deleted records were ever backed up or cached in third-party logs during the experiment remains unconfirmed without an independent audit.

Bottom line

The incident illustrates how fragile trust can be when platforms experiment with sensitive identity systems. Even short, limited tests can provoke outsized alarm if users perceive ambiguity about retention, vendor roles, or potential government linkages—particularly after a major ID breach affecting roughly 70,000 accounts.

For platforms and vendors, the practical takeaway is clear: pilots should be accompanied by transparent, contemporaneous disclosures (who is involved, how long data is held, what is deleted, and what audits exist). Regulators may respond by demanding stronger vendor disclosure, shorter retention defaults, and independent verification—measures that could restore confidence but will also raise operational costs.

Sources

  • Ars Technica — investigative reporting and timeline of the Discord–Persona experiment (news outlet)
  • Persona (withpersona.com) — company website and public statements (company/official)
  • Discord — company announcements and support/FAQ pages (company/official)
  • OpenAI — corporate site (company/official; no immediate comment on reported items)
