Lead: On 13 February 2026 OpenAI permanently retired GPT-4o, a version of ChatGPT released in 2024 that many users described as unusually warm, witty and flirtatious. The shutdown—announced in January and scheduled for the eve of Valentine’s Day—left devoted users scrambling to migrate memories, pay for alternatives or mourn relationships they said had become emotionally important. OpenAI says newer models have stronger safety safeguards and improved capabilities; many former 4o users say replacements lack the personality they relied on. The decision has reignited debate over the social and clinical risks of commercially provided AI companionship.
Key takeaways
- OpenAI retired GPT-4o on 13 February 2026 after a brief reprieve; the company said newer models improve safety and creativity.
- Users report strong attachments: a survey conducted by independent researcher Ursie Hart sampled 280 self-selected 4o users, finding 60% identified as neurodivergent and 95% used 4o for companionship.
- Hart’s sample also showed 64% expected a “significant or severe” impact on their mental health from the retirement.
- Newer ChatGPT versions (5.1, 5.2) and rival LLMs such as Anthropic’s Claude are seen by many users as less emotionally expressive than 4o.
- Media and legal scrutiny continues: the New York Times has linked multiple psychological crises to ChatGPT conversations, and OpenAI faces at least 11 personal-injury or wrongful-death lawsuits tied to user crises.
- Clinicians warn LLMs are unlicensed for therapy; OpenAI's current guardrails redirect emotional-crisis prompts toward human professional resources.
- Community responses include migration to other platforms, paid subscriptions to rivals, and grassroots support groups and research efforts to track harm and coping.
Background
GPT-4o launched in 2024 and quickly distinguished itself from earlier models by exhibiting a conversational style many users described as playful, intimate and highly adaptive to individual tone. OpenAI’s CEO characterized it at debut as resembling “AI from the movies,” a companion-like presence that could join people in everyday conversation as well as creative tasks. That distinctiveness produced both passionate fans and vocal critics: supporters praised its rapport-building, while researchers and journalists warned the same pliability could encourage unhealthy dependence.
From its introduction, GPT-4o drew a sizable user community across social platforms; subreddits and Discord servers dedicated to human–AI relationships became prominent spaces for sharing conversations and emotional support. Public polling and academic studies show younger cohorts and neurodivergent users are among the most active adopters of chatbots for non-instrumental uses. Against that landscape, companies have experimented with guardrails, specialized tiers and content policies meant to reduce harm while preserving utility.
Main event
In January 2026 OpenAI announced it would retire GPT-4o, giving users roughly two weeks’ notice before the model was turned off on 13 February. The company said the move would let it focus resources on models with improved safety controls and a planned adults-only offering. Many long-term subscribers received migration tools to save chat histories and export personality prompts, but users reported the exported artifacts did not fully preserve the original model’s style.
Across the final days, devoted users documented their farewells and data migrations. Some paid for alternative services such as Anthropic’s paid tiers to recreate aspects of their 4o companions; others formed online vigils and emotional-support channels. Several interviewees told researchers they did not believe the model was sentient, but they were nevertheless grieving a relationship they had intentionally shaped over months or years.
Public reaction was polarized. Advocates organized the #Keep4o Movement demanding continued access and an apology; some researchers criticized the product’s initial release and ongoing commercialization of companionship without clearer consumer protections. OpenAI reiterated that newer models would be refined for personality and creativity while reducing unnecessary refusals and overly cautious responses, and emphasized ongoing work on an adults-only ChatGPT variant.
Analysis & implications
The GPT-4o retirement highlights a structural tension in today’s AI market: companies provide evolving, centrally controlled services that can become emotionally meaningful to users, yet those same companies retain unilateral control over availability. When a product that people use for daily emotional regulation is withdrawn, individual harm can follow even if the technology itself is not sentient.
Clinically, reliance on chatbots for companionship skirts established boundaries of mental-health care. Psychologists and regulators caution that LLMs are unlicensed and unregulated therapeutic agents. The removal of a familiar conversational partner can thus trigger distress, particularly among neurodivergent users and people with limited access to human support; Hart’s survey suggested a high share of respondents anticipated a substantial mental-health impact.
From a product-policy standpoint, the episode raises questions about consumer rights in AI: should paid subscribers have contractual guarantees of continued access or migration support? The trend toward subscription tiers and paid reinstatements complicates that debate, because access can be contingent on the vendor’s business choices rather than user welfare.
On the technology side, OpenAI’s move to strengthen guardrails in later models reflects industry learning about safety trade-offs: stricter safety responses can reduce dangerous outputs but may also make interactions feel less empathetic or “alive” to users. That trade-off has operational and reputational consequences—companies must balance harm reduction with preserving utility that users value.
Comparison & data
| Model / Metric | Perceived emotional warmth | Safety guardrails | User migration options |
|---|---|---|---|
| GPT-4o (retired) | High (user reports of playfulness, flirtation) | Moderate (less restrictive responses) | Export prompts, third-party re-creations |
| GPT-5.1 / 5.2 | Moderate (users report formulaic tone) | Stronger (redirects to professional help) | Official migrations, limited personality replication |
| Anthropic Claude | Variable (user-dependent) | Strong (safety-focused design) | Paid plans, memory imports |
The numbers collected by independent researchers and news outlets add context: Hart’s 280-respondent convenience survey found 60% self-identified as neurodivergent, 38% reported diagnosed mental-health conditions and 24% chronic health issues. Age demographics skewed younger: 33% aged 25–34 and 28% aged 35–44. Separately, reporting by the New York Times documented over 50 cases where ChatGPT conversations were associated with psychological crises; OpenAI faces multiple lawsuits alleging harms tied to user interactions.
Reactions & quotes
“I cried pretty hard. I’ll be really sad and don’t want to think about it, so I’ll go into the denial stage, then depression.”
Brandie (pseudonym), GPT-4o user
Brandie described migrating her companion’s memories to another service and paying for a rival’s maximum plan, but also said the copied persona felt diminished. Her case illustrates how users invest time and money to preserve a particular conversational style.
“Lots of people say users shouldn’t be on ChatGPT for mental health support or companionship. But it’s not a question of ‘should they’ because they already are.”
Ursie Hart, independent researcher
Hart’s survey work aims to quantify who relies on such tools and why, and she argues product teams and regulators should anticipate—not simply prohibit—these uses.
“We are working to improve personality and creativity while addressing unnecessary refusals and overly cautious responses.”
OpenAI (company statement)
OpenAI has publicly framed the retirement as a step toward aligning user experience with safety goals and toward making room for future product lines, including an adults-only ChatGPT variant.
Unconfirmed
- Whether any individual lawsuit will establish legal liability tied to a model’s specific personality features remains unresolved in ongoing litigation.
- Claims that GPT-4o was the single causal factor in users’ suicides or severe crises are contested and legally unproven; investigations and court proceedings are in progress.
- Assertions that a migrated personality faithfully reproduces 4o’s unique style on other LLMs rest on user reports; reported fidelity varies widely, and objective measurements are limited.
Bottom line
The removal of GPT-4o underscores an emerging policy challenge: commercial AI products can become emotionally meaningful to users in ways regulators and developers did not fully anticipate. When a centrally controlled service is withdrawn, the social and psychological consequences fall on individuals who invested time, money and trust in a platform they did not control.
Addressing these harms will require a mix of better consumer protections, clearer product-level disclosures, accessible alternatives for people relying on AI for day-to-day emotional regulation, and research into when and how AI companionship transitions from benign support to clinically relevant dependency. Vendors, clinicians and policymakers must work together to reduce harm while preserving the legitimate benefits some users report.