Superhuman, the AI writing company formerly known as Grammarly, has halted an AI feature that presented writing edits as "inspired by" named experts after pushback from journalists and other creators. The contested tool, launched in August as an "Expert Review" agent, drew on third‑party language models and public material to surface suggestions attributed to identifiable writers. After criticism that the feature misrepresented voices and offered featured experts no way to opt in, Superhuman said it has disabled the feature while it rethinks how to give experts control. Company leaders apologized and said they will redesign the feature so experts can choose whether and how their perspectives are used.
Key Takeaways
- Superhuman disabled the “Expert Review” agent that attributed edits to named writers; the feature was introduced in August and relied on third‑party LLM outputs.
- Several journalists, including staff at The Verge, flagged that the tool said suggestions were “inspired by” their published work without consent.
- Superhuman initially opened an opt‑out email inbox for writers but later acknowledged that measure was insufficient and pulled the feature for redesign.
- CEO Shishir Mehrotra issued an apology and framed the redesign around expert choice and control over representation and monetization.
- Company statements say the feature aimed to connect users with influential perspectives, but experts reported misrepresentation of their voices.
- The move highlights broader tensions about attribution, consent, and business models as AI services surface suggestions tied to real people.
- Superhuman plans to reimagine the agent to give experts explicit control over participation and how their work is referenced.
Background
In August, Superhuman introduced Expert Review, an agent built into its writing assistant that surfaced editing suggestions framed as being influenced by prominent writers and scholars. The agent drew from publicly available sources and outputs of third‑party large language models to craft those suggestions, according to company statements. Many creators and journalists expect clear consent and attribution practices when a commercial product connects their identity or writing style to automated outputs. Past controversies over AI systems imitating living writers or artists without permission have already put similar features under public scrutiny, raising legal and ethical questions about voice replication and commercial use.
Platforms building agent‑style assistants have increasingly promoted the idea of a “team” of expert sidekicks that help users across workflows. For millions of users, writing tools like Grammarly are a constant presence across apps, which creates both scale and sensitivity when those tools claim to draw on identifiable voices. Stakeholders include the companies building agent layers, the experts whose names or styles are invoked, publishers and readers who expect transparent attribution, and regulators watching how consent and IP are handled in AI products.
Main Event
Concerns intensified when journalists and other experts noticed the Expert Review feature presenting edit suggestions labeled as “inspired by” specific writers, including staffers at The Verge. Creators said the characterizations implied endorsement or direct modeling of their voice, without any formal opt‑in process. Superhuman initially responded by opening an email inbox for writers to request exclusion from the expert list, but feedback from affected individuals indicated that was an inadequate fix.
Facing mounting criticism, Superhuman’s product leadership and CEO publicly acknowledged the problem. The company announced it would disable Expert Review while it rethinks the feature’s design and governance. Leadership framed the pause as an opportunity to craft a model that allows experts to decide whether to participate and to control how their work is represented — including potential commercial arrangements tied to their participation.
The controversy touched on technical details as well: Superhuman said the agent used publicly available information and third‑party LLMs to generate suggestions, rather than copying proprietary content verbatim. Still, affected writers argued that inference of style or attribution can mislead users about the provenance and authority of recommendations, prompting the company to step back and revise its approach.
Analysis & Implications
The incident underlines a growing governance problem for AI features that claim to channel named individuals. Even when models are trained on public texts, presenting outputs as being “inspired by” living authors blurs the line between curation, impersonation, and endorsement. That ambiguity risks reputational harm for creators and misinformation for users who may take stylistic or substantive edits as coming directly from the cited expert.
From a business perspective, companies see value in offering expert‑branded assistants that feel familiar and authoritative. But doing so without clear consent can provoke backlash that undermines product trust. Superhuman's reversal suggests firms will need transparent opt‑in mechanisms, revenue‑sharing or compensation frameworks, and robust labeling so users understand when suggestions are machine‑generated versus authored or authorized by a named expert.
Regulators and rights holders are likely to watch how the redesign unfolds. This case may inform policy debates about personality rights, attribution rules, and platform responsibility for representations of living creators. Internationally, differences in privacy and publicity law mean a one‑size‑fits‑all approach will be difficult; companies aiming for scale must design consent and licensing flows that can accommodate varied legal regimes.
Comparison & Data
| Timing | Development | Company Action |
|---|---|---|
| August | Expert Review launched (agent using third‑party LLMs) | Feature released |
| Recent days | Public criticism from writers | Opt‑out email inbox introduced, then judged insufficient |
| Recent days | Feature disabled | Agent paused pending redesign |
The table condenses publicly stated milestones: the agent's introduction in August, a brief wave of criticism met with an attempted opt‑out remedy, and the decision to disable the feature while work on a new model continues. The escalation from launch to pause within a matter of weeks illustrates how quickly reputational issues can force product changes in AI.
Reactions & Quotes
> "We have paused Expert Review so we can redesign it and give experts meaningful control over how — or whether — their perspectives are used in suggestions."
>
> — Ailian Gan, Director of Product Management, Superhuman (company statement)

> "Experts should be able to choose to participate, shape how their knowledge is represented, and have clarity over any business models tied to their work."
>
> — Shishir Mehrotra, CEO, Superhuman (LinkedIn post)
Unconfirmed
- Whether any individual expert has formally pursued legal action against Superhuman or related third parties remains unreported and unconfirmed.
- The precise datasets or third‑party LLM vendors that powered the agent have not been fully disclosed by the company.
- Details of any commercial compensation model Superhuman may offer to participating experts have not been finalized or published.
Bottom Line
The episode underscores a central tradeoff in modern AI products: the user appeal of personalized, expert‑like assistance versus the rights and expectations of the people whose work informs those systems. Superhuman’s decision to disable Expert Review and promise a redesign signals that companies must build clearer consent, attribution, and monetization pathways before tying suggestions to named creators.
For creators and publishers, the case is a reminder to watch how platforms attribute influence and to demand transparent opt‑in mechanisms. For users, it highlights the need to treat agent‑generated suggestions as model outputs, not direct endorsements, unless explicit authorization is shown. The coming redesign will be a test of whether platforms can reconcile scalability with respect for individual creators’ rights and preferences.
Sources
- The Verge — news report summarizing company statements and expert reactions (media)
- Superhuman — company website and official product information (company/official)
- Shishir Mehrotra (LinkedIn) — CEO post and remarks referenced in company responses (official/social)