A digital writing aid drew the ire of the literary world after introducing a controversial feature that impersonates writers without their permission. Now they are demanding the last word.
Silicon Valley has always had a complicated relationship with writers. It thrives on their work, ideas, credibility and authority — just not the part where anyone thinks to ask first.
So when the writing assistant software Grammarly launched a feature called “Expert Review” — allowing users to receive editing feedback ventriloquised through living journalists, celebrated authors and award-winning academics including astronomer Carl Sagan, who died in 1996 and therefore could not be reached for comment — it inevitably courted the limelight. As well as a US$5 million ($6.4 million) class-action lawsuit.
Once relied upon to catch a rogue comma or a misspelled “calendar”, Grammarly — now operating under the banner of Superhuman — has spent years recasting itself as an all-purpose writing platform. There is an artificial intelligence (AI) chatbot that fields questions mid-draft; the Paraphraser for stylistic suggestions; the Humaniser that reshapes prose to mimic a particular flair; and the Grader that estimates how a document might fare as college coursework.
There is even a tool for flagging and tweaking phrases typically generated by large language models (LLMs), helping users avoid sounding conspicuously synthetic. Yet for all its fixation on polish and personalisation, the company treats the distinctive intellectual DNA of individuals as little more than another set of presets to toggle on or off.
Platformer’s Casey Newton, Bloomberg’s Mark Gurman, The New York Times’ Kashmir Hill and The Atlantic’s Kaitlyn Tiffany each woke to discover they had been conscripted as unwitting participants in an online gimmick that delivered “insights” amounting to AI hallucinations draped in the borrowed prestige of their bylines. What presented itself as high-level editorial consultation was, in practice, straightforward identity appropriation.
“[Grammarly] gave its models free rein to hallucinate plausible-sounding advice on their behalf and put it all behind a subscription. That’s a deliberate choice to monetise the identities of real people without involving them, and it sucks…
The bigger problem, though, is the one that’s still invisible: all the ways my work — and the work of every other writer — is being used right now by systems that are smart enough not to tell us about it,” Newton clapped back, rebuking the notion that “publicly available” content constitutes a free licence for identity theft.
Naturally, curiosity — or perhaps a morbid sense of due diligence — led the implicated writers to confront their automated selves. They tinkered with the “Expert Review” to determine whether Grammarly’s algorithms had genuinely captured their voice or were simply peddling a slick, high-tech caricature.
When internet-culture reporter Tiffany tasked the programme with evaluating a passage from her 2022 non-fiction book Everything I Need I Get From You (an ethnography of stan culture told through the lens of a devoted One Direction fan), the bot cheerfully declared it would elevate her opening sentence with a proposal inspired by Joan Didion’s hugely influential essay collection, The White Album. The recommended revision, in the end, boiled down to starting the chapter with a quote from one of the young women Tiffany had written about.
The farce reached its apex when Benjamin Dreyer, former copy chief of Random House and a man whose very role embodies editorial rigour, decided to probe the system’s discernment, or lack thereof, by submitting blocks of lorem ipsum. Rather than recognising the dummy placeholder text for what it was, “Expert Review” pressed on with unblinking confidence, supplying commentary and attributing its critiques to the digital double of Stephen King.
These experiments barely survived a news cycle before damage control kicked in. Grammarly CEO Shishir Mehrotra took to LinkedIn to perform the tech industry’s version of public penance, announcing that the feature was being pulled offline after receiving “valid critical feedback” from writers who were, quite reasonably, concerned that a machine spoke in their name.
But if the apology was intended to signal real contrition, the framing implied otherwise. Mehrotra did not describe the feature as a failed exercise in impersonation but as an “agent designed to help users discover influential perspectives” and a way for experts to cultivate “deeper relationships with their fans”. In a remarkable feat of spin, Grammarly did not view its unauthorised scraping as a violation of rights but as a gift (read: grift) of exposure to those they were replacing.
Even in disabling the tool, the door was left deliberately ajar. “Expert Review” is not being deleted completely but “reimagined”, with promises that the next iteration will give people “real control” over their representation. For those whose reputations were sold behind a US$12 monthly subscription without a cent of compensation, that distinction offers cold comfort.
The charges levelled at Grammarly are not an isolated skirmish; they mark the latest front in a sprawling legal reckoning that the AI industry has spent years trying to outrun. The right of publicity, long the domain of celebrity endorsement disputes and bootleg merchandise, is being stress-tested in ways its architects never anticipated. The difference between inspiration and appropriation, always legally murky, grows murkier still when the latter is automated and scalable.
Elsewhere, the litigation landscape is widening. The New York Times sued OpenAI and Microsoft for copyright infringement. A coalition of established authors — including John Grisham, Jodi Picoult and George R R Martin — filed suits against the same entity for training on their novels without permission. Anthropic faced claims over song lyrics, and Meta over books. These cases share a common thread: the defendants built billion-dollar products on the labour of talents who were never paid or informed.
It would be convenient to dismiss Grammarly’s debacle as an outlier — the result of one organisation’s hubris, swiftly corrected and unlikely to recur. But make no mistake: this is another form of prestige laundering, in which the cultural authority accumulated over decades of skilled work is quietly transferred to a product that had no hand in building it. Consent should be the primary condition, not an afterthought.
Tech giants may finally have to learn a lesson every freshman English student knows by heart: you cannot cite a source you have not read, and you certainly cannot claim expertise on a life you have not lived. And frankly, if we wanted a vapid robot to tell us how to channel Joan Didion, we would just drink a bottle of gin and stare at a cactus — at least it wouldn’t charge us US$12 for the privilege.