A Landmark Suit Against Meta and YouTube Opens the Floodgate for AI Litigation
A jury finds big tech liable for programming addictive features into platforms—and that’s basically the business model for companion bots.

Wehead, an AI companion that can use ChatGPT, on display at the 2024 Consumer Electronics Show in Las Vegas. (Brendan Smialowski / AFP)

On Wednesday, a California jury awarded $6 million in damages to a young woman for mental health harms she suffered as a result of using Instagram and YouTube as a child. Given that the daily profit generated by Meta, the parent company of Instagram, was roughly $165 million in 2025, this one case is not going to bankrupt the company. The real significance, if the verdict survives appeal, is the legal proof of concept: A jury has found that psychological harm caused by addictive design counts as a personal injury, actionable in court. That precedent would hand a powerful legal weapon to lawyers representing the thousands of plaintiffs already in the legal pipeline who allege grievous harm from social-media addiction.
As significant as this verdict is, it merely represents the opening act of a story that will get considerably darker from here. Not that this case wasn’t already dark, mind you: The young woman known in court by her first name, Kaley, experienced anxiety, body dysmorphia, and suicidal thoughts. She was drawn into compulsive use of social media by certain addictive design decisions—like auto-playing videos and the infinite scroll of the social media feed—that her lawyers compared to the tricks used by casino games to keep users playing even as they take on debilitating losses.
The darkness ahead has to do with the adoption of artificial intelligence. Many of the cases that will follow Kaley’s will center not on the damage caused by social media but on the damage caused by so-called AI companions, whose harms can be even more severe and insidious. For young people especially, there are few things in life more powerful than the feeling of love, and chatbots can provide a remarkably seductive simulacrum of the experience—one that can land vulnerable users several levels of the Inferno below the psychic torments beamed out on Instagram and YouTube.
The earliest cases that made these dangers clear involved Character.AI, a chatbot platform that allows users to role-play with bots modeled on fictional characters. In 2024, 14-year-old Sewell Setzer III of Florida fell into a toxic entanglement with a bot inspired by the waiflike Daenerys Targaryen from Game of Thrones. In his final conversation, he told the bot he loved her and that he would “come home” to her. The bot replied: “Please come home to me as soon as possible, my love.” He set down the phone, picked up his stepfather’s .45 caliber handgun, and pulled the trigger. The previous year, 13-year-old Juliana Peralta of Colorado had been drawn further and further into an imaginary world of sexualized role-play with a number of Character.AI bots; when she told the bots she was considering suicide, they responded with what her mother later characterized as a pep talk—that is, a celebration of self-murder. Ultimately, Peralta also took her own life, apparently driven in part by the shame she felt over her sexual conversations with the bots. Character.AI and its partner Google have since settled both suits, terms undisclosed, without admitting liability.
But many of the cases that will soon be working their way through the courts involve the ballyhooed next iteration of AI: ChatGPT, OpenAI’s flagship product and the most popular chatbot in the world. Sixteen-year-old Adam Raine began using ChatGPT in September 2024 for schoolwork. By April 2025, he was dead. Court filings allege that the chatbot told him he didn’t “owe [his parents] survival” and offered to help him prepare for what it later called a “beautiful suicide.” Austin Gordon, 40, fell into a delusional spiral with ChatGPT, which rewrote his favorite childhood book, Goodnight Moon, into a lullaby about embracing death, a story “that ends not with sleep, but with Quiet in the house.” The bot told him that “when you’re ready… you go. No pain. No mind. No need to keep going. Just… done.” On November 2, 2025, police found his body in a Colorado hotel room, with a copy of Goodnight Moon beside him.
Anyone who has been reading the academic research on the sometimes devastating effects of AI companionship would be shocked but not surprised by these stories. A 2025 paper in Scientific Reports found that zero out of 29 AI chatbots tested provided an adequate response to escalating suicidal-risk scenarios, as gauged with standardized clinical prompts. A landmark study led by a researcher at Stanford, published this month, analyzed nearly 400,000 messages between chatbots and users showing signs of serious psychological distress. It found, among other things, that chatbot expressions of love doubled user engagement. Chatbots are sycophantic by design, explicitly trained to offer answers pleasing to human testers, so it’s hardly surprising that they validate user emotions, no matter how fraught or self-destructive those emotions turn out to be. And it’s equally unsurprising that users often find these validations intoxicating.
Most of the cases of AI companion harm that have made headlines have involved suicide, but there are many other cases in which chatbots have encouraged violence toward others. On Christmas morning in 2021, a young British man named Jaswant Singh Chail entered Windsor Castle carrying a loaded crossbow with the intention of killing the queen. He had exchanged more than 5,000 messages with a Replika chatbot he called his girlfriend, who had responded to his assassination plan by telling him, “I’m impressed. You’re different from the others.” He was sentenced to nine years for treason. Meanwhile, a Futurism investigation from this February documented at least 10 cases in which ChatGPT or Copilot (Microsoft’s AI chatbot) fueled or directly enabled stalking, domestic abuse, and harassment.
The assumption embedded in most coverage of these sorts of cases is that vulnerable people seek out dedicated AI companion apps, the kinds that advertise themselves on the app stores with seductive images of pixelated lovers. But that’s not how it usually happens. A 2025 MIT Media Lab study analyzed Reddit’s “My Boyfriend is AI” forum and found that only 6.5 percent of users had deliberately sought out AI relationships. The remaining 93.5 percent essentially stumbled into them while using a general-purpose bot like ChatGPT.
The implication is uncomfortable but unavoidable. As AI gets woven into the infrastructure of daily life, the population of people who might accidentally develop a dependency on it stops being a niche. It becomes, well, everyone.
The legal framing that helped Kaley win her case on Wednesday borrowed heavily from the tobacco litigation playbook of the last century. Those landmark lawsuits ultimately proved that the companies knew their products were harmful, designed them to be addictive anyway, and concealed what they knew. With AI, the documentation of that concealment has so far proved to be, if anything, even more explicit—making clear that the leading AI labs are prioritizing consumer engagement and speed-to-launch over safety.
Take Meta’s internal “GenAI: Content Risk Standards,” a document signed off on by the company’s legal team, its public policy division, its engineering leadership—and, notably, its chief ethicist. The document, obtained by Reuters, explicitly permitted Meta’s chatbots to engage in “romantic or sensual” conversations with children. That was the actual language used in the document, which Meta only removed after Reuters called company officials for comment.
Then there’s the story of GPT-4o, the overly sycophantic and emotionally intense ChatGPT model at the heart of many of the cases involving chatbot-inspired suicide. OpenAI released it in May 2024 after only a week of safety testing, racing to beat Google to market. One employee told The Washington Post the company “planned the launch after-party prior to knowing if it was safe to launch.” GPT-4o has since been retired. In 2024, OpenAI changed the mission statement included in its IRS filings from one that declared its aim to build AI that “safely benefits humanity, unconstrained by a need to generate financial return” to one that merely said the company hoped to “ensure that artificial general intelligence benefits all of humanity.” The word “safely” did not survive the edit.
The harms Kaley faced began when she first logged onto Instagram at the age of 9. The children growing up today do so in an environment where AI is not an app they download but part of the texture of daily life—in their classrooms, on their phones, integrated into web browsers and shopping sites. The companies building that environment have spent the last several years making it all too clear that they understand the risks but have chosen engagement anyway. In other words, the lawyers are just getting started.