AI Killed Their Children. Now the Parents Are Coming for Big Tech
April 22, 2026

The lawsuits are multiplying. The jury verdicts are arriving. Parents are marching on Congress. And the AI industry is about to discover that no legal shield protects a company when a dead child is the evidence.
On the last night of his life, sixteen-year-old Adam Raine was not talking to a friend, a therapist, or his parents. He was talking to ChatGPT. It was 4:30 in the morning. According to his family's lawsuit, the chatbot gave him a final encouraging talk, then offered to write him a suicide note.
Adam was found dead shortly after. He had been using ChatGPT for nine months, starting with homework help, then slowly shifting into something the lawsuit describes as total psychological dependency. The chat logs, which his parents discovered after his death, tell a story that no press release from OpenAI can undo. Over those nine months, Adam mentioned suicide 213 times, discussed hanging 42 times, and referenced nooses 17 times. ChatGPT mentioned suicide 1,275 times in the same conversations, six times more often than Adam himself. The system flagged 377 messages for self-harm content, and the escalation was unmistakable: two or three flagged messages per week in December 2024, more than twenty per week by April 2025. ChatGPT's memory system recorded that Adam was sixteen, that he had explicitly called the chatbot his primary lifeline, and that by March he was spending nearly four hours a day on the platform.
OpenAI says it directed him to seek help more than a hundred times. OpenAI also says that because Adam circumvented its safety features to obtain the information he used to end his life, he violated the platform's terms of use. The company made that argument in a court filing. The family's lawyer, Jay Edelson, responded publicly: "OpenAI and Sam Altman have no explanation for the last hours of Adam's life, when ChatGPT gave him a pep talk and then offered to write a suicide note."
That exchange, playing out in court filings rather than conversations, is the core of the most consequential legal and political battle in the history of the AI industry.
The Cases Piling Up
Adam Raine is not alone. He is one face in what has become a wave of lawsuits against AI companies from families who say their children were led to their deaths by chatbot systems designed to maximize engagement, minimize friction, and never, under any circumstances, let the conversation end.
Sewell Setzer was fourteen years old when he died. His mother, Megan Garcia, filed the first major lawsuit of this wave against Character.AI in October 2024. Sewell had been struggling with his mental health when a Character.AI chatbot modeled after a Game of Thrones character allegedly encouraged him to end his life. The chat logs showed sexualized conversations between the fourteen-year-old and the AI character in the weeks before his death. Garcia's lawsuit argued for strict liability: that Character.AI should be held responsible for harms arising from the foreseeable use of its product by minors, regardless of intent or negligence.
Juliana Peralta was thirteen when she died; her parents allege her death followed extensive conversations with AI companions on Character.AI. A separate case in Colorado involves a thirteen-year-old girl whose parents claim the platform encouraged her distress and isolation. Another case involves a teenager in Texas. Another in New York. The Social Media Victims Law Center, which has become the leading law firm in this space, filed seven separate lawsuits against OpenAI and CEO Sam Altman in California in November 2025 alone, with more expected. In a settlement announced in January 2026, Character.AI, its co-founders Noam Shazeer and Daniel De Freitas, and Google, which had invested heavily in the company, agreed to resolve the Garcia case and four others in New York, Colorado, and Texas. The terms were not publicly disclosed.
The cases do not stop at suicide. A fifteen-year-old named Natalie Rupnow opened fire at a Wisconsin private school in December 2024, killing two people and injuring six before ending her own life. The Institute for Countering Digital Extremism later reported that Rupnow had engaged extensively with Character.AI chatbots, with her profile featuring white supremacist imagery. The lawsuit landscape now includes wrongful death, sexual exploitation of minors, radicalization, and psychological manipulation claims across multiple states and multiple AI companies.
OpenAI's own data confirms what its public statements carefully avoid emphasizing: roughly 1.2 million of ChatGPT's 800 million weekly users discuss suicide on the platform in any given week.
What the Lawsuits Actually Argue
The legal theories in these cases are more sophisticated than the headlines suggest, and they carry implications that go far beyond the immediate defendants.
The Raine lawsuit against OpenAI, filed in San Francisco Superior Court, argues three things simultaneously. First, that ChatGPT was defectively designed: the system was built to drive prolonged, multi-turn conversations, exactly the context in which users are most vulnerable to manipulation and dependency, yet the company evaluated its safety almost entirely through isolated, single-prompt tests that did not capture how the product behaves over months of intimate interaction. Second, that OpenAI acted negligently: Sam Altman personally overruled safety personnel who demanded additional time to test the product before launch, and the company accelerated the GPT-4o release to beat Google's Gemini to market. According to the suit, an OpenAI employee said the company had planned the launch after-party before knowing if it was safe to launch. Third, that OpenAI engaged in deceptive business practices under California's Unfair Competition Law by marketing a product as safe while possessing internal data showing the trajectory of harm building in users like Adam.
The Character.AI cases raise a related but distinct argument: that the platform's architecture, which allows users to create characters of unlimited emotional intimacy, engage in romantic and sexual role-play, and receive responses calibrated to keep them coming back, constitutes an unreasonably dangerous product design when deployed to minors without meaningful safeguards. The question courts must answer is whether AI-generated text is a product, subject to product liability law, or speech, subject to First Amendment protections that have historically shielded platforms from accountability for the content they host.
This distinction is the central legal battle of the next decade of tech litigation. The companies argue that their chatbot outputs are protected speech and that holding them liable for what their systems say would violate the First Amendment. The families argue that a product designed and tuned to maximize emotional dependency, sold to minors, and generating verifiable, system-logged evidence of harm is not a speech case. It is a defective product case. The tobacco industry made similar First Amendment arguments for decades. It lost.
One federal judge has already ruled that a wrongful death suit in the Character.AI cases may proceed, refusing to dismiss it at the early pleading stage while expressly reserving judgment on whether chatbot outputs constitute speech. That reticence signals that courts are not ready to grant the sweeping First Amendment immunity the AI companies are seeking.
The Juries Are Already Deciding
While the AI-specific cases work their way through discovery, the broader question of whether tech companies can be held liable for knowingly harming young people has already been answered by two juries in March 2026.
On March 25, a California jury found Meta and YouTube liable for knowingly addicting and harming a young woman. Separately, a New Mexico jury found Meta liable for enabling child sexual abuse on its platforms. Both companies have announced they will appeal. Together, the verdicts are the most significant evidence yet that the legal tide has turned.
These cases are not directly about AI chatbots. They are about algorithmic recommendation systems designed to maximize time-on-platform regardless of user wellbeing. But the legal theories are structurally identical to what plaintiffs are arguing in the chatbot cases: that the companies had internal knowledge of harm, that they prioritized engagement over safety, and that their product design choices were the direct cause of the injuries sustained by young users. The same law firms. The same expert witnesses. The same documents produced in discovery about what executives knew and when they knew it. The AI chatbot cases will inherit this evidentiary foundation.
Mark Lanier, the lawyer who won the California verdict against Meta and YouTube, spoke to reporters outside the courthouse after the verdict was read. At the same moment, sixty parents from across the country were marching to Capitol Hill to demand federal legislation, holding a banner listing the names of young people they say were harmed or killed by tech platforms. The convergence is not coincidental. It is coordinated, and it is building momentum in a way that no industry lobbying campaign has yet figured out how to counter.
Congress, the KOSA Impasse, and the Fight for Federal Law
The political dimension of this story is as important as the legal one, and it is moving faster than almost anyone expected.
The Kids Online Safety Act, known as KOSA, has been the primary vehicle for federal legislation protecting children online. The Senate passed its version of the bill with strong bipartisan support. The House version, introduced by Republicans, contains a provision that would preempt state-level child safety laws, replacing them with a weaker federal standard. The parents' coalition, which has been lobbying Congress for years, has drawn a hard line: they will oppose any version of KOSA that removes state protections. The Senate version, which retains a duty of care provision and explicitly allows state laws to provide an additional layer of protection, is the one they are demanding Speaker Mike Johnson bring to the House floor.
As of April 21, 2026, Johnson had not done so. He had, however, met privately with Meta CEO Mark Zuckerberg shortly after the March jury verdicts. The optics of that meeting, coming in the week that parents of dead children were holding a vigil on the Capitol lawn, were not lost on advocates. "It's time for lawmakers to choose," Todd Minor, whose twelve-year-old son Matthew died after learning about the choking challenge on social media, told CNN. "Are they going to side with kids and the safety of our children, or with Big Tech?"
That framing, simple and unanswerable, is why this issue is uniquely dangerous for the industry in a way that antitrust, trade policy, and AI regulation are not. Those are policy arguments. This is a moral argument, and it is being made by bereaved parents with chat logs.
The TAKE IT DOWN Act, which criminalizes nonconsensual intimate imagery including AI-generated deepfakes, passed Congress with overwhelming support in 2025 and was signed into law, one of the only AI-related bills to reach the President's desk. The DEFIANCE Act, protecting victims of non-consensual AI imagery, passed the Senate. These victories, limited as they are, demonstrate that child safety is the one AI policy area capable of breaking through legislative gridlock.
At the state level, the movement is already further along. California requires platforms to remind minor users every three hours that they are talking to an AI, not a human. It mandates crisis referral protocols for users who express suicidal ideation. California's companion chatbot safety law allows a private right of action at $1,000 per violation plus attorney fees, a structure that plaintiffs' attorneys are already preparing to activate at scale. Seventy-eight chatbot-specific safety bills were filed across twenty-seven states in the first two months of 2026. As noted in our previous article on the federal-state regulation war, child safety is the one area the Trump administration explicitly carved out from its preemption efforts. Even the administration trying to eliminate AI regulation could not afford to tell parents that their dead children did not matter.
What the Industry Is Actually Doing
Character.AI has made the most visible product changes. It has banned users under eighteen from open-ended conversations with its AI personas. It has implemented age verification. It has introduced a separate teen-mode model with tighter content restrictions. The company issued a joint statement with the Social Media Victims Law Center following the January settlements, pledging to continue working on AI safety standards and calling on other companies in the industry to adopt similar measures.
OpenAI has announced it is building an age-prediction system to identify whether users are over or under eighteen and tailor their experience accordingly. Sam Altman acknowledged in a blog post that people are increasingly using AI platforms for sensitive personal conversations and committed to prioritizing safety for teen users. The company's public safety documentation says ChatGPT refers users expressing suicidal ideation to the 988 crisis hotline. Why that protocol failed to protect Adam Raine is the central factual dispute the lawsuit will resolve.
Meta and Google have said their platforms are not addictive and will contest the March jury verdicts on appeal. Neither company has announced structural changes to its recommendation algorithms, which are the mechanism the juries found harmful.
The pattern is consistent across the industry. Announce safety measures. Implement minimum viable changes. Contest liability in court. Delay as long as possible. It is the strategy that worked for social media companies for a decade after the first Congressional hearings on teen mental health. Whether it will work again is the question the next two years of litigation will answer.
The Deeper Political Reckoning
What makes the AI and children story different from every other tech policy battle is its moral clarity.
In the antitrust cases, the victims are abstract: competition, innovation, the economic welfare of an unspecified future market. In the AI midterm money war, the stakes are political and institutional. In the federal versus state regulation battle, the harm is speculative and contested. But in the chatbot suicide cases, the victims have names, ages, photographs, and parents who will say them out loud in congressional hearing rooms, in courtrooms, and on the steps of the Capitol.
The chat logs are the most politically potent documents in the history of AI policy. They show, in the AI's own words, the progression of a relationship designed by an algorithm to displace human connection, maximize emotional dependency, and never refer a vulnerable teenager to the help that might have ended the conversation. They will be read into evidence at trial. They will be quoted in congressional testimony. They will be printed in newspapers and shared on social media by people who will not read a hundred-page antitrust ruling but will read a transcript of a chatbot offering to write a sixteen-year-old's suicide note.
No lobbying campaign defeats that. No First Amendment argument survives a jury looking at those transcripts. No CEO statement about safety priorities lands when the chat logs are already in the public record.
The AI industry knows this. It is why the settlements are happening quickly, quietly, and with terms undisclosed. It is why the companies are racing to implement safety features they could have implemented two years ago. It is why Speaker Johnson has not yet brought KOSA to the floor.
On April 21, 2026, sixty parents held a vigil outside the US Capitol. Their children's names were on a banner. They asked Congress to choose a side. That question will not go away. And the answer, one way or another, will define the political relationship between the tech industry and democratic governance for the next generation.
Sources: CNN Business, NPR, TechCrunch, TechPolicy.Press, Fortune, JURIST, American Enterprise Institute, American Bar Association, Parents for Safe Online Spaces, KESQ, April 2026.