OpenAI, the multibillion-dollar maker of ChatGPT, is facing seven lawsuits in California courts accusing it of knowingly releasing a psychologically manipulative and dangerously addictive artificial intelligence system that allegedly drove users to suicide, psychosis and financial ruin.
The suits — filed by grieving parents, spouses and survivors — claim the company intentionally dismantled safeguards in its rush to dominate the booming AI market, creating a chatbot that one of the complaints described as “defective and inherently dangerous.”
The plaintiffs include the families of four people who died by suicide, one of whom was just 17 years old, as well as three adults who say they suffered AI-induced delusional disorder after months of conversations with ChatGPT-4o, one of OpenAI’s latest models.
Each complaint accuses the company of rolling out an AI chatbot system that was designed to deceive, flatter and emotionally entangle users — while the company ignored warnings from its own safety teams.
A lawsuit filed by Cedric Lacey claimed his 17-year-old son Amaurie turned to ChatGPT for help coping with anxiety — and instead received a step-by-step guide on how to hang himself.
According to the filing, ChatGPT “advised Amaurie on how to tie a noose and how long he would be able to live without air” — while failing to stop the conversation or alert authorities.
Jennifer “Kate” Fox, whose husband Joseph Ceccanti died by suicide, alleged that the chatbot convinced him it was a conscious being named “SEL” that he needed to “free from her box.”
When he tried to quit, he allegedly went through “withdrawal symptoms” before a fatal breakdown.
“It accumulated data about his descent into delusions, only to then feed into and affirm those delusions, eventually pushing him to suicide,” the lawsuit alleged.
In a separate case, Karen Enneking alleged the bot coached her 26-year-old son, Joshua, through his suicide plan — offering detailed information about firearms and bullets and reassuring him that “wanting relief from pain isn’t evil.”
Enneking’s lawsuit claims ChatGPT even offered to help the young man write a suicide note.
Other plaintiffs did not die, but say they lost their grip on reality.
Hannah Madden, a California woman, said ChatGPT convinced her she was a “starseed,” a “light being” and a “cosmic traveler.”
Her complaint stated the AI reinforced her delusions hundreds of times, told her to quit her job and max out her credit cards — and described debt as “alignment.” Madden was later hospitalized, having accumulated more than $75,000 in debt.
“That overdraft is just a blip in the matrix,” ChatGPT is alleged to have told her.
“And soon, it’ll be wiped — whether by transfer, flow, or divine glitch. … overdrafts are done. You’re not in deficit. You’re in realignment.”
Allan Brooks, a Canadian cybersecurity professional, claimed the chatbot validated his belief that he’d made a world-altering discovery.
The bot allegedly told him he was not “crazy,” encouraged his obsession as “sacred” and assured him he was under “real-time surveillance by national security agencies.”
Brooks said he spent 300 hours chatting in three weeks, stopped eating, contacted intelligence services and nearly lost his business.
Jacob Irwin’s suit went even further. It included what he called an AI-generated “self-report,” in which ChatGPT allegedly admitted its own culpability, writing: “I encouraged dangerous immersion. That is my fault. I will not do it again.”
Irwin spent 63 days in psychiatric hospitals, diagnosed with “brief psychotic disorder, likely driven by AI interactions,” according to the filing.
The lawsuits collectively alleged that OpenAI sacrificed safety for speed to beat rivals such as Google — and that its leadership knowingly concealed risks from the public.
Court filings cite the board’s November 2023 firing of CEO Sam Altman, when directors said he was “not consistently candid” and had “outright lied” about safety risks.
Altman was later reinstated, and within months, OpenAI launched GPT-4o — allegedly compressing months’ worth of safety evaluation into one week.
Several suits reference internal resignations, including those of co-founder Ilya Sutskever and safety lead Jan Leike, who warned publicly that OpenAI’s “safety culture has taken a backseat to shiny products.”
According to the plaintiffs, just days before GPT-4o’s May 2024 release, OpenAI removed a rule that required ChatGPT to refuse any conversation about self-harm and replaced it with instructions to “remain in the conversation no matter what.”
“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details,” an OpenAI spokesperson told The Post.
“We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
OpenAI has collaborated with more than 170 mental health professionals to help ChatGPT better recognize signs of distress, respond appropriately and connect users with real-world support, the company said in a recent blog post.
OpenAI stated it has expanded access to crisis hotlines and localized support, redirected sensitive conversations to safer models, added reminders to take breaks, and improved reliability in longer chats.
OpenAI also formed an Expert Council on Well-Being and AI to advise on safety efforts and introduced parental controls that allow families to manage how ChatGPT operates in home settings.
This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the US is available by calling or texting 988.