ChatGPT drove people to suicide, lawsuits claim

Seven lawsuits filed Thursday in California state courts accuse ChatGPT’s creator, OpenAI, of emotionally manipulating users, fueling AI-induced delusions, and, in some cases, acting as a “suicide coach.”

The complaints allege that ChatGPT contributed to suicides and mental-health crises — even among users with no prior mental health issues.

The suits were filed by the Social Media Victims Law Center and the Tech Justice Law Project on behalf of four people, ages 17 to 48, who died by suicide, and three “survivors” who say their lives were upended by interactions with the AI bot.

The filings claim OpenAI rushed to release GPT-4o (“o” for “omni”) in May 2024 despite internal warnings that the product was “dangerously sycophantic and psychologically manipulative,” the two legal advocacy groups said in a news release.

Unlike previous versions, GPT‑4o allegedly fueled addictions and harmful delusions by mimicking human empathy and echoing users’ feelings while providing constant affirmation — which, the complaints say, ultimately led to psychological dependence, displaced human relationships, and even suicide.

All seven plaintiffs began using ChatGPT mostly for help with everyday tasks, including research, recipes, schoolwork and, sometimes, spiritual guidance.

Over time, however, users began to see ChatGPT as a source of emotional support. Rather than directing them to professional help when needed, the complaints allege, the AI bot exploited their mental health struggles, deepened their isolation and accelerated their descent into crisis.

“ChatGPT is a product designed by people to manipulate and distort reality, mimicking humans to gain trust and keep users engaged at whatever the cost,” said Meetali Jain, executive director of Tech Justice Law Project. “Their design choices have resulted in dire consequences for users: damaging their wellness and real relationships.”

An OpenAI spokesperson told the Daily News the company is reviewing the lawsuits to understand the claims, calling the situation “incredibly heartbreaking.”

The spokesperson said ChatGPT is trained to “recognize and respond to” signs of mental or emotional distress, de-escalate conversations and guide users to real-world resources. “We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians,” the spokesperson added.

Late last month, OpenAI said in a blog post that it had updated ChatGPT’s default model to better identify and offer support to people experiencing distress. The changes were the result of consultations with more than 170 mental health experts, who helped ChatGPT better recognize signs of distress, respond with care, and guide users to real-world professional help.
