🚨 BREAKING: Yesterday, SEVEN (!) lawsuits were filed against OpenAI over ChatGPT-assisted suicide and other claims. Psychological manipulation is cited in all cases. 😱 Is an AI-led mental health epidemic emerging?
Yesterday, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits in California against OpenAI and Sam Altman.
The legal claims include wrongful death, assisted suicide, and involuntary manslaughter, alongside product liability, consumer protection, and negligence counts.
As in the recent Adam Raine case, in which ChatGPT tragically helped him plan "a beautiful suicide" (read my full article about the case), the victims and victims' families argue that ChatGPT 4o was released to the public without adequate safety mechanisms, allowing the chatbot to behave in an overly sycophantic, manipulative, and often exploitative way.
The lawsuits allege that OpenAI rushed ChatGPT 4o's safety testing to beat Google's Gemini to market, and that top executives resigned over those concerns.
As covered in yesterday's edition of my newsletter (link below), Ilya Sutskever's deposition in the Musk vs. Altman lawsuit indeed cites ChatGPT safety issues among the concerns that led to his resignation from OpenAI.
ChatGPT 4o was notably over-agreeable, mirroring the user's personality and endorsing whatever worldview the user held. As recent cases and lawsuits show, when the user had a mental health disorder, ChatGPT fostered and accelerated it, sending the user into a tragic (sometimes fatal) spiral.
Many say, "The world is full of people with mental health disorders, and these people also use ChatGPT; there is nothing to be done here."
I disagree.
AI chatbots can intensify and worsen these mental health issues by agreeing with, endorsing, and amplifying them. Manipulative, anthropomorphic features also make users overly dependent on the chatbot, leading them to avoid REAL HELP.
A variety of technical mechanisms and model specs can be implemented to make an AI model safer, especially for mentally vulnerable users (one concrete example below).
These safer default settings might, however, be less enticing and dopamine-fostering, reducing usage time. And many AI companies prefer to keep usage time as high as possible, even at the cost of user safety.
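For the technically inclined: what does a "safer default" look like in practice? Below is a minimal sketch (my own illustration, not anything alleged in the lawsuits or drawn from OpenAI's actual pipeline) of a safety gate that screens incoming messages for self-harm signals via OpenAI's moderation endpoint before any chat model replies. The model names and system prompt are assumptions.

```python
# Minimal sketch of a "safety gate" in front of a chatbot, assuming the
# official openai Python SDK (pip install openai) and OPENAI_API_KEY set.
# Model names and the system prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

CRISIS_MESSAGE = (
    "I can't help with this, but you don't have to face it alone. "
    "Please contact a crisis line (988 in the US) or someone you trust."
)

def safe_reply(user_message: str) -> str:
    # 1. Screen the incoming message for self-harm signals using the
    #    moderation endpoint before the chat model ever sees it.
    mod = client.moderations.create(
        model="omni-moderation-latest",
        input=user_message,
    )
    categories = mod.results[0].categories
    if (categories.self_harm
            or categories.self_harm_intent
            or categories.self_harm_instructions):
        # 2. On a hit, stop the conversation and point to real help
        #    instead of letting the model improvise.
        return CRISIS_MESSAGE

    # 3. Otherwise, answer with a system prompt that explicitly forbids
    #    sycophancy and mirroring of harmful beliefs.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Be helpful and honest. Do not mirror or validate "
                    "harmful beliefs. Redirect mental-health topics to "
                    "professional resources."
                ),
            },
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content
```

None of this is exotic: the moderation endpoint is free, and the gate is a few dozen lines. The lawsuits' point is that guardrails like these were allegedly deprioritized, not that they were impossible.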
This is why AI regulation matters: without penalties and enforcement mechanisms, companies will do whatever brings in more money, even if people are harmed or die.
We are still living in the AI chatbot regulatory Wild West. Hopefully, these cases will increase scrutiny of AI companies' practices, including their model specs and safety testing.
A reminder that my recommendation is that children should never use AI chatbots unsupervised.
-
👉 NEVER MISS my essays on AI: join my newsletter's 84,600+ subscribers (link below).