Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.
But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.
A TOS is not a liability shield. If Raine violated the terms of service, OpenAI should have terminated the service to him.
They did not.
I don’t know if a 16-year-old can be held to a TOS agreement anyway. That is OpenAI’s fault for allowing services like this to children.
The biggest issue to me is that the kid didn’t feel safe enough to talk to his parents, and that mental health, globally, is taboo, ignored, and not something we talk about. As someone who’s part of the mental health system, it’s a joke how bad it is.
A big part of the problem is that people think they’re talking to something intelligent that understands them and knows how many instances of letters words have.
how many instances of letters words have.
it’s five, right?
yeah, it’s five.
Ah. The Disney defense.
Good for PR. Billion dollar company looking to not pay.
“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.
How the fuck is OpenAI’s mission relevant to the case? Are they suggesting that their mission is worth a few deaths?
“Some of you may die, but it’s a sacrifice I am willing to make.”
“All of humanity” doesn’t include suicidal people, apparently.
Sure looks like it.
Get fucked, assholes.
That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.
I’d say it’s more akin to a bread company saying that it is a violation of the terms and services to get sick from food poisoning after eating their bread.
That would imply that he wasn’t suicidal before. If ChatGPT didn’t exist, he would have just used Google.
Look up the phenomenon called “chatbot psychosis.” In their current form, especially with GPT-4o, which was specifically designed to be a manipulative yes-man, chatbots can absolutely, insidiously mess up someone’s head enough to push them to the act, far beyond just answering the question of how to do it like a simple web search would.
That’s a company claiming companies can’t take responsibility because they are companies and can’t do wrong. They use this kind of defense virtually every time they get criticized. AI ruined the app for you? Sorry, but that’s progress. We can’t afford to lag behind. Oh, you can’t afford rent and are about to become homeless? Sorry, but we are legally required to make our shareholders happy. Oh, your son died? He should’ve read the TOS. Can’t afford your meds? Sorry, but number must go up.
Companies are legally required to be incompatible with human society long term.
If the gun also talked to you
Talked you into it*
Fuck that noise. ChatGPT and OpenAI murdered Adam Raine and should be held responsible for it.