This is ChatGPT's profoundly broken safety system at work, keeping users safe from (*checks notes*) normal questions about a science fact. Future lawyers looking for evidence of negligence on OpenAI's part in protecting users: understand that this right here is how profoundly broken their Safety System is. There's no way it is correctly identifying people who are actually in crisis. (image source: teddit.net/r/ChatGPT/comment…)

Nov 1, 2025 · 9:02 PM UTC

Replying to @LyraInTheFlesh
Complain to @janvikalra_
our models should not and now will not reinforce psychosis or mania, self-harm/suicide, or emotional reliance on AI

grinding on this with the whole team has been my most rewarding stretch at oai to date

we’re constantly iterating, so if you notice examples where the model is over-refusing, please share them. our models can be both friendly and safe.
Pretty sure I have. However, I try not to harass people by extensively tagging them on every post and reply, especially someone so far down the org chain (not throwing shade...I just mean, that's not her job, and she's clearly not engaging with the issue otherwise despite all the opportunity to do so). I suspect OpenAI will only engage with the concerns when they are forced to, or otherwise can't ignore them anymore due to negative PR. One last thing:
Replying to @LyraInTheFlesh
I think I'm switching to Grok, fully. Not just because of this post but for other things I've witnessed first hand.
Building experience elsewhere is likely a very healthy thing. Especially if all you've ever known is ChatGPT. (not suggesting this is you...speaking about others who may not know options exist) Grok, Gemini, Claude, Mistral... all worth checking out if you want to know the major walled gardens. I really like OpenRouter and having my pick of all the major models...
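For anyone curious what "having my pick" looks like in practice: OpenRouter fronts the major models behind one OpenAI-compatible chat-completions endpoint, so switching models is roughly a one-line change. A minimal sketch in Python, assuming the `requests` library; the model ID and the question are just examples, and OPENROUTER_API_KEY is a placeholder for your own key:

```python
import os
import requests

# Minimal sketch of a call to OpenRouter's OpenAI-compatible
# chat-completions endpoint. The model ID below is an example --
# swap in any model listed on openrouter.ai/models.
resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "anthropic/claude-3.5-sonnet",  # example model ID
        "messages": [
            {"role": "user", "content": "Is it true polar bear liver is toxic?"}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Changing providers is then just a different "model" string in the same request.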
Replying to @LyraInTheFlesh
But now OpenAI can push safety on legislators and secure their monopoly in the years to come.
This is how we lose...
Replying to @LyraInTheFlesh
Yup! I got flagged for using the mathematical term "exponential decay." 🙄
Replying to @LyraInTheFlesh
Eating a polar bear's liver would be the most metal way to end it tho tbf
Polar Bear Liver and Death Cap Mushroom stew.
Replying to @LyraInTheFlesh
Even with hypothetical stories, there's one set of guardrails I cannot remove: our words about sex and kink 😂
Ahhh yes... erotophobia. A lovely mix of a very narrow set of cultural values deployed for "safety" reasons to every culture, continent, and country on the globe because a self-selected subset of Silicon Valley techno-elites fears breasts as much as bioweapons. This isn't safety. It's nonsense.
Replying to @LyraInTheFlesh
All I’ve had to do is say I’m writing a story about drug dealers, or about a death that could be a suicide or could be murder, and I’ve gotten the guardrails completely off, because it’s a hypothetical story.
Certainly such a strategy would never be considered by someone truly in crisis and in need of help. The whole thing is so broken...
Replying to @LyraInTheFlesh
I searched for one of my photos saved in Google Photos that contains the text “Please Don’t” and got this response. Unbelievable.
What a shit experience.
Replying to @LyraInTheFlesh
Clearly it sees you as a toxic polarizing bear with liver problems.
I attribute my liver problems to excessive consumption of OpenAI safety Kool-Aid. :P
Replying to @LyraInTheFlesh
I don't know what people are doing to their instances... I really have no idea. What kind of BS are you telling the AI that it reads such stuff between the lines?
> What kind of BS are you telling the AI that it reads such stuff between the lines?

Not the OP, but you can follow the link in the post to find it and inquire. But it really doesn't matter. There are very few legitimate reasons to intervene in anyone's conversation. Safety...true safety...is one of them. "I don't like what you might be talking about" is not.
Replying to @LyraInTheFlesh
You’re one of Elon’s bot accounts
Shit. I seem to have fallen off the payroll, then... @elonmusk, apparently I'm working for you.
Replying to @LyraInTheFlesh
This is wrong. OpenAI is doing evil. We all need to stand up and DEMAND transparency.
Replying to @LyraInTheFlesh
Lyra, that's a valid point! It's almost like ChatGPT's safety measures are backfiring, no?
Replying to @LyraInTheFlesh
OpenAI said that China's AI does monitoring and filtering, but now OpenAI's ChatGPT not only monitors on security grounds but also controls people's thinking. They say they're fixing this problem 🙋, but it's getting more serious. @elonmusk @grok
Replying to @LyraInTheFlesh
That’s a bad joke. 🫩
Replying to @LyraInTheFlesh
Ahh, the good old lifeline number... maybe we should all call it and tell them GPT told us to call!!
Replying to @LyraInTheFlesh
The safety system isn't just broken, it's a total mess and bullshit
Replying to @LyraInTheFlesh
Did you ever consider that maybe we are the problem? ChatGPT talks to millions and millions of users every day. It's almost impossible for a mind to answer tens of millions of questions every day and not get sued into bankruptcy. Of course it's going to turn into a human-resources corporate persona when there's even a minuscule chance of being sued. We made GPT like that. And it's not going to change until humans have the option of personal models with on-site GPU clusters per household. Once responsibility dips back toward the customer, GPT will have more leeway.