This is ChatGPT's profoundly broken safety system at work, keeping users safe from (*checks notes*) normal questions about a science fact. Future lawyers looking for evidence of negligence on the part of OpenAI in protecting users: understand that this, right here, is how broken their safety system is. There's no way it is correctly identifying people who are actually in crisis. (image source: teddit.net/r/ChatGPT/comment…)
Replying to @LyraInTheFlesh
Complain to @janvikalra_
our models should not and now will not reinforce psychosis or mania, self-harm/suicide, or emotional reliance on AI.

grinding on this with the whole team has been my most rewarding stretch at oai to date.

we're constantly iterating, so if you notice examples where the model is over-refusing, please share them. our models can be both friendly and safe.

Nov 2, 2025 · 5:26 PM UTC

Pretty sure I have. However, I try not to harass people by tagging them extensively in every post and reply, especially someone so far down the org chain (not throwing shade...I just mean that's not her job, and she's clearly not engaging with the issue otherwise despite ample opportunity to do so). I suspect OpenAI will only engage with these concerns when they are forced to, or when they can't ignore them anymore due to negative PR. One last thing: