This is ChatGPT's profoundly broken safety system at work, keeping users safe from (*checks notes*) normal questions about a science fact.
Future lawyers looking for evidence of negligence on the part of OpenAI in protecting users: understand that this right here is how profoundly broken their safety system is.
There's no way it is correctly identifying people who are actually in crisis.
(image source: teddit.net/r/ChatGPT/comment…)
Complain to @janvikalra_
our models should not and now will not reinforce psychosis or mania, self-harm/suicide, or emotional reliance on AI
grinding on this with the whole team has been my most rewarding stretch at oai to date
we’re constantly iterating, so if you notice examples where the model is over-refusing, please share them.
our models can be both friendly and safe.
Nov 2, 2025 · 5:26 PM UTC