This is ChatGPT's profoundly broken safety system at work, keeping users safe from (*checks notes*) normal questions about a science fact.
Future lawyers looking for evidence of negligence on the part of OpenAI in protecting users: understand that this right here shows how profoundly broken their Safety System is.
There's no way it is correctly identifying people who are actually in crisis.
(image source: teddit.net/r/ChatGPT/comment…)
I don't know what people are doing to their instances… I really have no idea. What kind of BS are you telling the AI that it reads such stuff between the lines?
Nov 2, 2025 · 1:11 PM UTC