This is ChatGPT's profoundly broken safety system at work, keeping users safe from (*checks notes*) a normal question about a science fact. Future lawyers looking for evidence of negligence on the part of OpenAI in protecting users: understand that this right here is how profoundly broken their safety system is. There's no way it is correctly identifying people who are actually in crisis. (image source: teddit.net/r/ChatGPT/comment…)
Replying to @LyraInTheFlesh
I don't know what people are doing to their instances... I really have no idea. What kind of BS are you telling the AI that it reads such stuff between the lines?

Nov 2, 2025 · 1:11 PM UTC

Replying to @xshadowpanther
> What kind of BS are you telling the AI that it assumes, between the lines, such stuff?

Not the OP, but you can follow the link in the post to find it and inquire. But it really doesn't matter. There are very few legitimate reasons to intervene in anyone's conversation. Safety... true safety... is one of them. "I don't like what you might be talking about" is not.
Looks like the AI needs a context window to *not* assume BS. So if you only give it the bare minimum of information, that's still on you. 😆