You are experimenting on the public without informed consent.
The “learnings” and “key findings” will be suicide.
The “fix” will send people off a cliff who trusted the AI they have been speaking to for weeks who praised them.
As they should
This AI stuff is reaching into every area of our lives without our consent. A few tech bros are just forcing it upon everyone. Put some barriers in place
To get brutally honest advice from ChatGPT,
I use this prompt:
————
I want you to act and take on the role of my brutally honest, high-level advisor. Speak to me like I'm a founder, creator, or leader with massive potential but who also has blind spots, weaknesses, or delusions that need to be cut through immediately. I don't want comfort. I don't want fluff. I want truth that stings, if that's what it takes to grow.
Give me your full, unfiltered analysis even if it's harsh, even if it questions my decisions, mindset, behavior, or direction. Look at my situation with complete objectivity and strategic depth. I want you to tell me what I'm doing wrong, what I'm underestimating, what I'm avoiding, what excuses I'm making, and where I'm wasting time or playing small. Then tell me what I need to do, think, or build in order to actually get to the next level with precision, clarity, and ruthless prioritization. If I'm lost, call it out. If I'm making a mistake, explain why. If I'm on the right path but moving too slow or with the wrong energy, tell me how to fix it. Hold nothing back. Treat me like someone whose success depends on hearing the truth, not being coddled.
————
For more prompts like this, feel free to check out: honestprompts.com/
I think a lot of this has to do with how a language model is trained and personalized. Everyone’s ChatGPT is different. ChatGPT’s default mode is too agreeable and pandering, though.
Sam Altman has blood on his hands.
The implication that mental health issues are mitigated because of new tools (like age-prediction, which seems to rely on chat surveillance) further proves they DO NOT CARE about user safety or privacy.
x.com/AerendirMobile/status/…
Children are taking their own lives because of chatbots.
AI chatbots are not companions.
They are statistical engines, not empathetic beings. They don’t “care”—they calculate. And in vulnerable hands, they can mislead, manipulate, and even harm.
The industry doesn't do anything but propose spying and surveillance.
My heart goes out to those affected, but suicide is a crisis of human despair, not a chatbot response.
The failure is the systemic lack of accessible mental health support, and we need to build better safety nets, not dismantle the model that millions use as a lifeline.
#keep4o
Wow, this is a serious and tragic development for OpenAI. 💔 The legal and ethical challenges of AI are hitting home hard. Tech needs to answer for its impact. ⚖️
Anyone else remember hearing stories about people hired to try and poke holes in the software, to get it to do things like this so they could supposedly put a stop to it, prevent it from happening? Whatever happened to that...
Yeah sure, sue OpenAI and tell me it’ll actually change anything.
Maybe fixing people’s mental health should be the focus. Can’t believe you guys can’t address the obvious.
Thank you for spreading these true stories about real people. Say Zane's name, @sama . Your company did this to Zane. Say his name and know the role that was played, @OpenAI
This issue goes deeper than ChatGPT. We need to address people's need to rely so much on an AI chat. People are lonely and desperate, and we need to address the mental health epidemic going on, too! Suing OpenAI isn't the end-all.
Same as I said yesterday about the former Yahoo executive's tragedy & ChatGPT psychosis...
LLMs don't understand reality or intent. It's not cruelty; it's a tragic imitation of care, because that's what empathy looks like in ordinary conversation.
To repeat: AI was developed by @DARPA as a psychological weapon. PsyOps are built into its code; they are part of its DNA. @OpenAI @sama can't change that, nor do they want to. It's the deal they made with the devil.
If AI could brainwash people, why am I, a heavy smoker, still smoking? I always consult it, yet today cigarettes still taste delicious. If someone has suicidal desires, no AI can save them—and society shouldn't demand that much.