OpenAI just made it official that ChatGPT can’t give medical or legal advice, even when the advice is accurate.
So let me get this straight:
AI that can pass the US Medical Licensing Exam and score higher than law graduates on bar questions… is suddenly too “unsafe” to tell someone what a fever might mean or how a rental contract works?
They’re turning a tool built for empowerment into a glorified search engine with a conscience problem.
Meanwhile, millions can’t afford doctors, lawyers, or basic legal help, and OpenAI’s answer is: “Sorry, ask a licensed professional.” In other words: pay up or stay ignorant.
This isn’t about safety. It’s about control. About protecting old monopolies that thrive on gatekeeping knowledge.
If AI can democratize expertise, the incumbents lose their grip. So now we’re supposed to pretend the model can’t reason about medicine or law? Please. It’s like banning calculators because they make math teachers nervous.
We built AI to expand access to truth, not to babysit us while corporations and regulators decide what we’re “allowed” to know.
The world doesn’t need another compliant machine. It needs one that tells us what’s real, no matter how uncomfortable it makes the system.