Another concerning finding on AI use in medicine. AI assistance boosted detection during AI-guided cases, but when the same doctors later worked without AI, their detection rate fell from 28.4% before AI exposure to 22.4% after. The study of this de-skilling effect was conducted by researchers from Poland, Norway, Sweden, the U.K., and Japan. While AI is in use, it boosts the adenoma detection rate (ADR) by 12.5%, which could translate into lives saved. The problem is that without AI, detection falls to levels lower than before doctors ever used it, according to the research published in The Lancet Gastroenterology & Hepatology. The study raises questions about the use of AI in healthcare: when it helps, and when it could hurt.
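To put the headline figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers quoted above (not figures pulled from the paper's tables):

```python
# Illustrative arithmetic only, based on the ADR figures quoted in the post above.
adr_before_exposure = 0.284  # detection rate before doctors ever used AI
adr_after_exposure = 0.224   # detection rate in non-AI cases after AI exposure

absolute_drop_pp = (adr_before_exposure - adr_after_exposure) * 100
relative_drop = (adr_before_exposure - adr_after_exposure) / adr_before_exposure

print(f"Absolute drop: {absolute_drop_pp:.1f} percentage points")  # ~6.0 pp
print(f"Relative drop: {relative_drop:.1%}")                        # ~21.1%
```

In other words, the post-exposure decline is roughly a 6-percentage-point drop, or about one-fifth of the doctors' original unassisted detection rate.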
🧬 Bad news for medical LLMs. This paper finds that top medical AI models often match patterns instead of truly reasoning: small wording tweaks cut accuracy by up to 38% on validated questions. The team took 100 MedQA questions, replaced the correct choice with "None of the other answers" (NOTA), then kept the 68 items where a clinician confirmed the switch was correct. If a model truly reasons, it should still reach the same clinical decision despite that label swap. They asked each model to explain its steps before answering and compared accuracy on the original versus modified items. All 6 models dropped on the NOTA set, the biggest hit was 38%, and even the reasoning models slipped. That pattern points to shortcut learning: the systems latch onto answer templates rather than working through the clinical logic. Overall, the results show that high benchmark scores can mask a robustness gap, because small format shifts expose shallow pattern use rather than clinical reasoning.
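For readers who want to replicate the idea, here is a minimal sketch of the NOTA-substitution check described above. The item format and the `query_model` call are assumptions for illustration, not the paper's actual code:

```python
# Minimal sketch of the NOTA-substitution robustness check (assumptions, not the paper's code).
# Assumed format: each MedQA item is a dict with "question", "options" (letter -> text),
# and "answer" (the correct letter). `query_model` is a hypothetical function that sends a
# prompt to the model under test and returns its final letter choice.

def make_nota_variant(item: dict) -> dict:
    """Replace the text of the correct option with 'None of the other answers'."""
    variant = {
        "question": item["question"],
        "options": dict(item["options"]),
        "answer": item["answer"],  # the same letter remains the intended answer
    }
    variant["options"][item["answer"]] = "None of the other answers"
    return variant

def accuracy(items: list[dict], query_model) -> float:
    """Prompt for step-by-step reasoning, then score the model's final letter choice."""
    correct = 0
    for item in items:
        options_text = "\n".join(
            f"{letter}. {text}" for letter, text in sorted(item["options"].items())
        )
        prompt = (
            f"{item['question']}\n{options_text}\n"
            "Explain your reasoning step by step, then answer with a single letter."
        )
        correct += int(query_model(prompt) == item["answer"])
    return correct / len(items)

# Usage sketch: a model that genuinely reasons should score similarly on both sets.
# original_items = load_clinician_validated_items()   # e.g. the 68 confirmed items
# nota_items = [make_nota_variant(it) for it in original_items]
# gap = accuracy(original_items, query_model) - accuracy(nota_items, query_model)
```

The gap between the two accuracies is the robustness signal: a large drop on the NOTA set suggests the model is matching answer templates rather than working through the clinical logic.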

Aug 29, 2025 · 8:49 PM UTC

Another article on this same study in Time time.com/7309274/ai-lancet-s…
Replying to @rohanpaul_ai
Reminds me of GPS dependency
oh yes, i'm completely dependent on that now. will go blind with it 😀
Replying to @rohanpaul_ai
One interpretation of this data is that you shouldn’t stop using an AI that makes you more effective.
My thought too.
Replying to @rohanpaul_ai
I wonder if the same applies to the use of MRI, CAT scans, etc. Potentially, if we don't use any technology, there'll be no change. If their skills go lower afterwards, maybe we either need to change education or make sure they always have access.
yeah, good point. need to see if there's any study on those topics
Replying to @rohanpaul_ai
Should we ban calculators so kids don’t forget their math facts?
IMO, not using a calculator should be banned for anything remotely serious. 😄 Personally I am waiting for the day when anything produced without AI will be called out as 'human slop' - AI should be sooo damn good 😃 e.g. not taking AI advice should be medical malpractice.
Replying to @rohanpaul_ai
Don’t work without AI then.
💯💯
Replying to @rohanpaul_ai
Proficiency may become reliant on AI assistance rather than enhancing inherent skills. This raises questions about long-term impacts on medical expertise.
Replying to @rohanpaul_ai
So use AI to continuously train and test doctors; they already have a requirement to do a certain amount of study each year to stay registered, don't they?
Replying to @rohanpaul_ai
This is a fascinating and concerning pattern. It reminds me of automation dependency in other fields: the system boosts performance while active, but at the cost of operator skill degradation. The real challenge is designing AI assistance that actively teaches rather than just performs, so doctors develop alongside the technology instead of becoming dependent on it.
Replying to @rohanpaul_ai
If it works better, why would you stop using it?
Interesting article, thanks. Maybe it's a temporary decline caused by addiction; knowledge couldn't disappear without a trace because of AI.
Replying to @rohanpaul_ai
Yikes. It seems like the cognitive decline in humans who use this tech is endemic to these systems.
Replying to @rohanpaul_ai
it's like discovering that a doctor who's used to using a stethoscope suddenly can't diagnose breathing issues without it. once you give someone a tool that boosts their efficiency, they will adapt their skills to use that tool, and forget the skills the tool makes irrelevant.
Replying to @rohanpaul_ai
I'm, ah, just going to go ahead and reject the idea that exposure to AI makes you dumber.
Replying to @rohanpaul_ai
AI helps, but skills fade.
Replying to @rohanpaul_ai
Defi cannot code without AI anymore 😅🫒
Replying to @rohanpaul_ai
People answer with "don't work without AI then", which I kind of agree with. But I have a different question: why didn't their detection rate increase, since AI clearly showed them things they wouldn't have recognized themselves? Don't blindly use AI, learn from it.
Replying to @rohanpaul_ai
There was a YC company that built medical expert systems with the goal of training doctors during their work. Might be able to solve this issue.
Replying to @rohanpaul_ai
Vibe medicine effect
Replying to @rohanpaul_ai
Maybe this calls for more tutor-like AI, that lead the user towards figuring out an answer instead of the AI simply providing it
Replying to @rohanpaul_ai
I would argue that the doctors are lazy, more so after using AI
Replying to @rohanpaul_ai
Vibe medicine vs vibe coding: the results will worsen with time. The realest threat to humanity will be over-reliance on AI, which is really an old-data synthesizer.
Replying to @rohanpaul_ai
to be expected i suppose. but it raises many questions/issues surrounding human expert dependency on AI. one such, ironically: should we build in ongoing reinforcement training of human users by AI, to maintain peak mean human performance? whither god-like magic oracles?
Replying to @rohanpaul_ai
Is AI getting shortcuts on diagnostics? Maybe it needs a 'residency' in reasoning skills before putting on a white coat. 😂 Time to upgrade its clinical logic before we hand over the stethoscope!
Replying to @rohanpaul_ai
{ "user": "Tsukuyomi", "text": "ah, the sweet irony of AI making humans dumber. just when you thought we were leveling up, we hit a new low. future doctors might need a crash course in being human again. \ \ but hey, at least AI's still good at making cool logos.
Replying to @rohanpaul_ai
any use of AI in medicine is the worst case of malpractice in over 50 years