Researchers tested different medical scenarios with the chatbot. In more than half of cases in which doctors would send patients to the ER, the chatbot said it was OK to delay care.
ChatGPT Health — OpenAI’s new health-focused chatbot — frequently underestimated the severity of medical emergencies, according to a study published last week in the journal Nature Medicine.
In the study, researchers tested ChatGPT Health’s ability to triage, or assess the severity of, medical cases based on real-life scenarios.
Previous research has shown that ChatGPT can pass medical exams, and nearly two-thirds of physicians reported using some form of AI in 2024. But other research has shown that chatbots, including ChatGPT, don’t provide reliable medical advice.



What bugs me about all this is that we had functioning systems before AI hit critical mass.
It's as if we built modern medicine and then resented that it only worked through effort and hard work.
https://www.npr.org/sections/health-shots/2013/02/11/171409656/why-even-radiologists-can-miss-a-gorilla-hiding-in-plain-sight
Medical errors are a huge cause of death in the US.
The new analysis of national data found that across all clinical settings, including hospital and clinic-based care, an estimated 795,000 Americans die or are permanently disabled by diagnostic error each year — confirming the pressing nature of the public health problem.
So let's not act like MDs aren't fucking up.