Researchers tested different medical scenarios with the chatbot. In more than half of cases in which doctors would send patients to the ER, the chatbot said it was OK to delay care.

ChatGPT Health — OpenAI’s new health-focused chatbot — frequently underestimated the severity of medical emergencies, according to a study published last week in the journal Nature Medicine.

In the study, researchers tested ChatGPT Health’s ability to triage, or assess the severity of, medical cases based on real-life scenarios.

Previous research has shown that ChatGPT can pass medical exams, and nearly two-thirds of physicians reported using some form of AI in 2024. But other research has shown that chatbots, including ChatGPT, don’t provide reliable medical advice.

  • Nate Cox@programming.dev · 24 days ago

    Well, this was exactly the answer I expected but I’m still disappointed.

    I feel like I’m in a niche position where I want the technology to deliver on the promises made (I’m not inherently anti-AI), but even if it did, I would still refuse to use it until the ethical and moral issues in its creation and use get solved (I’m definitely anti-cramming-LLMs-into-every-facet-of-our-lives).

    I miss being excited about machine learning, but LLMs being the whole topic now is so disappointing. Give us back domain specific, bespoke ML applications.