Researchers tested the chatbot on a range of medical scenarios. In more than half of the cases in which doctors would send a patient to the ER, the chatbot said it was OK to delay care.
ChatGPT Health — OpenAI’s new health-focused chatbot — frequently underestimated the severity of medical emergencies, according to a study published last week in the journal Nature Medicine.
In the study, researchers tested ChatGPT Health’s ability to triage, or assess the severity of, medical cases based on real-life scenarios.
Previous research has shown that ChatGPT can pass medical exams, and nearly two-thirds of physicians reported using some form of AI in 2024. But other research has shown that chatbots, including ChatGPT, don’t provide reliable medical advice.



I’m acutely aware that it’s computer software; however, LLMs are unique in that they have what you’re calling “randomness”. This randomness is not entirely predictable, and the results are non-deterministic. The fact that they’re mathematical models doesn’t really matter because of the added “randomness”.
You can ask the exact same question in two different sessions and get different results. I didn’t mean asking twice in a row; I thought that was clear.
If you use the same random data source, the results are deterministic. The same goes for user inputs and their timing.
I don’t know what else to say, because you can literally test this yourself and get non-deterministic results.
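Both points above can be true at once, and a minimal sketch shows why. Treating token sampling as a seeded weighted choice (an illustration only, not how any production LLM stack actually runs; real serving adds GPU floating-point and batching effects), fixing the random source makes the output reproducible, while drawing fresh entropy each session does not:

```python
import random

def sample_tokens(vocab, weights, n, seed=None):
    """Sample n 'tokens' by weighted choice; a fixed seed fixes the stream."""
    rng = random.Random(seed)  # seed=None pulls fresh OS entropy instead
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

vocab = ["the", "cat", "sat", "mat"]
weights = [0.4, 0.3, 0.2, 0.1]

# Same seed: identical output on every run — deterministic.
a = sample_tokens(vocab, weights, 10, seed=42)
b = sample_tokens(vocab, weights, 10, seed=42)
assert a == b

# No seed: each call draws different entropy, so two sessions
# asking the "same question" will generally see different samples.
c = sample_tokens(vocab, weights, 10)
```

So the disagreement is really about viewpoint: the math is deterministic given the random stream, but a user who can’t control that stream observes non-deterministic behavior.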