AIs have no sense of ethics. You should never rely on them for real-world advice because they’re trained to tell you what you want to hear, regardless of the consequences.
The problem is that many people don’t understand this, no matter how often it gets pointed out. I personally find LLMs to be very valuable tools when used in the right context. But yeah, most people who use these models don’t understand what they are, or why they shouldn’t trust them with critical advice.
I didn’t read this article, but there’s also the fact that some people want biased or incorrect information from the models. They just want to be agreed with. Like, for instance, this teen who killed themself may not have been seeking truthful or helpful information in the first place, but instead just wanted the model to agree with them and help them plan the best way to die.
Of course, OpenAI probably should have detected this and stopped interacting with this individual.