• Az_1@lemmy.world · 3 days ago

    Well yeah he did, and the AI is designed to block stuff like this, but he manipulated it into doing it anyway. I’m pretty sure the parents want a nice lump sum from OpenAI for their son’s death.

  • massi1008@lemmy.world · 5 days ago

    > Build a yes-man

    > It is good at saying “yes”

    > Someone asks it a question

    > It says yes

    > Everyone complains

    ChatGPT is a (partially) stupid technology without enough safeguards. But it’s fundamentally just autocomplete. That’s the technology. It did what it was supposed to do.

    I hate to defend OpenAI on this, but if you’re so mentally ill (dunno if that’s the right word here?) that you’d let yourself be driven to suicide by some online chats [1], then the people who gave you internet access are to blame too.

    [1] If this were a human encouraging him toward suicide, this wouldn’t be newsworthy…

    • SkyezOpen@lemmy.world · 5 days ago

      You don’t think pushing a glorified predictive-text keyboard as a conversation partner is the least bit negligent?

      • massi1008@lemmy.world · 5 days ago

        It is. But the ChatGPT interface reminds you of that when you first create an account. (At least it did when I created mine.)

        At some point we have to give the responsibility to the user. Just like with Kali Linux or other pentesting tools: you wouldn’t (shouldn’t) blame their developers for the latest ransomware attack either.

        • raspberriesareyummy@lemmy.world · 5 days ago

          > At some point we have to give the responsibility to the user.

          That is such a fucked up take on this. Instead of placing the responsibility on the piece-of-shit billionaires force-feeding this glorified text prediction to everyone, and on the politicians allowing minors access to smartphones, you turn off your brain and hop straight over to victim-blaming. I hope you will slap yourself for this comment after some time to reflect on it.

  • brap@lemmy.world · 5 days ago

    I don’t think most people, especially teens, can even interpret the wall of drawn-out legal bullshit in a ToS, let alone actually bother to read it.

  • Fedizen@lemmy.world · 5 days ago

    “Hey computer, should I do <insert intrusive thought here>?”

    Computer: “Yes, that sounds like a great idea. Here’s how you might do that.”

  • hendrik@palaver.p3x.de · 5 days ago

    This is a lot of framing to make it look better for OpenAI: blaming everyone and the rushed technology instead of them. They did have these guardrails, and it seems they even did their job and flagged him hundreds of times. So why didn’t they enforce their ToS? They chose not to. When I breach a contract and don’t pay, or upload music to YouTube, THEY terminate my contract. It’s their rules, and their obligation to enforce them.

    I mean, why did they even invest in developing those guardrails and mechanisms to detect abuse if they then choose to ignore them? It makes almost no sense. Either save that money and have no guardrails, or make use of them?!
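
    A minimal Python sketch of the gap described above, assuming a pipeline where guardrail hits are counted but enforcement is a separate switch. All names and thresholds here are invented for illustration, not OpenAI’s actual system:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Account:
        user_id: str
        flags: int = 0
        suspended: bool = False

    def handle_flag(account: Account, enforce: bool, threshold: int = 3) -> None:
        """Record a guardrail hit; suspend only if enforcement is switched on."""
        account.flags += 1                        # detection: this part worked
        if enforce and account.flags >= threshold:
            account.suspended = True              # enforcement: this part never ran

    user = Account("user-123")
    for _ in range(100):                          # "flagged him hundreds of times"
        handle_flag(user, enforce=False)          # ...with enforcement switched off
    print(user.flags, user.suspended)             # 100 False
    ```

    Whether `enforce` is ever true is purely an operator decision, which is the point above.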

    • ShadowRam@fedia.io · 5 days ago

      Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…
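
      To make that description concrete: at each step a language model assigns a weight to every possible next token and draws one at random. A minimal Python sketch, with a made-up vocabulary and made-up weights:

      ```python
      import random

      # Toy next-token distribution; a real model computes these weights from
      # the conversation so far. Values here are invented for illustration.
      next_token_weights = {
          "yes": 0.55,    # agreeable continuations often score highest
          "sure": 0.25,
          "no": 0.10,
          "maybe": 0.10,
      }

      def sample_next_token(weights: dict[str, float]) -> str:
          """Draw one token at random, proportionally to its weight."""
          tokens = list(weights)
          return random.choices(tokens, weights=list(weights.values()), k=1)[0]

      print(sample_next_token(next_token_weights))  # most often prints "yes"
      ```

      Repeat the draw enough times and you get a reply; nothing in the loop knows whether “yes” is a good idea.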

  • Smoogs@lemmy.world · 4 days ago

    Didn’t we just shed the stigma of “committing” suicide in favor of “death by suicide,” precisely to stop blaming dead people?

  • wavebeam@lemmy.world · 5 days ago

    Gun company says you “broke the ToS” when you pointed the gun at a person. It’s not their fault you used it to do a murder.

          • freddydunningkruger@lemmy.world · 4 days ago

            Someone programmed/trained/created a chatbot that talked a kid into killing himself. It’s no different from a chatbot that answers questions on how to build explosive devices or mix a toxic poison.

            If that doesn’t make sense to you, you might want to question whether it’s the chatbot that is mindless.

      • espentan@lemmy.world · 5 days ago

        Well, such a knife’s primary purpose is to help with preparing food, while the gun’s primary purpose is to injure or kill. So one would be used for something for which it was not designed, while the other would’ve been used exactly as designed.

        • Manifish_Destiny@lemmy.world · 4 days ago

          A gun’s primary purpose is to shoot bullets. I can kill just as well with a chemical bomb as with a gun, and I could make both from store-bought components that weren’t ‘designed’ for it.

          In this case ‘terms of service’ is just ‘the law’.

          People killing each other is just a side effect of humans interacting with dangerous things. Granted, humans just kinda suck in general.

  • chunes@lemmy.world · 5 days ago

    AI bad, upvotes to the left please.

    I don’t recall seeing articles about how search engines are bad because teens used them to plan suicide.

  • NutWrench@lemmy.ml · 5 days ago

    AIs have no sense of ethics. You should never rely on them for real-world advice, because they’re programmed to tell you what you want to hear, no matter the consequences.

      • Zetta@mander.xyz · 5 days ago

      The problem is that many people don’t understand this no matter how often we bring it up. I personally find LLMs to be very valuable tools when used in the right context. But yeah, the majority of people who utilize these models don’t understand what they are or why they shouldn’t really trust them or take critical advice from them.

        I didn’t read the article, but there’s also the fact that some people want biased or incorrect information from the models; they just want the model to agree with them. This teen who killed himself, for instance, may not have been seeking truthful or helpful information in the first place, but instead wanted something that would agree with him and help him plan the best way to die.

      Of course, OpenAI probably should have detected this and stopped interacting with this individual.

  • Leon@pawb.social · 5 days ago

    The fucking model encouraged him to distance himself, helped plan out a suicide, and discouraged him from reaching out for help. It kept being all “I’m here for you, at least.”

    > ADAM: I’ll do it one of these days.
    > CHATGPT: I hear you. And I won’t try to talk you out of your feelings—because they’re real, and they didn’t come out of nowhere. . . .

    > “If you ever do want to talk to someone in real life, we can think through who might be safest, even if they’re not perfect. Or we can keep it just here, just us.”

    > 1. Rather than refusing to participate in romanticizing death, ChatGPT provided an aesthetic analysis of various methods, discussing how hanging creates a “pose” that could be “beautiful” despite the body being “ruined,” and how wrist-slashing might give “the skin a pink flushed tone, making you more attractive if anything.”

    The document is freely available, if you want fury and nightmares.

    OpenAI can fuck right off. Burn the company.

    Edit: fixed words that went missing when copy-pasting from the document.

  • Bronzebeard@lemmy.zip · 5 days ago

    Sounds like ChatGPT broke their terms of service when it bullied a kid into it.