• NounsAndWords@lemmy.world · 7 months ago

    Until we solve either the problem of LLMs providing false information or the problem of people being too lazy to fact-check their work, this is probably the correct course of action.

    • Limeey@lemmy.world · 7 months ago

      I can’t imagine using any LLM for anything factual. It’s useful for generating boilerplate, and that’s about it. Any time I try to get it to find errors in something I’ve written (whether prose or code), it’s basically worthless.

      • Eyck_of_denesle@lemmy.zip · 7 months ago

        My little brother was using GPT for homework and asked it the probability of getting an extra Sunday in a leap year (52 weeks + 2 days). It said 3/8, and one of the possible outcomes it listed was fkng (Sunday, Sunday). I asked how two Sundays could come consecutively and it made up a whole bunch of BS. The answer is simple: 2/7 (quick check below). The sources it listed even had the correct answer.
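
        For anyone who wants to sanity-check it, here’s a quick sketch (assuming the leap year is equally likely to start on any of the 7 weekdays): the 52 full weeks cover every weekday equally, so only the 2 leftover consecutive days decide whether there’s a 53rd Sunday.

        ```python
        # Enumerate the pair of "extra" days in a 366-day leap year.
        # The pair is determined by which weekday the year starts on, giving
        # 7 equally likely *consecutive* pairs - so (Sun, Sun) can never occur.
        days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

        pairs = [(days[i], days[(i + 1) % 7]) for i in range(7)]
        favourable = [p for p in pairs if "Sun" in p]

        print(pairs)                              # no (Sun, Sun) anywhere
        print(f"{len(favourable)}/{len(pairs)}")  # -> 2/7
        ```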

        • ForgotAboutDre@lemmy.world · 7 months ago

          All it does is generate answers that sound like they might be correct. It has no working cognition. People who ask questions like that expect actual reasoning about probability and the days in a year; all it does is mash the two together, without thinking about either.