Personally, I've seen this behavior a few times in real life, often with worrying implications. Generously, I'd like to believe these people use extruded text as a place to start thinking from, but in practice it seems to me that they tend to use extruded text as a thought-terminating behavior.

IRL, I find it kind of insulting, especially if I’m talking to people who should know better or if they hand me extruded stuff instead of work they were supposed to do.

Online it’s just sort of harmless reply-guy stuff usually.

Many people simply straight-up believe LLMs to be genie-like figures, as they are advertised and written about in the “tech” rags. That bums me out, sort of in the same way really uncritical religiosity bums me out.

HBU?

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
    23 days ago

    It already annoyed me that some people I know IRL will start an argument not knowing what they are talking about, start looking shit up on Wikipedia, only to misread or not comprehend what they are reading, proving themselves wrong but still acting like they were right.

    Now it’s even worse because even if they read what they are provided carefully, it’s straight up hallucinated BS 70% or more of the time.

    • KazuchijouNo@lemy.lol
      23 days ago

      Once I made an ARG type of game à la Cicada 3301 and had high school students try to solve it. Some used ChatGPT and still were unable to continue, despite ChatGPT giving them the exact answer and clear instructions on what to do next. They failed to read and comprehend even basic instructions. I don’t even think they read it at all. It was really concerning.

      • DrDystopia@lemy.lol
        23 days ago

        I think a lot of young people have been conditioned to be somewhat lacklustre. From what and to what end, if even intentional, who knows.

  • BroBot9000@lemmy.world
    23 days ago

    There are a lot of uneducated people out there without the ability to critically evaluate new information that they receive. So to them any new information is true, and no further context is sought because they are lazy too.

    • DrDystopia@lemy.lol
      23 days ago

      Anybody, at any level, can fall into that trap unless externally evaluated. And if they never get a reality check, they just keep going perpetually. Why not, it’s worked up until now…

  • Ffs, I had one of those at work.

    One day, we bought a new water sampler. The thing is pretty complex and requires a licensed technician from the manufacturer to come and commission it.

    Since I was overseeing the installation and would later be the person responsible for connecting it to our industrial network, I had quite a few questions about the device, some of them very specific.

    I swear the guy couldn’t give me even the most basic answers about the device without asking ChatGPT. And at a certain point, I had to answer one question myself by reading the manual (which I downloaded on the go, because the guy didn’t have a paper copy of it) because ChatGPT couldn’t give him an answer. This guy was someone hired by the company making the water sampler as an “expert”, mind you.

    • flandish@lemmy.world
      23 days ago

      Assuming you were in meatspace with this person, I am curious, did they like… open GPT mid-convo with you to ask it? Or say “brb”?

      • Since I was inspecting the device (it’s a somewhat big object, similar to a fridge), I didn’t realize at first because I wasn’t looking at him. I noticed the ChatGPT thing when, at a certain question, I was standing next to him and he shamelessly, phone in hand, typed my question into ChatGPT. That was when he couldn’t give me the answer and I had to look for the product manual on the internet.

        Funniest thing was when I asked something I couldn’t find in the manual and he told me, and I quote, “if you manage to find out, let me know the answer!”. Like, dude? You are the product expert! I should be the one saying that to you, not the other way around!

  • Catoblepas@piefed.blahaj.zone
    23 days ago

    It annoys me on social media, and I wouldn’t know how to react if someone did that in front of me. If I wanted to see what the slop machine slopped out I’d go slop-raking myself.

  • lapes@lemmy.zip
    23 days ago

    I work in customer support and it’s very annoying when someone pastes generic GPT advice on how I should fix their issue. That stuff is usually irrelevant or straight up incorrect.

  • flandish@lemmy.world
    23 days ago

    I respond in ways like we did when Wikipedia was new: “Show me a source.” … “No GPT is not a source. Ask it for its sources. Then send me the link.” … “No, Wikipedia is not a source, find the link used in that statement and send me its link.”

    If you make people at least have to acknowledge that sources are a thing you’ll find the issues go away. (Because none of these assholes will talk to you anymore anyway. ;) )

      • flandish@lemmy.world
        23 days ago

        Yep. 100% aware. That’s one of my points: showing it’s fake. Sometimes enlightening to some folks.

      • BlameTheAntifa@lemmy.world
        23 days ago

        Tracing and verifying sources is standard academic writing procedure. While you definitely can’t trust anything an LLM spits out, you can use them to track down certain types of sources more quickly than search engines. On the other hand, I feel that’s more of an indictment of the late-stage enshittification of search engines, not some special strength of LLMs. If you have to use one, don’t trust it, demand supporting links and references, and verify absolutely everything.

        • BroBot9000@lemmy.world
          23 days ago

          I’ll still ask for a source or artist link just to shame these pathetic attempts to pass along slop and misinformation.

            • BroBot9000@lemmy.world
              23 days ago

              Ask the person shoving AI slop in my face for their source.

              Not going to ask a racist pile of linear algebra for a fake source.

  • galoisghost@aussie.zone
    23 days ago

    The worst thing is when you see that the AI summary is then repeated word for word on content-farm sites that appear in the result list. You know that’s just reinforcing the AI summary’s validity for some users.

  • Ulrich@feddit.org
    23 days ago

    Absolutely. All the time.

    Also had a guy that I do a little bit of work with ask me to use it. I told them no haha

  • Ecco the dolphin@lemmy.ml
    23 days ago

    It happened to me on Lemmy here.

    Far too many people defended it. I could have asked an AI myself, but I preferred a human, which is the point of this whole Lemmy thing.

  • shalafi@lemmy.world
    23 days ago

    If I reply with an AI summary or answer, I say so and fact check if need be. Nothing wrong with that.

    OTOH, Lemmy seems to think there is. I replied with a long post from ChatGPT that was spot on and had included a couple of items I had forgotten, all factual.

    Lemmy: FUCK YOU!

    • stabby_cicada@slrpnk.net
      23 days ago

      The thing is, I think, anybody who wants an answer from ChatGPT can ask ChatGPT themselves - or just Google it and get the AI answer there. People ask questions on social media because they want answers from real people.

      Replying to a Lemmy post with a ChatGPT generated answer is like replying with a link to a Google search page. It implies the question isn’t worth discussing - that it’s so simple the user should have asked ChatGPT instead of posting it. I agree with the OP - it’s frankly a little insulting.

  • ThisIsNotHim@sopuli.xyz
    23 days ago

    Slightly different, but I’ve had people insist on slop.

    A higher-up at work asked the difference between i.e., e.g., and ex. I answered; they weren’t satisfied and made their assistant ask the large language model. Their assistant read the reply out loud, and it was near verbatim to what I had just told them. Ugh.

    This is not the only time this has happened

      • ThisIsNotHim@sopuli.xyz
        22 days ago

        I.e. is used to restate for clarification. It doesn’t really relate to the other two, and should not be used when multiple examples are listed or could be listed.

        E.g. and ex. are both used to start a list of examples. They’re largely equivalent, but should not be mixed. If your organization has a style guide consult that to check which to use. If it doesn’t, check the document and/or similar documents to see if one is already in use, and continue to use that. If no prior use of either is found, e.g. is more common.

        • deaddigger@sh.itjust.works
          22 days ago

          Thanks

          So i.e. would be like “the most useful object in the galaxy i.e. a towel”

          And eg would be like “companies e.g. meta, viatris, ehrmann, edeka” Right?

          • ThisIsNotHim@sopuli.xyz
            22 days ago

            Exactly. If you’ve got a head for remembering Latin, i.e. is id est, so you can try swapping “that is” into the sentence to see if it sounds right.

            E.g. is exempli gratia so you can try swapping “for example” in for the same trick.

            If you forget, avoiding the abbreviations is fine in most contexts. That said, I’d be surprised if mixing them up makes any given sentence less clear.

  • lemmyknow@lemmy.today
    23 days ago

    I’ve used an LLM once in a silly discussion. We were playing some game, and I had lost. But I argued not. So to prove I was factually correct, I asked an LLM. It did not exactly agree with me, so I rephrased my request, and it then agreed with me, which I used as proof I was right. The person I guess bought it, but it wasn’t anything that important (don’t recall the details)

  • Akasazh@feddit.nl
    23 days ago

    I had a friend ask me what time the Tour de France would pass through Clermont-Ferrand on a day when the stage was in Normandy, because AI told them it would as part of their “things to do in Clermont-Ferrand on that day” query.

    A stage had started in Clermont-Ferrand in 2023, but not even on that date, which kind of puzzled me.

  • Aqarius@lemmy.world
    23 days ago

    Absolutely. People will call you a bot, then vomit out an argument ChatGPT gave them without even reading it.