Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Semi-obligatory thanks to @dgerard for starting this.)

  • skillissuer@discuss.tchncs.de · 6 days ago

    ai fan asks chempros about their use of lying boxes: majority opinion is that this shit is useless, leaks confidential information and is a massive legal liability https://www.reddit.com/r/Chempros/comments/1hgxvsj/ai_in_the_workplace_how_have_chemistsscientists/

    top response:

    It’s a good trick to be instantly dismissed. No, really, that’s the latest I had in terms of company policy. If you’re caught using AI for anything, you’re out the door. It’s a lawsuit waiting to happen (and a lawsuit we cannot defend against). Gross misconduct, not eligible for rehire, and all that. Same as intentionally misrepresenting data (because it is). (Pharma)

    • blakestacey@awful.systems · 6 days ago

      From the replies:

      In cGMP and cGLP you have to be able to document EVERYTHING. If someone, somewhere messes up the company and authorities theoretically should be able to trace it back to that incident. Generative AI is more-or-less a black box by comparison; plus how often it’s confidently incorrect is well known and well documented. To use it in a pharmaceutical industry would be teetering on gross negligence and asking for trouble.

      Also suppose that you use it in such a way that it helps your company profit immensely and—uh oh! The data it used was the patented IP of a competitor! How would your company legally defend itself? Normally it would use the documentation trail to prove that they were not infringing on the other company’s IP, but you don’t have that here. What if someone gets hurt? Do you really want to make the case that you just gave Chatgpt a list of results and it gave a recommended dosage for your drug? Probably not. When validating SOPs are they going to include listening to Chatgpt in it? If you do, then you need to make sure that OpenAI has their program to the same documentation standards and certifications that you have, and I don’t think they want to tangle with the FDA at the moment.

      There’s just so, SO many things that can go wrong using AI casually in a GMP environment that end with your company getting sued and humiliated.

      And a good sneer:

      With a few years and a couple billion dollars of investment, it’ll be unreliable much faster.

      • skillissuer@discuss.tchncs.de · 6 days ago

        for anyone wondering, cgmp/cglp means current good manufacturing/laboratory practice, and it’s mostly a set of paperwork concerning audits etc. and the repeatability of everything

        • Soyweiser@awful.systems · 5 days ago

          I assume a few of these good practices were discovered after a certain price in blood was paid.

          • skillissuer@discuss.tchncs.de · 5 days ago

            everything has to be validated, certified, calibrated, written down and accessible for audit, on top of, you know, the actual physical side of good manufacturing like keeping everything clean and in spec. some of that is to control for random fuckups and some is for cover-your-ass purposes. but yeah, a good couple thousand people died before it became an actual globally enforced thing

    • Sailor Sega Saturn@awful.systems · 6 days ago

      Days since last comparison of Chat-GPT to shitty university student: zero

      More broadly I think it makes more sense to view LLMs as an advanced rubber ducking tool - like a broadly knowledgeable undergrad you can bounce ideas off to help refine your thinking, but whom you should always fact check because they can often be confidently wrong.

      Seriously why does everyone like this analogy?

      • blakestacey@awful.systems · 5 days ago

        As a person whose job has involved teaching undergrads, I can say that the ones who are honestly puzzled are helpful, but the ones who are confidently wrong are exasperating for the teacher and bad for their classmates.

      • skillissuer@discuss.tchncs.de · 6 days ago

        good question, i have no clue, especially since i wasn’t like this as an undergrad. it’s really not hard to say “i don’t know, boss” or “more experimental data is needed”, and chatgpt will never say this

        a shitty undergrad probably won’t leak confidential info either (maybe on the sender side, but never on the receiver side, as in receiving unexplained stolen confidential info out of cosmic noise)

    • YourNetworkIsHaunted@awful.systems · 6 days ago

      AI could be a viable test for bullshit jobs as described by Graeber. If the disinfomatron can effectively do your job, then doing it well clearly doesn’t matter to anyone.