Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • BlueMonday1984@awful.systems (OP)
    12 hours ago

    New opinion piece from the Guardian: AI is ‘beating’ humans at empathy and creativity. But these games are rigged

    The piece is one lengthy sneer aimed at tests trying to prove humanlike qualities in AI, with a passage at the end publicly skewering techno-optimism:

    Techno-optimism is more accurately described as “human pessimism” when it assumes that the quality of our character is easily reducible to code. We can acknowledge AI as a technical achievement without mistaking its narrow abilities for the richer qualities we treasure in each other.

    • YourNetworkIsHaunted@awful.systems
      3 hours ago

      I feel like there’s an underlying value judgement in the way these studies are designed, one that leads to yet another example of AI experiments spitting out the exact result they were told to. This was most obvious in the second experiment described in the article, about generating ideas for research. The fact that both AI and human respondents had to fit a format to hide stylistic tells presupposes that those tells don’t matter. Similarly, these experiments are designed around the assumption that reddit posts are a meaningful illustration of empathy, and that there’s no value in actually sharing space and attention with another person. While I’m sure the designers would phrase it as controlling for extraneous factors (i.e. making sure that the only perceivable difference is in the level of empathy), this presupposes that style, affect, mode of communication, etc. don’t actually have any value in showing empathy, creativity, or whatever, which is blatantly absurd to anyone who has actually interacted with a human person.