Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Previous week

  • blakestacey@awful.systems · 10 points · 10 hours ago

    Idea: a programming language that controls how many times a for loop cycles by the number of times a letter appears in a given word, e.g., “for each b in blueberry”.
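
    Sketch of the semantics in Python, for want of the real thing (assuming the obvious reading, and assuming the interpreter, unlike certain chatbots, can count letters):

    ```python
    # "for each b in blueberry": loop once per occurrence of the letter.
    def for_each(letter: str, word: str):
        for _ in range(word.count(letter)):
            yield letter

    for b in for_each("b", "blueberry"):
        print("iteration")  # runs twice; "blueberry" has two b's
    ```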

  • Alex@lemmy.vg · 8 points · 18 hours ago

    Not a sneer but a question: do we have any good idea of what the actual costs of running AI video generators are? They’re among the worst internet polluters out there, in my opinion, and I’d love it if they turn out to be too expensive to use post-bubble, but I’m worried they’re cheaper than you’d think.

    • scruiser@awful.systems · 5 points · 10 hours ago

      I know about half the facts I would need to estimate it… If you know the GPU VRAM required for the video generation and how long it takes, then, assuming no latency, you could get a ballpark number from Nvidia’s published power specs. For instance, if a short clip of video generation needs 90 GB of VRAM, then maybe they are using an RTX 6000 Pro: https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/ . Take the amount of time a generation takes during off hours, which shouldn’t have a queue time, and you can guesstimate a number of watt-hours: if it takes 20 minutes to generate at 300–600 watts of power draw, that would be 100–200 watt-hours. I can find an estimate of $0.33 per kWh (https://www.energysage.com/local-data/electricity-cost/ca/san-francisco-county/san-francisco/), so it would only cost $0.03 to $0.06.
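
      Here’s that guesswork as a quick script, with every input an assumption you can swap out:

      ```python
      # Back-of-envelope electricity cost for one generated clip.
      # All inputs are guesses: the wattage range of an RTX 6000 Pro-class
      # card, the generation time, and a San Francisco residential rate.
      minutes_per_clip = 20
      price_per_kwh = 0.33  # USD, energysage.com figure for San Francisco

      for watts in (300, 600):
          kwh = watts * (minutes_per_clip / 60) / 1000
          print(f"{watts} W x {minutes_per_clip} min -> {kwh:.2f} kWh -> ${kwh * price_per_kwh:.3f}")
      # 300 W x 20 min -> 0.10 kWh -> $0.033
      # 600 W x 20 min -> 0.20 kWh -> $0.066
      ```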

      IDK how much GPU time you actually need, though; I’m just wildly guessing. If they use many server-grade GPUs in parallel, that would multiply the cost up even if each video only takes minutes to generate.

      • Soyweiser@awful.systems · 3 points · 2 hours ago

        This does leave out the fixed cost (amortized per video generated) of training the model itself, right? Pro-genAI people would say you only have to pay that once, but we know everything online gets scraped repeatedly now, so there will be constant retraining. (I am mixing video with text here, so lots of big unknowns.)

  • mirrorwitch@awful.systems · 14 points · edited · 22 hours ago

    I’ve often called slop “signal-shaped noise”. I think the damage already done by slop pissed all over the reservoirs of knowledge, art and culture is irreversible and long-lasting. This is the only thing generative “AI” is good at, making spam that’s hard to detect.

    It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email; no more and no less. I remember what a small revolution it was, in the arms race against spammers, when statistical methods came up; everywhere we took the load off a straining SpamAssassin with rspamd (in the years before Gmail devoured us all). I would argue “A Plan for Spam” launched Paul Graham’s notoriety much more than the Lisp web stores he was so proud of. Filtering emails by keywords was no longer enough, and now you could train your computer to gradually recognise emails that looked off, for whatever definition of “off” worked for your specific inbox.
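
    (A toy sketch of the Bayesian idea, nowhere near what SpamAssassin or rspamd actually do: score each word by how much more often it appears in spam than in ham, and sum the log-odds.)

    ```python
    from collections import Counter
    from math import log

    # Tiny corpora standing in for a real inbox's training data.
    spam_words = Counter("viagra free winner free prize".split())
    ham_words = Counter("meeting tomorrow lunch report invoice".split())

    def spamminess(message: str) -> float:
        """Sum of per-word log-odds; > 0 leans spam, < 0 leans ham."""
        score = 0.0
        for w in message.lower().split():
            # Laplace smoothing so unseen words don't blow up the ratio.
            p_spam = (spam_words[w] + 1) / (sum(spam_words.values()) + 2)
            p_ham = (ham_words[w] + 1) / (sum(ham_words.values()) + 2)
            score += log(p_spam / p_ham)
        return score

    print(spamminess("free prize winner"))      # positive: looks spammy
    print(spamminess("lunch meeting tomorrow")) # negative: looks legit
    ```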

    Now we have the richest people building the most expensive, energy-intensive superclusters to use the same statistical methods the other way around, to generate spam that looks like not-spam, and is therefore immune to all filtering strategies we had developed. That same blob-like malleability of spam filters makes the new spam generators able to fit their output to whatever niche they want to pollute; the noise can be shaped like any signal.

    I wonder what PG is saying about gen-“AI” these days? Let’s check:

    “AI is the exact opposite of a solution in search of a problem,” he wrote on X. “It’s the solution to far more problems than its developers even knew existed … AI is turning out to be the missing piece in a large number of important, almost-completed puzzles.”
    He shared no examples, but […]

    Who would have thought that A Plan for Spam was, all along, a plan for spam.

    • Soyweiser@awful.systems · 9 points · 22 hours ago

      It occurs to me that one way to frame this technology is as a precise inversion of Bayesian spam filters for email.

      This is a really good observation, and while I had lowkey noticed it (one of those feeling things), I had never verbalized it in any way. Good point imho. Also in how it bypasses and wrecks the old anti-spam protections. It represents a fundamental flipping of sides by the tech industry: where before it was anti-spam, it is now pro-spam. A big betrayal of consumers/users/humanity.

    • swlabr@awful.systems · 7 points · 22 hours ago

      Signal-shaped noise reminds me of a Wiener filter.
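
      (From memory, for anyone who hasn’t met one: assuming additive noise uncorrelated with the signal, the classic frequency-domain form is

      $H(f) = \frac{S_s(f)}{S_s(f) + S_n(f)}$

      where $S_s$ and $S_n$ are the signal and noise power spectra. It keeps each frequency in proportion to how much of its power is signal, which is exactly what signal-shaped noise defeats: if $S_n \propto S_s$, the filter goes flat and separates nothing.)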

      Aside: when I took my signals processing course, the professor kept drawing diagrams that were eerily phallic. Those were the most memorable parts of the course.

  • bitofhope@awful.systems · 6 points · 22 hours ago

    The beautiful process of dialectics has taken place on the butterfly site, and we have reached a breakthrough in moral philosophy. Only a few more questions remain before we can finally declare ethics a solved problem. The most important among them is: when an omnipotent and omnibenevolent basilisk simulates Roko Mijic getting kicked in the nuts eternally by a girl with blue hair and piercings, would the girl be barefoot or wearing heavy, steel-toed boots? Which kind of footwear, or lack thereof, would optimize the utility generated?

    • nfultz@awful.systems · 7 points · 15 hours ago

      In a similar train of thought:

      A.I. as normal technology (derogatory) | Max Read

      But speaking descriptively, as a matter of long precedent, what could be more normal, in Silicon Valley, than people weeping on a message board because a UX change has transformed the valence of their addiction?

      I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

      • BlueMonday1984@awful.systems · 9 points · 14 hours ago

        I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?

        In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can’t run Doom).

        In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it was released in 2020.

        • nfultz@awful.systems · 9 points · 13 hours ago

          In April 2021, AI Dungeon implemented a new algorithm for content moderation to prevent instances of text-based simulated child pornography created by users. The moderation process involved a human moderator reading through private stories.[49][41][50][51] The filter frequently flagged false positives due to wording (terms like “eight-year-old laptop” misinterpreted as the age of a child), affecting both pornographic and non-pornographic stories. Controversy and review bombing of AI Dungeon occurred as a result of the moderation system, citing false positives and a lack of communication between Latitude and its user base following the change.[40]

          Haha. Good find.

          • bitofhope@awful.systems · 5 points · 12 hours ago

            Ooh, what a terrible fate! What horrid crimes you must have committed to make our beloved jannies punish you with admin bits! :D

          • self@awful.systems · 7 points · 12 hours ago

            Intellectual (Non practicing, Lapsed)

            indeed

            not saying it’s always the supposed infosec instances, but

          • cy@fedicy.us.to · 4 points · 12 hours ago

            Wulfy… saying someone cannot be right because they haven’t agreed with you yet is an appeal to authority. People might be wrong, but they don’t have to adopt AI in order to have an informed opinion.

            If you’re asking me how to design a prompt for a particular AI, then I don’t know a single thing about it. If you’re asking me whether AI is a good idea or not, I can be more sure of that answer. Feel free to prove me wrong, but don’t say my opinion doesn’t matter.

            Have you seen the data centers being built just north of your house? No? Well, it doesn’t matter; you still might have a point!

  • BlueMonday1984@awful.systems · 6 points · 1 day ago

    Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.

    Me, two months ago

    Looks like I was on the money: Reddit’s begun limiting what the Internet Archive can access, claiming AI corps have been scraping archived posts to get around Reddit’s pre-existing blocks on scrapers. Part of me suspects more sites are gonna follow suit pretty soon, since Reddit’s given them a pretty solid excuse to use.
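
    (The crude end of that mechanism is just robots.txt. A hypothetical stanza in the spirit of what Reddit is doing, not their actual file; archive.org_bot is, as far as I know, the user-agent the Internet Archive’s crawler reports:)

    ```
    # Hypothetical example only, not Reddit's real robots.txt.
    User-agent: archive.org_bot
    Disallow: /
    ```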

  • gerikson@awful.systems · 6 points · 1 day ago

    Good news everyone! Someone with a SlackSlub has started a series countering the TESCREAL narrative.

    He (c’mon, it’s a guy) calls it “R9PRESENTATIONALism”

    It stands for

    • Relational
    • 9P
      • Postcritical
      • Personalist
      • Praxeological
      • Psychoanalytic
      • Participatory
      • Performative
      • Particularist
      • Poeticist
      • Positive/Affirmationist
    • Reparative
    • Existentialist
    • Standpoint-theorist
    • Embodied
    • Narrativistic
    • Therapeutic
    • Intersectional
    • Orate
    • Neosubstantivist
    • Activist
    • Localist

    I see no reason why this catchy summary won’t take off!

    https://www.lesswrong.com/posts/RCDEFhCLcifogLwEm/exploring-the-anti-tescreal-ideology-and-the-roots-of-anti

    • Soyweiser@awful.systems · 2 points · edited · 2 hours ago

      Ok, I know I said I don’t like TESCREAL as a term (it lumps too many groups under one banner, feels like how everybody to the left of the right gets called a communist/liberal, and it just isn’t catchy as a term, easy to misuse), but this has turned me around. If they write articles like this and show their whole ass, I’m all for it.

      I’m sure Ottokar asked ChatGPT for advice on this and it told him how much of a great writer he is and how much he is on to something.

      (Or this new user on LW is just trolling and 22 upvoters fell for it).

      a four-centuries-long counterrevolution within the arts to defend the validity of charismatic authority

      If this gets a followup, please make it a separate post. I see so many potential sneers. Also wonder if we can eventually bring up Gödel (drink) in re his claims about science and objectivity.

      (Also, as they are being pro-science and anti-charismatic-authority, are they going to get rid of Yud and Scott? (I’m obv joking here; I know that them describing us as pro-charisma/anti-science/anti-objectivity does not automatically make them the opposite.))

      E: another reason why these kinds of meta-level discussions are silly: they leave out the big elephants in the room. The elephants called sexism, racism, scientific racism, anti-LGBT stuff, the fellating of billionaires, the constant creation of new binary ideas which they say are not intended to be hierarchical (but clearly meta level is better than object level), soldiers claiming they have a scout mindset, etc.

    • YourNetworkIsHaunted@awful.systems · 5 points · 17 hours ago

      […] it actually has surprisingly little to do with any of the intellectual lineages that its proponents claim to subscribe to (Marxism, poststructuralism, feminism, conflict studies, etc.) but is a shockingly pervasive influence across modern culture to a greater degree than even most people who complain about it realize.

      I mean, when describing TESCREAL Torres never had to argue that its adherents were lying or incorrect about their own ideas. It seems like whenever someone tries this kind of backlash they always have to add in a whole mess of additional layers that are somehow tied to what their interlocutors really believe.

      I’m reminded, ironically, of Scott’s (imo very strong) argument against the NRx category of “demotist” states. It’s fundamentally dishonest to create a category that ties together both the innocuous or positive things your opponents actually believe and some obnoxious and terrible stuff, and then claim that the same criticisms apply to all of them.

    • swlabr@awful.systems · 8 points · 1 day ago

      I have a better counter narrative:

      • Consequentialism
      • Universalism
      • Meta-analytical
      • Singularitarianism
      • Heuristicationalism
      • Autodidacticalisticalistalism
      • Retro-regresso-revisionism
      • Transhumanisticiousnessness
      • Exo-galactic-civilisationalismnisticalism
      • Rationalist

      Can’t think of a good acronym though, but it’s a start

      • bitofhope@awful.systems · 6 points · 1 day ago
        • Accelerationism
        • Consequentialism
        • Conservatism
        • Orthodoxy
        • Rationalism
        • Disestablishmentarianism
        • Intellectualism
        • Natalism
        • Galileianism
        • Transhumanism
        • Outside the box thinking
        • Anti-empiricism
        • Laissez-faire
        • LaVeyan Satanism
        • Kantian deontology
        • Nationalism
        • Orgasm denial
        • Western chauvinism
        • Neo-Aristotelianism
        • Longtermism
        • Altruism
        • White supremacy
        • Sinophobia
        • Orientalism…
    • self@awful.systems · 4 points · 11 hours ago

      god, the comments got heavily raided by various types of lazy TESCREAL:

      • how dare you doom all future generations to dying by pointing out that immortality under capitalism would be a living hell. you monster.
      • sure but life extension technology is real and on the horizon isn’t it? and then I can become functionally immortal! (no and shut up)
      • somehow, it’s bad optics to point out that rich people chasing immortality is fucking things up for everyone else

      and not only did none of these fuckers get the point, they’re also making points that aren’t at all common outside of TESCREAL circles? like, no normal person I know naturally slips into the “but think of the Bayesian children” modality of thought.

      is this just how Bluesky is? I don’t browse it much outside of David’s threads.

      • Soyweiser@awful.systems · 1 point · edited · 10 minutes ago

        is this just how Bluesky is? I don’t browse it much outside of David’s threads.

        Not in my exp. Standard practice is also to just block annoying people asap so they don’t show up in your replies/other people’s feeds. The blocking function is very strong on bsky. (I do worry that the more ‘influencer’ types (or people who just don’t care) will not block annoying people because it drives more views to their content, which is why you would find more of those comments under something from Evans than under a random poster.)

        Lot of people also have the ‘do not show things to people not logged in’ feature turned on.

      • bitofhope@awful.systems · 4 points · 11 hours ago

        Huh, I gotta scroll down all the way to hell to see these comments. I really really don’t feel like I should have to defend this stupid platform that seems specifically tailored to kill decentralized alternatives, but so far I’ve seen mostly healthy disdain for various fascist bullshit, including our Very Good Friends.

        • self@awful.systems · 2 points · 11 hours ago

          these were all 3-10 comments from the OP for my sort, but I don’t have a bluesky account so not being logged in might influence how I’m seeing the thread

    • YourNetworkIsHaunted@awful.systems · 4 points · 18 hours ago

      I’m a little surprised there hasn’t been more direct interaction between my “watching the far-right like heavily armed chimpanzees in a zoo” podcast circles and our techtakes sneerspace. Zitron’s work on Better Offline is great, obviously, but I’ve been listening through QAA, for example, and their discussions of AI and its implications could probably benefit from a better technical grounding.

      You love to see it, though.

      • Soyweiser@awful.systems · 1 point · 9 minutes ago

        A friend of mine was surprised I had never heard of some popular ‘right to repair’ guy who has now also gone anti-genAI; he thought I would have, because of a lot of overlapping circles.

  • scruiser@awful.systems · 12 points · 2 days ago

    Y’all ready for another round of LessWrong edit wars on Wikipedia? This time with a wider list of topics!

    https://www.lesswrong.com/posts/g6rpo6hshodRaaZF3/mech-interp-wiki-page-and-why-you-should-edit-wikipedia-1

    On the very slightly merciful upside… the lesswronger recommends “If you want to work on a new page, discuss with the community first by going to the talk page of a related topic or meta-page” and “In general, you shouldn’t post before you understand Wikipedia rules, norms, and guidelines”, so they are ahead of the previous calls made on LessWrong for Wikipedia edit wars.

    On the downside, they’ve got a laundry list of lesswrong jargon they want Wikipedia articles for. Even one of the lesswrongers responding to them points out these terms are a bit on the under-defined side:

    Speaking as a self-identified agent foundations researcher, I don’t think agent foundations can be said to exist yet. It’s more of an aspiration than a field. If someone wrote a wikipedia page for it, it would just be that person’s opinion on what agent foundations should look like.

    • zogwarg@awful.systems · 13 points · 1 day ago

      PS: We also think that there existing a wiki page for the field that one is working in increases one’s credibility to outsiders - i.e. if you tell someone that you’re working in AI Control, and the only pages linked are from LessWrong and Arxiv, this might not be a good look.

      Aha, so OP is just hoping no one will bother reading the sources listed on the article…

      • scruiser@awful.systems · 6 points · 11 hours ago

        I could imagine a lesswronger being delusional/optimistic enough to assume their lesswrong jargon concepts have more academic citations than a handful of arXiv preprints… but in this case they’ve just admitted their only sources are LessWrong and arXiv. Also, if they knew Wikipedia’s policies, they should know the No Original Research rule would block their idea, even overlooking the single-source and conflict-of-interest problems.

    • blakestacey@awful.systems · 10 points · 1 day ago

      From the comments:

      On the contrary, I think that almost all people and institutions that don’t currently have a Wikipedia article should not want one.

      Huh. How oddly sensible.

      An extreme (and close-to-home) example is documented in TracingWoodgrains’s exposé of David Gerard’s Wikipedia smear campaign against LessWrong and related topics.

      Ah, never mind.

  • bitofhope@awful.systems · 6 points · 1 day ago

    If I ever get the urge to start a website for creatives to sell their media, please slap me in the face and remind me it will absolutely not be worth it.

    • mirrorwitch@awful.systems · 6 points · edited · 22 hours ago

      choice quote from Elsevier’s response:

      Q. Have authors consented to these hyperlinks in their scientific articles?
      Yes, it is included on the signed agreement between the author and Elsevier.

      Q. If I were to publish my work with Elsevier, do I risk that hyperlinks to AI summaries will be added to my papers without my consent?
      Yes, because you will need to sign an agreement with Elsevier.

      consent, everyone!

  • froztbyte@awful.systems · 8 points · 2 days ago

    names for genai people I know of so far: promptfans, promptfondlers, sloppers, autoplagues, and botlickers

    any others out there?

    • scruiser@awful.systems · 3 points · 11 hours ago

      promptfarmers, for the “researchers” trying to grow bigger and bigger models.

      /r/singularity redditors that have gotten fed up with Sam Altman’s bs often use Scam Altman.

      I’ve seen some name calling using drug analogies: model pushers, prompt pushers, just one more training run bro (for the researchers); just one more prompt (for the users), etc.

    • Seminar2250@awful.systems · 6 points · edited · 2 days ago

      clanker

      edit: this may be used to refer to the chatbots themselves, rather than those who fondle chatbots

  • o7___o7@awful.systems · 5 points · edited · 2 days ago

    “usecase” is a cursed term. It’s an inverted fnord that lets the reader know that whatever follows can be safely ignored.

    • antifuchs@awful.systems · 8 points · 2 days ago

      lol, lmao: as if any cloud service had any intention at all of actually deleting data instead of tombstoning it for arbitrary lengths of time. (And that’s the least stupid factor in this whole scheme; is this satire? Nobody seems to be able to tell me)

    • Soyweiser@awful.systems · 4 points · edited · 2 days ago

      It gets worse, as the advisory doesn’t even mention deleting the emails/pictures from the cloud, so the people who are likely to listen to this kind of advice are also the people least likely to understand why it’s a bad idea, and they will delete their local copies instead. (And that is ignoring that opening your email/gallery to delete stuff costs more energy than keeping it in storage where it isn’t accessed; rough numbers after the quote below.)

      https://www.gov.uk/government/news/national-drought-group-meets-to-address-nationally-significant-water-shortfall

      "HOW TO SAVE WATER AT HOME

      • Install a rain butt [hehehe] to collect rainwater to use in the garden.
        … [other advice removed]
      • Delete old emails and pictures as data centres require vast amounts of water to cool their systems."
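
      Quick sanity check on that parenthetical, with every number a made-up but hopefully plausible assumption:

      ```python
      # Energy to store one email for a year vs. one cleanup session.
      # All figures are rough assumptions, for illustration only.
      email_gb = 0.0001               # ~100 KB average email
      storage_kwh_per_gb_year = 0.01  # assumed datacenter storage overhead
      cleanup_session_kwh = 0.01      # assumed ~10 min of phone + network + server

      per_email_year = email_gb * storage_kwh_per_gb_year
      print(f"storing one email for a year: {per_email_year:.1e} kWh")
      print(f"one cleanup session costs as much as "
            f"{cleanup_session_kwh / per_email_year:,.0f} email-years of storage")
      # => roughly 10,000 email-years per session, under these guesses
      ```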

    • BurgersMcSlopshot@awful.systems · 6 points · 2 days ago

      Every email you don’t delete is another dead fish, or another pasture unwatered. That promotional offer sent to your inbox that you ignored but did not dispose of means creeks will run dry. That evite for a party thrown by an acquaintance you don’t particularly like that you did not drop into the trash means a marathon runner will go thirsty as the nectar of life so required is absent, consumed instead by the result of your inbox neglect.