Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance…I guess.)

  • gerikson@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    29 days ago

    LWronger posts article entitled

    “Authors Have a Responsibility to Communicate Clearly”

    OK, title case, obviously serious.

    The context for this essay is serious, high-stakes communication: papers, technical blog posts, and tweet threads.

    Nope they’re going for satire.

    And ladies, he’s available!

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      28 days ago

      I eas slightly saddened to scroll over his dating profile and see almost every seemed to be related to AI even his other activities. Also not sure how well a reference to a chad meme will make you do in the current dating in SV.

      • BigMuffN69@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        28 days ago

        Bruh, there’s a part where he laments that he had a hard time getting into meditation because he was paranoid that it was a form of wire heading. Beyond parody. The whole profile is 🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩🚩

      • gerikson@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        28 days ago

        Maybe it’s to hammer home the idea that time before DOOM is limited and you might as well get your rocks off with him before that happens.

  • BigMuffN69@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    26 days ago

    Bummer, I wasn’t on the invite list to the hottest SF wedding of 2025.

    Update your mental models of Claude lads.

    Because if the wife stuff isn’t true, what else could Claude be lying about? The vending machine business?? The blackmail??? Being bad at Pokemon???

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    0
    ·
    29 days ago

    New thread from Baldur Bjarnason publicly sneering at his fellow programmers:

    Anybody who has been around programmers for more than five minutes should not be surprised that many of them are enthusiastically adopting a tool that is harmful, destroying industries, sabotaging education, and hindering the energy transition because they feel it’s giving them a moderate advantage

    That they respond to those pointing some of this out with mockery (“nuts”, “shove your concern up your ass”) and that their peers see this mockery as reasonable discourse is also not surprising. Tech is entirely built on the backs of workers with no regard for externalities or second order effects

    Tech is also extremely bad at software. We habitually make fragile, insecure, complex, and hard to maintain code that backs poor UIs. The best case scenario is that LLMs accelerate already broken software dev processes in an industry that is built around monopolies and billionaire extremists

    But, sure, feeling discouraged by the state of the industry is “like quitting carpentry as a career thanks to the invention of the table saw”

    Whatever

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      29 days ago

      This ties back into the recurring question of drawing boundaries around “AI” as a concept. Too many people just blithely accept that it’s just a specific set of machine learning techniques applied to sufficiently large sets of data. This in spite of the fact that we’re several AI “cycles” deep where every 30 years or so (whenever it stops being “retro”) some new algorithm or mechanism is definitely going to usher in Terminator II: Judgement Day.

      This narrow frame focused on LLMs still allows for some discussion of the problems we’re seeing (energy use, training data sourcing, etc) but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.

  • lagrangeinterpolator@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    26 days ago

    AI research is going great. Researchers leave instructions in their papers to any LLM giving a review, telling them to only talk about the positives. These instructions are hidden using white text or a very small font. The point is that this exploits any human reviewer who decides to punt their job to ChatGPT.

    My personal opinion is that ML research has become an extreme form of the publish or perish game. The most prestigious conference in ML (NeurIPS) accepted a whopping 4497 papers in 2024. But this is still very competitive, considering there were over 17000 submissions that year. The game for most ML researchers is to get as many publications as possible in these prestigious conferences in order to snag a high paying industry job.

    Normally, you’d expect the process of reviewing a scientific paper to be careful, with editors assigning papers to people who are the most qualified to review them. However, with ML being such a swollen field, this isn’t really practical. Instead, anyone who submits a paper is also required to review other people’s submissions. You can imagine the conflicts of interest that can occur (and lazy reviewers who just make ChatGPT do it).

  • zbyte64@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    I had applied to a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it is following what I am saying like a real human does with “uh huh” or what not. It asked me if I ever did Docker and I answered I transitioned a system to Docker. But I had done an awkward pause after the word transition so the AI bot congratulated me on my gender transition and it was on to the next question.

  • BigMuffN69@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    1 month ago

    An interesting takedown of “superforecasting” from Ben Recht, a 3 part series on his substack where he accuses so called super forecasters of abusing scoring rewards over actually being precogs. First (and least technical) part linked below…

    https://www.argmin.net/p/in-defense-of-defensive-forecasting

    "The term Defensive Forecasting was coined by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer in a brilliant 2005 paper, crystallizing a general view of decision making that dates back to Abraham Wald. Wald envisions decision making as a game. The two players are the decision maker and Nature, who are in a heated duel. The decision maker wants to choose actions that yield good outcomes no matter what the adversarial Nature chooses to do. Forecasting is a simplified version of this game, where the decisions made have no particular impact and the goal is simply to guess which move Nature will play. Importantly, the forecaster’s goal is not to never be wrong, but instead to be less wrong than everyone else.*

    *Yes, I see what I did there."

  • BigMuffN69@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    Actually burst a blood vessel last weekend raging. Gary Marcus was bragging about his prediction record in 2024 being flawless

    Gary continuing to have the largest ego in the world. Stay tuned for his upcoming book “I am God” when 2027 comes around and we are all still alive. Imo some of these are kind of vague and I wouldn’t argue with someone who said reasoning models are a substantial advance, but my God the LW crew fucking lost their minds. Habryka fucking wrote a goddamn essay about how Gary was a fucking moron and is a threat to humanity for underplaying the awesome power of super-duper intelligence and a worst forecaster than the big brain rationalist. To be clear Habryka’s objections are overall- extremely fucking nitpicking totally missing the point dogshit in my pov (feel free to judge for yourself)

    https://xcancel.com/ohabryka/status/1939017731799687518#m

    But what really drove me crazy like a drill to the brain the LW rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? Like OAI and Anthropic do not make a profit full stop. Like they are setting money on fire?! (strangely, some LWers in the comments seemed genuinely surprised that this was the case when shown the data, just how unaware are these people?) Oliver tires and fails to do Olympic level mental gymnastics by saying TSMC and NVDIA are making money, so therefore AI is extremely profitable. In the same way I presume gambling is extremely profitable for degenerates like me because the casino letting play is making money. I rank the people of LW as minimally truth seeking and big dumb out of 10. Also weird fun little fact, in Daniel K’s predictions from 2022, he said by 2023 AI companies would be so incredibly profitable that they would be easily recuperating their training cost. So I guess monopoly money that you can’t see in any earnings report is the official party line now?

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      I wouldn’t argue with someone who said reasoning models are a substantial advance

      Oh, I would.

      I’ve seen people say stuff like “you can’t disagree the models have rapidly advanced” and I’m just like yes I can, here: no they didn’t. If you’re claiming they advanced in any way please show me a metric by which you’re judging it. Are they cheaper? Are they more efficient? Are they able to actually do anything? I want data, I want a chart, I want a proper experiment where the model didn’t have access to the test data when it was being trained and I want that published in a reputable venue. If the advances are so substantial you should be able to give me like five papers that contain this stuff. Absent that I cannot help but think that the claim here is “it vibes better”.

      If they’re an AGI believer then the bar is even higher, since in their dictionary an advancement would mean the models getting closer to AGI, at which point I’d be fucked to see the metric by which they describe the distance of their current favourite model to AGI. They can’t even properly define the latter in computer-scientific terms, only vibes.

      I advocate for a strict approach, like physicist dismissing any claim containing “quantum” but no maths, I will immediately dismiss any AI claims if you can’t describe the metric you used to evaluate the model and isolate the changes between the old and new version to evaluate their efficacy. You know, the bog-standard shit you always put in any CS systems Experimental section.

      • BigMuffN69@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        1 month ago

        To be clear, I strongly disagree with the claim. I haven’t seen any evidence that “reasoning” models actually address any of the core blocking issues- especially reliably working within a given set of constraints/being dependable enough to perform symbolic algorithms/or any serious solution to confabulations. I’m just not going to waste my time with curve pointers who want to die on the hill of NeW sCaLiNG pArAdIgM. They are just too deep in the kool-aid at this point.

        • o7___o7@awful.systems
          link
          fedilink
          English
          arrow-up
          0
          ·
          29 days ago

          I’m just not going to waste my time with curve pointers who want to die on the hill of NeW sCaLiNG pArAdIgM. They are just too deep in the kool-aid at this point.

          The singularity is near worn-out at this point.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      It’s kind of a shame to have to downgrade Gary to “not wrong, but kind of a dick” here. Especially because his sneer game as shown at the end there is actually not half bad.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      1 month ago

      Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you’re right. Gary Marcus has been taking victory laps over a bar set so so low by promptfarmers and promptfondlers. Also, side note, his negativity towards LLM hype shouldn’t be misinterpreted as general skepticism towards all AI… in particular Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches, it’s just his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of LLM hypsters.

      Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

      Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology and using P(some event) based on numbers pulled out of his ass. Maybe he’ll even start to be “charitable” to meet their norms and avoid down votes (I hope not, his snark and contempt are both enjoyable and deserved, but I’m not optimistic based on how the skeptics and critics within lesswrong itself learn to temper and moderate their criticism within the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won’t actually learn much from him.

  • nfultz@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    1 month ago

    Aella popped up on doomscroll - https://youtu.be/r7WL6kaTJnw

    E: oh man the comments are great

    E2:

    1:08:02 There’s a lot of discussions among the rationalist community about the uneven distribution of IQ and its correlation with race. Why is this a topic that people fixate on if they’re also convinced that this ultra intelligence an AGI that’s like smarter than every human on the planet why are these marginal differences so important to people?

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      1 month ago

      Highlights from the comments: @wjpmitchell3 writes,

      Actual psychology researcher: the problem with IQ is A) We don’t really know what it’s measuring, B.) We don’t really know how it’s useful, C.) We don’t really know how context-specific it is, D.) When people make arguments about IQ, it’s often couched around prejudiced ulterior motives. No one actually cares about IQ; they care about what it’s a proxy measure of and we don’t have good evidence yet to say “This is a reliable and broadly-encompassing representation of intelligence.” or whatever else, so if you are trying to use IQ differences to say that there are race differences in intelligence, you have no grounds. The best you can say is there are race differences in this proxy measure that we’re still trying to understand. It’s dangerous to use an unreliable and possibly inaccurate representation of a phenomena to make policy changes or inform decisions around race. The evidence threshold has to be extremely high because we’re entering sensitive ethical spaces, which is something that rationalist don’t do well in because their utilitarian calculus has difficulty capturing the intangibles.

      @arnoldkotlyarevsky383 says,

      Nothing wrong with being self educated but she comes across as being not as far along as you would want someone to be in their self-education before being given a platform.

      @User123456767 observes,

      You can kind of tell she grew up as a Calvinist because she still seems to think she’s part of the elect she’s just replaced an actual big G God with some sort of AI God.

      @jaredsarnie3712 begins,

      I feel like so much of what she says boils down to finding bizarre hypothetical situations where child sexual abuse is morally acceptable.

      And from @Fruuuuuuuuuck:

      Doomscroll gooner arc

      • BigMuffN69@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        1 month ago

        One thing I have wondered about. The rats always have that graphic of the IQ of Einstein vs the village idiot being almost imperceptible vs the IQ of the super robo god. If that’s the case, why the hell do we only want our best and brightest doing “alignment research”? The village idiot should be almost just as good!

  • wizardbeard@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    0
    ·
    27 days ago

    Get your popcorn folks. Who would win: one unethical developer juggling “employment trial periods”, or the combined interview process of all Y Combinator startups?

    https://news.ycombinator.com/item?id=44448461

    Apparently one indian dude managed to crack the YC startup interview game and has been juggling being employed full time at multiple ones simultaneously for at least a year, getting fired from them as they slowly realize he isn’t producing any code.

    The cope from the hiring interviewers is so thick you could eat it as a dessert. “He was a top 1% in the interview” “He was a 10x”. We didn’t do anything wrong, he was just too good at interviewing and unethical. We got hit by a mastermind, we couldn’t have possibly found what the public is finding quickly.

    I don’t have the time to dig into the threads on X, but even this ask HN thread about it is gold. I’ve got my entertainment for the evening.

    Apparently he was open about being employed at multiple places on his linkedin. I’m seeing someone say in that HN thread that his resume openly lists him hopping between 12 companies in as many months. Apparently his Github is exclusively clearly automated commits/activity.

    Someone needs to run with this one. Please. Great look for the Y Combinator ghouls.

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        27 days ago

        I’m not shedding any tears for the companies that failed to do their due dilligence in hiring, especially not ones involved in AI (seems most were) and involved with Y Combinator.

        That said, unless you want to get into a critique of capitalism itself, or start getting into whataboutism regarding celebrity executives like a number of the HN comments do, I don’t have many qualms calling this sort of thing unethical.

        This whole thing is flying way too close to the "not debate club" rule for my comfort already, but I wrote it so I may as well post it

        Multiple jobs at a time, or not giving 100% for your full scheduled hours is an entirely different beast than playing some game of “I’m going to get hired at literally as many places as possible, lie to all of them, not do any actual work at all, and then see how long I can draw a paycheck while doing nothing”.

        Like, get that bag, but ew. It’s a matter of intent and of scale.

        I can’t find anything indicating that the guy actually provided anything of value in exchange for the paychecks. Ostensibly, employment is meant to be a value exchange.

        Most critically for me: I can’t help but hurt some for all the people on teams screwed over by this. I’ve been in too many situations where even getting a single extra pair of hands on a team was a heroic feat. I’ve seen the kind of effects it has on a team tthat’s trying not to drown when the extra bucket to bail out the water is instead just another hole drilled into the bottom of the boat. That sort of situation led directly to my own burnout, which I’m still not completely recovered from nearly half a decade later.

        Call my opinion crab bucketing if you like, but we all live in this capitalist framework, and actions like this have human consequences, not just consequences on the CEO’s yearly bonus.

        • YourNetworkIsHaunted@awful.systems
          link
          fedilink
          English
          arrow-up
          0
          ·
          27 days ago

          Nah, I feel you. I think this is pretty solidly a “plague on both their houses” kind of situation. I’m glad he chose to focus his apparently amazing grift powers on such a deserving target, but let’s not pretend that anything whatsoever was really gained here.

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        26 days ago

        I’m not 100% on the technical term for it, but basically I’m using it to mean: the first couple of months it takes for a new hire to get up to speed to actually be useful. Some employers also have different rules for the first x days of employment, in terms of reduced access to sensitive systems/data or (I’ve heard) giving managers more leeway to just fire someone in the early period instead of needing some justification for HR.

        • V0ldek@awful.systems
          link
          fedilink
          English
          arrow-up
          0
          ·
          25 days ago

          Ah ok, I’m aware of what this is, just never heard “work trial” used.

          In my head it sounded like a free demo of how insufferable your new job is going to be

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      27 days ago

      Alongside the “Great Dumbass” theory of history - holding that in most cases the arc of history is driven by the large mass of the people rather than by exceptional individuals, but sometimes someone comes along and fucks everything up in ways that can’t really be accounted for - I think we also need to find some way of explaining just how the keys to the proverbial kingdom got handed over to such utter goddamn rubes.