Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine’s Day!)
Somebody vibe-coded an init system/service manager written in Emacs Lisp, seemingly as a form of criticism through performance art, and wrote this screed in the repo describing why they detest AI coding practices: https://github.com/emacs-os/el-init/blob/master/RETROSPECTIVE.md
But then they include this choice bit:
All in all, this software is planned to be released to MELPA because there is nothing else quite like it for Emacs as far as service supervision goes. It is actually useful – for tinkerers, init hackers, or regular users who just want to supervise userland processes. Bugs reported are planned to be hopefully squashed, as time permits.
Why shit up the package distribution service if you know it’s badly-coded software that you don’t actually trust? 90% of the AI-coding cleanup work is going to be purging shit like this from services like npm and pip, so why shit on Emacs users too? Pretty much undermines what little good might come out of the whole thing, IMO.
Every time an LLM-related package appears on ELPA I die a little more inside.
https://harpers.org/archive/2026/03/childs-play-sam-kriss-ai-startup-roy-lee/
A slice of life article about the futility of “highly agentic” people, their sperm races, and Donald Boat. Scott A makes a cameo where he dispenses crackers.
Somehow I had missed the boat on Donald Boat and now I have so many questions. Absolutely wild read.
Absolutely demented piece.
https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism
I just did the dumbest thing of my career to prove a much more serious point
I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs
People are using this trick on a massive scale to make AI tell you lies. I’ll explain how I did it
I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.
It turns out changing what AI tells other people can be as easy as writing a blog post on your own website
I didn’t believe it, so I decided to test it myself
I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously
One day later ChatGPT, Gemini and Google Search’s AI Overviews were telling the world about my talents
I wouldn’t call it a hack; this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn’t a viable business.
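For anyone who missed the reference: the quip is about PageRank, which scores a site by the rank of the sites that link to it, so a self-promoting post on your own blog shouldn’t outrank everything else. A toy power-iteration sketch over a made-up three-site link graph (all site names hypothetical):

```python
# Minimal PageRank sketch: iterate until each site's rank reflects
# the ranks of the sites linking to it. Names are invented.
sites = ["blog", "newspaper", "aggregator"]
links = {  # who links to whom
    "blog": ["aggregator"],
    "newspaper": ["aggregator", "blog"],
    "aggregator": ["newspaper"],
}
d = 0.85  # damping factor from the original PageRank formulation
rank = {s: 1 / len(sites) for s in sites}
for _ in range(50):  # power iteration to (approximate) convergence
    new = {s: (1 - d) / len(sites) for s in sites}
    for src, outs in links.items():
        for dst in outs:
            new[dst] += d * rank[src] / len(outs)
    rank = new
print(rank)  # the well-linked sites end up on top, not the lone blog
```

The point being: a lone site saying nice things about itself gets almost no weight unless reputable sites link to it, which is exactly the signal the AI answers above skipped.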
like everyone I’m schadenfreuding at the reveal that Amazon outages are due to vibe coding after all. but my bully laughing isn’t that loud because what I am thinking of is when Musk bought Twitter and fired 3/4 of the workforce.
because like, a lot of us predicted total catastrophic collapse but that didn’t actually happen. what happened is that major outages that used to be rare now happen every so often, and “micro-outages” like not loading notifications or something happen all the time, and there’s no moderation, and everything takes longer etc. and all of that is just accepted as the new normal.
like, I remember waiting for images to load on dialup, we can get used to almost anything. I’m expecting slopified software to significantly degrade stability, performance, security etc. across the board, and additionally tie up a large part of human labour in cleaning up after the bots (like a large part of the remaining X workforce now spends all day putting out fires), but instead of a cathartic moment of being proved right that LLM code sucks, the degraded quality of service is just accepted as the new normal and a few years down the road nobody even remembers that once upon a time we had almost eradicated SQL injections.
this is a lot like my expectation. ai never goes away, it never becomes revolutionary, it just makes everything worse and supercharges scams and theft and spam and means of social and nonsocial murder forever with maybe some real but kind of marginal usecases idk
ai is crypto 2 episode 373275
SQL Injections 🤝 Measles => Big Comeback Stories of 2026
Tante.cc writes about Cory using a ‘Drunk Uncle’-style argument to defend his LLM usage (and going after the left using strawmen).
(To counter one of Cory’s arguments: if disliking LLMs were just about the people who run them, the people against it would have stayed in sneerclub).
That was a good read.
It’s not “unethical” to scrape the web in order to create and analyze data-sets. That’s just “a search engine”
Conflating what LLMs do, and what goes into LLM web scraping, with “a search engine” is messed up. The article he links about scraping is mostly about how badly copyright works and how analysing trade-secret-walled data can be beneficial both to consumers and science but occasionally bad for citizen privacy, which you’ll recognize as mostly irrelevant to the concerns people tend to have about LLM training-data scrapers DDoSing the fuck out of everything, and all the rest of the stuff tante does a good job of explaining.
Cory also provides this anecdote:
As a group of human-rights defending forensic statisticians, HRDAG has always relied on cutting edge mathematics in its analysis. With its Colombia project, HRDAG used a large language model to assign probabilities for responsibility for each killing documented in the databases it analyzed.
That is, HRDAG was able to rigorously and legibly say, “This killing has an X% probability of having been carried out by a right-wing militia, a Y% probability of having been carried out by the FARC, and a Z% probability of being unrelated to the civil war.”
The use of large language models — produced from vast corpuses of scraped data — to produce accurate, thorough and comprehensible accounts of the hidden crimes that accompany war and conflict is still in its infancy. But already, these techniques are changing the way we hold criminals to account and bring justice to their victims.
Scraping to make large language models is good, actually.
what the actual shit
edit: I mean, he tried transformer-powered voice-to-text and liked it, and now he’s all in on the “LLMs are a rigorous and accurate tool, actually” bandwagon?
Also the web scraping article is from 2023 but CD linked it in the recent pluralistic post so I assume his views haven’t changed.
I was a bit alarmed by this; a client brought in that Colombia data for their dissertation last month and did not mention this. I looked up the paper (https://www.arxiv.org/abs/2509.04523): what they /actually/ did was use GPT 4o-mini only for feature extraction, then stack the features into a random forest in a supervised setting to dedupe. This is very different from what he described. And the GPT features weren’t even the most important ones; the RF preferred cosine similarity of articles, a decidedly not-large approach…
That he went from that all the way to it’s mostly ok when sam altman steals all your data, misrepresents it and then steals all your traffic is… bad.
At any rate it’s definitely good to know that that war-crime forensics data project isn’t quite the unintentional shambles Cory makes it out to be.
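If I’m reading the paper’s setup right, the “LLM” part is just one feature column in an otherwise bog-standard supervised pipeline, sitting next to a cosine-similarity column that ends up doing more work. A hedged sketch of that shape (all records, labels, scores, and feature choices here are invented for illustration, not the paper’s actual data or pipeline):

```python
# Sketch: LLM-derived scores as one feature among others, fed to a
# random forest that decides whether two records are duplicates.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# candidate record pairs: (text_a, text_b, made-up llm_score, is_duplicate)
pairs = [
    ("militia attack in Cauca", "attack by militia, Cauca dept.", 0.9, 1),
    ("FARC ambush near Bogota", "flooding in Medellin", 0.1, 0),
    ("killing attributed to paramilitaries", "paramilitary killing reported", 0.8, 1),
    ("road accident, two dead", "FARC column sighted", 0.2, 0),
]

vec = TfidfVectorizer().fit([t for a, b, _, _ in pairs for t in (a, b)])
X = np.array([
    [cosine_similarity(vec.transform([a]), vec.transform([b]))[0, 0], s]
    for a, b, s, _ in pairs
])
y = np.array([d for *_, d in pairs])

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# feature_importances_ shows how much each column drives the forest;
# per the paper as described above, the cosine feature reportedly
# mattered more than the GPT-derived ones.
print(rf.feature_importances_)
```

Which is to say: the forest, not the language model, is the thing assigning probabilities, and you could swap the GPT column for any other extracted feature without changing the architecture.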
This one hurts. Maybe CD can be brought back around but oof.
In the post he keeps referring to Ollama as an LLM (it’s a desktop app that runs a local server, letting you download and interface with a local LLM via CLI or HTTP API), so it’s possible he’s just that far behind in his technical understanding of LLMs that he’s fallen to taking the wrong people’s word for it.
The post certainly reads like he doesn’t even know which local LLM he’s using, let alone what it takes to make one.
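For reference, Ollama’s whole job is that local server: by default it listens on port 11434 and you talk to it over plain HTTP; the LLM is whatever model weights you told it to pull. A minimal sketch (the model name “llama3” is just an example, and the actual request is commented out since it needs a running Ollama instance):

```python
# Build a request against Ollama's local HTTP API to show the division
# of labour: Ollama is the server/runner, the model is a separate thing
# you name in the payload.
import json
import urllib.request

payload = {"model": "llama3", "prompt": "Why is the sky blue?", "stream": False}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment with Ollama running; the JSON reply carries the generated
# text in its "response" field.
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
print(req.full_url)
```

So calling “Ollama” an LLM is like calling your web browser a website; it’s the sort of confusion you’d expect from someone repeating other people’s takes.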
as someone from a colonial country that never got the chance to partake in the wealth of fossil fuel society but will take the brunt of its consequences as rich countries continue to burn carbon, what LLMs taught me is that “energy waste by the First World fucks up the Third, even more” does not even register as an ethical argument to the First World. like, it’s some sort of purity argument not even worth considering, an extremist position of arguing abstractions and future hypotheticals, rather than, say, 478 cities in my country flooding with abnormal weather two years ago etc.
saw a family member today for the first time in three years. they immediately told me “with your background bro you should just go work in AI and get super rich.”
told them that the ai shit doesn’t work and that everything involving LLMs is downright unethical. they respond
“i had a boss that gave me the best advice: you can either be right or you can be rich.”
recently, i saw someone use the phrase “got my bag nihilism” and i feel it really captures the moment. i just don’t understand how people can engage in this kind of behavior and even live with themselves, let alone ooze pride. it’s repulsive.
(family member later outright admitted that his job is basically selling things to companies that they don’t need.)
I unfortunately do understand. I think there are severe tradeoffs between living a good life and living a virtuous life. Most people usually compromise to lesser or greater degree and find ways to cope with that. Nihilism is one way.
To be fair it is really, really mentally taxing to be a young person who cares. You’re surrounded by a world that doesn’t. Everything is constructed to reward you if you simply stop. The effort to care is immense and the rewards are meager. The impact you can have on the world is so, so limited by your wealth, and wealth comes so, so easy if you just stop caring.
But you can’t. I mean, you can’t. If you stopped you wouldn’t be you anymore, it would destroy your soul. But it is gnawing. You could do the grift just for a bit. Save up $10k, maybe $20k. That’s life-changing money. How much good would it do to your family? Maybe you can forget that there are other families, ones you can’t see, that would be hurt. Well no. You can’t. You are better than that. And for that you will suffer.

i don’t think of myself as a young person (i’m closer to 40 than 30), but i agree with the sentiment. i often worry that it’s just don quixote energy and the windmills aren’t going to thank me when i’m in the ground with work experience that employers look at and scoff. 🤷
A worldview where one’s worth is measured by the balance in their bank account makes it really easy to flatten out morality.
Do you want Tylers Durden? Because this is how you get Tylers Durden.
OpenSlopware documents FOSS that sold out to LLMs. Is there an opposite of it, a hall of fame listing software that has unambiguously and vocally rejected LLM code, like the Zig programming language?
How AI slop is causing a crisis in computer science | Nature h/t naked capitalism
One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.
Let’s not call it “productivity” - to quote Bergstrom, twice as many papers is not the same as twice as much science.
There was an underlying tension in an academia, and a society, that takes “productivity” by itself as an end goal; the autogenerators are just the logical conclusion/extreme form of that. The tiny part of me that can still be optimistic hopes that this leads to a real good reexamination of what academia (and society) is even for.
Goodhart’s law in action.
Since the advent of ChatGPT in November 2022, the number of monthly submissions to the arXiv preprint repository has risen by more than 50% and the number of articles rejected each month has risen fivefold to more than 2,400 (see ‘Rejection rates climb’).
If I’m interpreting this right, the growth in the number of rejections is wildly outpacing the growth in submissions, which means not only are we getting a tsunami of slop but the bad papers are actively chasing away good ones.
Also your paper has to be truly irredeemable dogshit to get rejected from arXiv. Like you can post proofs of P=NP as long as it sounds kinda coherent. 2,400 monthly rejections is absurd.
Quick TL;DR of my Discord Age Verification Experience™:
Using my face multiple times didn’t work due to the AV shitting itself inside out, but setting my DOB via Family Center somehow did it
Absolute fucking clown fiesta, Jesus Christ
I grow increasingly frustrated every time one of these ponderous essays fails to mention the gutter racist and eugenicist beliefs of these people. This dilution of what ‘far right’ means only serves their interests, and it’s malpractice to not mention that the racism is foundational to their origin and organization as a movement.
I expect better of the New Yorker; it’s disappointing to see their writers following the path of the New York Times.
WD and Seagate confirm: Hard drives for 2026 sold out (because the AI datacentres have stolen them all)
idk if the bubble will pop or slowly deflate, but im certain that in 10 years we’ll look back at 2020s as the decade where tech stopped progressing in the way we know it - since we’re diverting all our resources to ai, there’s no longer any room left for anything else to grow
the 2010s crypto gpu shortage was the warning siren for this. it really hampered the growth of gpus because they permanently became so much more expensive - now the same is happening to memory, storage, and…well, gpus again! we’ve reached the point of reverse progress
2020s as the decade where tech stopped progressing in the way we know it
I mean, sure, but I think the underlying cause here is the end of Moore’s law and exponential growth of potential userbases as the world becomes fully connected. The Enshittocene can be viewed as a consequence of capital’s attempts to continue exponential growth while the fundamentals are no longer capable of sustaining it.
Indeed, fifteen years ago, Thailand had a horrific monsoon-induced flood displacing millions of people, and:
Thailand is the world’s second-largest producer of hard disk drives, supplying approximately 25 percent of the world’s production. Many of the factories that made hard disk drives were flooded, including Western Digital’s, leading some industry analysts to predict future worldwide shortages of hard disk drives. … As a result, most hard disk drive prices almost doubled globally, which took approximately two years to recover. Due to the price increase, Western Digital reported a 31 percent revenue increase and a more than doubled profit for fiscal year 2012.
As you say, we are no longer earning Moore’s Dividend and there is no longer opportunity in laying dark fiber for somebody else to rent or offering Facebook-only phones to reach the next billion users.
womp, and wait for it, womp
This was not such an effective venture.
Rip the stately home.
I mean it’s presumably still standing, just with a slightly cheaper set of owners ;)
nasb, video from a climate scientist going over the claims made by promptfondler CEOs