Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Wew, Cory Doctorow sure is posting through it
https://pluralistic.net/2026/03/12/normal-technology/#bubble-exceptionalism
They’re not vibe-coding mission-critical AWS modules.
and
- It’s worse than that, they’re vibe coding critical operating system components
He knows how LLMs work, right? This really is just cope because he got called out for being weird about using them. Really fucking disappointing
Kind of wild that the guy who popularized “enshittification” as a term will die on the hill that the technology which drives the industrial enshittification of all human media is fine actually, because some people find the plugins useful.
Take “Morgellons Disease,” a psychosomatic belief that you have wires growing in your body, which causes sufferers to pick at their skin to the point of creating suppurating wounds. Morgellons emerged in the 2000s, but the name refers to a 17th-century case-report of a patient who suffered from a similar delusion:
Nitpick, but this is unusually sloppy for Doctorow. 1) People with Morgellons don’t believe they have wires growing out of sores, but fibres (which upon examination turn out to be cotton from their clothes). 2) The original Morgellons is a putative children’s disease «wherein they critically break out with harsh Hairs on their Backs, which takes off the Unquiet Symptomes of the Disease, and delivers them from Coughs and Convulsions.» Which is quite different from the modern condition, whose sufferers have skin sores anywhere on the body with fibrous material looking like lint, dandelion fluff etc., and which is not particularly associated with convulsions. And 3) The association between the two was made by Mary Leitao, a mother who believes her son suffers from the disease and has gone to countless doctors and media outlets trying to prove it’s real. So it’s an attempt to legitimise the postulated disease by cherry-picking something “historical” that vaguely resembles it.
the Pentagon’s CTO has AI psychosis now. sighhhhhhhhh
The whole argument can just be countered with: “if the Pentagon believes Claude is sentient and a danger to the military, then why make a deal with OpenAI to use ChatGPT, an LLM similar to Claude? Wouldn’t that be in just as much danger of becoming sentient? And why are Pete Hegseth and Donald Trump planning to force Anthropic to comply after 6 months if they believe Claude shouldn’t be in the military? Why did you ask Anthropic to let you use Claude for mass surveillance and autonomous weapons if you believed it was sentient and dangerous?”
It just reeks of bullshit. “uhm actually we made Anthropic a supply chain risk because Claude is actually very dangerous and not because we’re doing banana republic shit to anyone who disagrees with us. we are a very responsible and safe government. please dont impeach trump.”
Reading comments because I was bored, and had the misfortune to stumble upon this horribly formatted piece of work allegedly written by Claude
Systemd
Jesus.
I’ve been advocating for a hall of fame of projects that explicitly reject LLMs; ctrl+f “Gentoo” on this very comment thread for the few examples I heard about.
Eh, straight pip with venv and pip-tools for support worked fine anyway. Edit: wrong uv! As for systemd… time to look at the BSDs? Was Debian among the anti-slop projects? Would be nice if they took an interest in preventing the slopification of one of their core systems.
Different UV! Libuv is the event loop/scheduler that powers Node.js. Could be a funky new way to compromise a whole bunch of node applications
Edit: typo - although “nose applications” being compromised sounds bad too.
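For the curious, a minimal sketch of the scheduling libuv gives Node.js, assuming a Node runtime (the ordering below is a standard consequence of libuv’s phase-based loop, not anything specific to the compromised package):

```javascript
// Sketch: libuv drives Node's event loop in phases. process.nextTick
// callbacks drain before the loop advances, and setImmediate callbacks
// run FIFO in the loop's "check" phase.
const order = [];

setImmediate(() => order.push('immediate')); // queued for the check phase
process.nextTick(() => order.push('nextTick')); // drained before the loop runs
order.push('sync'); // runs first, during initial script execution

setImmediate(() => {
  // By now every earlier callback has fired:
  console.log(order.join(' -> ')); // sync -> nextTick -> immediate
});
```

Which is why anything wormed into that loop sits underneath basically every async operation a node app performs.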
Ah, thanks! My expectations of node aren’t much affected I guess. Bun.js maybe?
they’re both worse
new development in ontology: “the ontology that makes ai models valuable is american”
I was low-key hoping for a technical philosophical article, which argues that to find any of this shit useful you need a distinctly american understanding of reality.
I mean, given how the current guy took a chainsaw to American soft power, industrial capacity, economic prospects, and so on, I guess our wildly overfunded military is probably the only comparative advantage we unambiguously hold onto.
Actually the race-realism use last week, combined with this one, makes me realize that for them it’s just a fancy way of saying “world-view” [or what they consider to exist, and be true, which is not the craziest use of the word, but I would say unhelpful, and probably a small in-group marker].
It’s just a way of calling biases/prejudice legitimate.
And you know what, inasmuch the models have a “world-view” it IS annoyingly american in many ways. (at least the wrong kind of american.)
you gotta give him a morsel of credit, he’s got his buzzword and he’s stickin’ to it
“Our lethal capacities. Our ability to fight war.”
These are two different things. But I fear he doesn’t get that.
Anybody else having problems with archive.is and its variants? I keep getting into an infinite captcha loop. I already tried making it a DNS-over-HTTPS exception in Firefox, which worked once.
E: tried a different browser, and same problem. Same on phone, it does work going from wifi to mobile however.
E2: I seem to have fixed it, oddly, by rebooting my router. Which makes no sense to me.
Depending on your DNS provider, you may not be able to use archive.today without infinite captchas. I believe Cloudflare (1.1.1.1) and NextDNS are affected this way. Google (8.8.8.8) apparently is not.
That is annoying. But thanks.
That specific instance of Archive Today seems to have been taken over by activists who edited their copies of some pages and performed a DDoS attack (although all I know comes from social media posts and news stories). https://www.avclub.com/archiveis-under-fbi-investigation
Aren’t they all run by the same people? To be clear, I also tried some of the archive variants.
Chris Stokel-Walker at Fast Company reports:
High-level information about the private work of students and staff using ChatGPT Edu at several universities can be viewed by thousands of colleagues across their institutions due to a misunderstanding of what is being shared, according to a University of Oxford researcher who identified the issue.
The problem affects Codex Cloud Environments in ChatGPT Edu and exposes the names and some metadata associated with the public and private GitHub repositories that users within a university have connected to their ChatGPT Edu accounts. […] “Anyone at the university, or a large number of people at least—including me—can see a number of projects [people have] been working on with ChatGPT,” says Luc Rocher, an associate professor at the University of Oxford, who identified the issue and raised it with both the University of Oxford and OpenAI through responsible disclosure. He later approached Fast Company after what he felt was an inadequate response from both.
Just one of many reasons that the mere existence of “ChatGPT Edu” means that many people need to be tased in the nads
OT: an interesting musing I found on fedi:

DAIR, the AI-critical research organization founded by Timnit Gebru, is looking for a communications lead
I’m suing Grammarly over its paid AI feature that presented editing suggestions as if they came from me - and many other writers and journalists - without consent.
State law requires consent before someone’s name can be used for commercial purposes.
And here is the complaint, via evacide.
Has anyone heard of the Internal Family Systems Model? One of the CFAR founders said he relied on it when he was designing self-help workshops. The IFS encourages you to see yourself as a system of entities and talk to them separately, and that reminds me of Ziz Lasota’s two-hemispheres theory and Michael Vassar’s jailbreaking.
Actually had a therapist introduce me to it. It can be a useful model, and if I were to boil it down to a single adage it would be: pay attention to how you talk to yourself. But I don’t think it is as simple as saying you contain a multitude, each part was developed to help the person survive and as you get older you might collect more parts and suppress others. It actually reminded me a bit of Lacan and the developmental stages.
I recall there was a recent critical piece about it but I can’t remember where it was.
https://archive.ph/7L1KK Here it is!
The Cut seems to like articles on cults and abuse within small groups, since they have an article on the Zizians, and one on a Neo-Tantric sex group where Aella would feel at home
I’ve heard of it, including in some outlets that (at the distance I am to it) seemed to pass the sniff test
but I’ve also seen it kick around TPOT
so I’d definitely want to seek out the advice of an expert if I cared about it
I heard somewhere that “there is no unitary self” can be a Buddhist teaching and TPOT draws on Western Buddhism. There is work to be done figuring out where they got their eclectic mix of techniques and terminology.
It’s Hofstadter, isn’t it? That’s the author who I recognize most in these discussions, followed closely by Hermann Hesse.
Well, I think the Buddhist idea that the self is an illusion goes back 2500 years or more, but Douglas Richard Hofstadter might have introduced nerdy American sci-fi fans to the idea.
I have time to quote at you now. Ziz’s thoughts about dual-core brains sound like the thought experiments from I Am a Strange Loop. In Chapter 15, “Entwinement”, Hofstadter introduces the Twinwirld thought experiment: imagine a world where almost everybody is an identical twin, each pair of twins is given one name, twins go everywhere together, and identity is oriented around pairs instead of individuals. Quoting p215 from my copy:
In Twinwirld, there is an unspoken and obvious understanding that the basic units are pairsons, not left or right halves, and that even though each dividual consists of two physically separate and distinguishable halves, the bond between those halves is so tight that the physical separateness doesn’t much matter. That everytwo is made of a left and right half is just a familiar fact about being alive, taken for granted like the fact that every half has two hands, and every hand has five fingers. Things have parts, to be sure, but that doesn’t mean that they don’t have integrity as a whole!
The entire section is written like this. I’ve read a bit of the Zizian lore and it sounds like it was lifted straight out of this chapter with words replaced. p216 in particular really shows off the Hofstadter tendency towards neopronouns:
The pronoun “you” also exists in Twinwirld, but it is plural only, which means that it is never used for addressing just one other dividual — it always denotes a group. “Do you know how to ski?” might be asked of an entire family, but never of just one twild or one pairent.
A young pairson in Twinwirld grows up with a natural sense of being just one unit, even though twey consist of two disconnected parts.
I don’t really know about Vassar’s writing. I do think that jailbreaking is somewhat related. I think that Hofstadter lays out their entire thesis in the first paragraph of Chapter 18, “The Blurry Glow of Human Identity”, p259:
Among the beliefs most universally shared by humanity is the idea “One body, one person”, or equivalently, “One brain, one soul”. I will call this idea the “caged-bird metaphor”, the cage being, of course, the cranium, and the bird being the soul. Such an image is so self-evident and so tacitly built into the way we all think about ourselves that to utter it explicitly would sound as pointless as saying, “One circle, one center” or “One finger, one fingernail”; to question it would be to risk giving the impression that you had more than one bat in your belfry. And yet doing precisely the latter has been the purpose of the past few chapters.
The second paragraph, right after that, might as well be quoted from LW. Check it out:
In contrast to the caged-bird metaphor, the idea I am proposing here is that since a normal adult human brain is a representationally universal “machine”, and since humans are social beings, an adult brain is the locus not only of one strange loop constituting the identity of the primary person associated with that brain, but of many strange-loop patterns that are coarse-grained copies of the primary strange loops housed in other brains. Thus, brain 1 contains strange loops 1, 2, 3, and so forth, each with its own level of detail. But since this notion is true of any brain, not just of brain 1, it entails the following flip side: Every normal adult human soul is housed in many brains at varying degrees of fidelity, and therefore every human consciousness or “I” lives at once in a collection of different brains, to different extents.
Buddhism’s not part of the book. It is part of the roots of IFS, though! So I think that you’d be better served looking at IFS or the ways that people quote Hesse if you want to find those Buddhist influences.
Silicon Valley is buzzing about this new idea: AI compute as compensation
These people are genuinely unhinged.
As the recent Harper’s article says:
"…people who should be in The Hague are giving [startups] twenty million dollars. Something bad is gonna happen here, something really fucking bad is gonna happen…”
this is just wages paid in crypto but adapted to new era in a way that doesn’t make sense
Man, that Harper’s piece is a full DnD alignment chart of the most online Bay Area weirdos you’ve ever seen.
“Selling your soul to the company store is not just fun, it is also invigorating!”
Back in 2019, Ben Pace of Lightcone said that CFAR and Lightcone were one legal entity, but two boards with no overlap. Did CFAR + Lightcone really spend $22 million on real estate in Berkeley without spending a few grand to create a separate nonprofit and separate the finances? In 2024, CFAR still had the real estate and the mortgage on its books. https://www.lesswrong.com/posts/eR7Su77N2nK3e5YRZ/the-lesswrong-team-is-now-lightcone-infrastructure-come-work-3
I have never opened a US business bank account, but I would think it would be hard to keep the bank accounts separate if one organization has no independent legal existence, and transactions in the millions or tens of millions tempt the most righteous person to stick his fingers in the till.
Man, I wish I had enough money to fuck around with nonprofit shenanigans
It’s theoretically possible to keep them separate, but I would take it as evidence that, regardless of intentions, CFAR and Lightcone are sufficiently closely linked to be basically the same organization. I mean, if there’s not a separate legal entity then I would assume anything involving money is going to require the same person or persons to sign off on the transaction, regardless of what the board looks like.
Forming a single legal entity would have made it hard to protect the other projects if the CFAR side had lost a lawsuit over abuse of a minor at a CFAR event, or Lightcone had lost a judgment over taking money from FTX and had to sell the Rose Garden property. I know these people don’t do “fear of frequent consequences of ordinary human weaknesses”, but that is a big risk.
I also wonder who served as treasurer and bookkeeper for each project. If one person served both projects, he or she could have caused all kinds of trouble, even if there were separate bank accounts.
FT reports from Amazon insiders that they’re investigating the role AI-assisted development has played in a spate of recent issues across both the store and AWS.
FT also links to several previous stories they’ve reported on related issues, and I haven’t had the time to breach the paywalls to read further, but the line that caught my eye was this:
The FT previously reported multiple Amazon engineers said their business units had to deal with a higher number of “Sev2s” — incidents requiring a rapid response to avoid product outages — each day as a result of job cuts.
To be honest, this is why I’m skeptical of the argument that the AI-linked job losses are a complete fabrication. Not because the systems are actually there to directly replace the lost workers, but because the decision-makers at these companies seem to legitimately believe that these new AI tools will let their remaining workforce cover any gaps left by the layoffs they wanted to do anyways. It sounds like Amazon is starting to feel the inverse relationship between efficiency and stability, and I expect it’s only a matter of time before the wider economy starts to feel it too. Whether the owning class recognizes what’s happening is, of course, a different story.
So oil prices are down again, and on nothing but a promise from Trump and a promise from the EU. The economy has proved remarkably resilient, at least from where I sit; the attack on Iran is like, wild nonsense number 17 that the US regime did that I thought would trigger a major recession, and didn’t.
I mean don’t get me wrong, things are much worse now than 3 years ago, clearly. But they’re not like, Great Depression worse. They’re not even 2008 worse. It’s just a certain level of degradation (cost of living is higher, purchasing power is lower, concentration of wealth is higher etc.) that people got used to as the new normal. People can get used to lots of things.
To make the IT analogy, I think the global economy is like Twitter. Sure, it feels like a Jenga tower held up by thoughts and prayers, but it’s holding up. When Musk took over I really did think his catastrophic management philosophy would completely break Twitter, but no, it trudges on. Yes, moderation is now nonexistent, and I’m told it’s down more often, and often in “soft downtime” like notifications not working, or DMs, or some other feature, or it’s working but slow, and so on. But clearly the site is up most of the time and more or less functional. Users just get used to degraded quality as the new normal.
I predict AWS will 1) get slower and costlier thanks to “AI”, with higher downtime, at higher stress for the workers; 2) the leadership will refuse to see or admit or even consciously be aware of this; 3) the worsened services will be the new normal. I predict similar developments for the socioeconomic situation of the world, too; though I’m not ruling out a spiral into complete recession, either.
I somewhat agree although when the “other shoe drops” and these things start impacting the money men they may start to realise AI isn’t the magic cure they thought it was (he says kind of hopefully)
6 hours of downtime for Amazon shopping. A very simple back-of-a-napkin calculation: they made $213.4bn in sales in Q4 2025. Divide that by 90 days, then by 24 hours, and multiply by 6… we’re talking roughly a $0.59bn loss for 6 hours of downtime. That is not an insignificant amount of money. I imagine most bosses would be screaming for heads having lost that much in a sane, non-hyper-scaled business.
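The napkin math above, written out (same figures as the comment; assumes revenue is spread uniformly across the quarter, which real traffic certainly is not):

```python
# Back-of-napkin outage cost: spread reported Q4 2025 sales evenly
# over the quarter and price a 6-hour outage at the hourly rate.
quarterly_sales_bn = 213.4   # reported Q4 2025 sales, in $bn
days_in_quarter = 90
outage_hours = 6

per_hour_bn = quarterly_sales_bn / (days_in_quarter * 24)
loss_bn = per_hour_bn * outage_hours
print(f"~${loss_bn:.2f}bn lost")  # ~$0.59bn lost
```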
It’s also a trend that I don’t see stopping without a major structural change. I don’t think there’s a point at which they’re going to say “we’ve cut enough corners and are going to stop risking stability and service degradation.” The principal structure driving the economy, especially in the tech sector, is organized around looking for new corners to cut and insulating the people who make those choices from accountability for their actual consequences.
to follow this one up: there is now a new study about AI agents being dogshit at keeping code working over the long term
Unfortunately the paper structure screams “AI senpai, notice me!”
AI coding agents seem bad at this job for now, but if you optimize for our benchmark…