- cross-posted to:
- programming@programming.dev
Bit late to the party, but this should prolly be tagged “Paper” on pivot.
I want to put together a little pitch for the data-brained that AI is Not Good Actually®, and this is the most smoking gun I can think of.
ty!
As someone who has had to double-check people's code before, especially people who don't comment appropriately, I'd rather just write it all again myself than try to decipher what the fuck they were even doing.
Devs are famously bad at estimating how long a software project will take.
No, highly complex creative work is inherently extremely difficult to estimate.
Anyway, not shocked at all by the results. This is a great start that begs for larger and more rigorous studies.
You’re absolutely correct that the angle of that statement is bullshit. There’s also the fact that they want to believe making software is not highly complex creative work but somehow just working an assembly line, and that software devs are gatekeepers who don’t deserve respect.
“Devs are famously bad at estimating how long a software project will take.”
No, highly complex creative work is inherently extremely difficult to estimate.
Akshually… I’m on a dev team where about 60% of us are diagnosed with ADHD. So, at least in our case, it’s both.
If we didn’t have ADHD, we wouldn’t be able to do the work regardless.
We’re the only ones who can get hyper-focused, and also hyper-fixated, on why a switch statement is failing when it includes a for loop, until finding out there’s actually a compiler bug, and that if you leave a space after the bracket it somehow works correctly.
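For the curious, the shape in question was roughly this (a made-up minimal example, not the actual code; any conforming compiler treats both spellings of the brace identically):

```c
#include <stdio.h>

/* Hypothetical minimal shape of the pattern: a for loop nested
 * inside a switch case. With the buggy compiler, whitespace after
 * the brace allegedly changed codegen; the bug was the compiler's,
 * not this code's. */
static int sum_mode(int mode, const int *vals, int n)
{
    int sum = 0;
    switch (mode) {
    case 0:
        for (int i = 0; i < n; i++)
            sum += vals[i];
        break;
    default:
        sum = -1;
        break;
    }
    return sum;
}

int main(void)
{
    const int v[] = { 1, 2, 3 };
    printf("%d\n", sum_mode(0, v, 3)); /* prints 6 */
    return 0;
}
```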
That was a fun afternoon.
Gross, which compiler was that?
I managed to make an assembler segfault with seven bytes
deleted by creator
LLM-assisted entry-level developers merely need to be half as good as expert human unassisted developers
- This isn’t even close to existing.
- The theoretical cyborg-developer at that skill level would surely be introducing horrible security bugs or brittle features that don’t stand up to change
- Sadly I think this is exactly what many CEOs are thinking is going to happen, because they’ve been sold on OpenAI and Anthropic lies that it’s just around the corner
deleted by creator
“I’m not scared an LLM is going to be able to replace me. I’m scared that CEOs are going to think that”
AI->cocaine filter: Cocaine isn’t going to replace you. Someone using cocaine is going to replace you.
This is very much a “nine women can make a baby in one month” situation.
The idea that there can even be two half-as-good developers is a misunderstanding of how anything works. If it worked like that, the study would be a dud, because people could just run two AIs and get 100% of an expert’s productivity.
Entry-level devs ain’t replacing anyone. One senior dev is going to be doing the work of a whole team.
deleted by creator
half as good as expert human
60% of what a senior can do
is there like a character sheet somewhere so i can know where i fall on this developer spectrum
It’s going to be your INT bonus modifier, but you can get a feat that also adds the WIS modifier
For prolonged coding sessions you do need CON saving throws, but you can get advantage from drinking coffee (once per short rest)
but you can get advantage from drinking coffee (once per short rest)
I must have picked up a feat somewhere because I hit that shit way more than once per short rest
But when a mid-tier or entry level dev can do 60% of what a senior can do
This simply isn’t how software development skill levels work. You can’t give a tool to a new dev and have them do things experienced devs can do that new devs can’t. You can maybe get faster low tier output (though low tier output demands more review work from experienced devs so the utility of that is questionable). I’m sorry but you clearly don’t understand the topic you’re making these bold claims about.
I think more low tier output would be a disaster.
Even pre-AI I had to deal with a project where they shoved testing and compliance at juniors for a long time. What a fucking mess it was. I had to go through every commit mentioning Coverity, because they had a junior fixing Coverity-flagged “issues”, after I had spent at least 2 days debugging a memory corruption crash caused by one such “fix”.
And don’t get me started on tests. 200+ of them, and none caught several regressions in the handling of parameters that are shown early in the frigging how-to.
With AI, all the numbers would be much larger: more commits “fixing Coverity issues” (and, worse yet, fixing “issues” that the LLM sees in the code), more so-called “tests” that don’t actually flag any real regressions, etc.
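For flavor, a made-up miniature of that failure mode (my sketch, not the project’s actual code): the analyzer flags a possible leak on an early-return path, and the “fix” frees a buffer the cleanup path still owns, so the crash shows up far from the change.

```c
#include <stdlib.h>
#include <string.h>

/* The parser owns a heap copy of the input until cleanup. */
struct parser {
    char *buf;
};

static int parse_init(struct parser *p, const char *src)
{
    size_t len = strlen(src) + 1;
    p->buf = malloc(len);
    if (!p->buf)
        return -1;
    memcpy(p->buf, src, len);
    return 0;
}

static int parse_run(struct parser *p)
{
    if (p->buf[0] == '\0') {
        /* The junior's "fix" for a RESOURCE_LEAK-style report:
         * frees memory that parse_cleanup() frees again later,
         * i.e. a double free that crashes far from this line. */
        free(p->buf);
        return -1;
    }
    return 0;
}

static void parse_cleanup(struct parser *p)
{
    free(p->buf); /* double free if parse_run() took the error path */
    p->buf = NULL;
}

int main(void)
{
    struct parser p;
    if (parse_init(&p, "") == 0) {
        parse_run(&p);     /* empty input hits the "fixed" path */
        parse_cleanup(&p); /* undefined behavior: double free */
    }
    return 0;
}
```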
But when a mid-tier or entry level dev can do 60% of what a senior can do, it’ll be a great way to cut costs.
Same as how an entry-level architect can build a building 60% as tall, and that’ll last 60% as long, right?
Edit: And an entry-level aerospace engineer with AI assistance will build a plane that’s 60% as good at not crashing.
I’m not looking forward to the world I believe is coming…
Get 2 and the plane will be 120% as good!
In fact, if children with AI are a mere 1% as good, a school with 150 children can build a plane that’s 150% as good!
I am sure this is how project management works, and if it is not maybe Elon can get Grok to claim that it is. (When not busy praising Hitler.)
this brooks no argument and it’s clear we should immediately throw all available resources at ai so as to get infinite improvement!!~
(I even heard some UN policy wonk spout the AGI line recently 🙄)
Yeah, the glorious future where every half-as-good-as-expert developer is now only 25% as good as an expert (a level of performance also known as being “completely shit at it”), but he’s writing 10x the amount of unusable shitcode.
deleted by creator
Okay, but that is different from the argument that entry-level developers only need to be half as good to deliver a working product.
Are these entry-level developers that are merely half as good as expert human unassisted developers in the room with us right now?
deleted by creator
as one of the people representing the “hero group” (for lack of a better term) your comment references: eh. I didn’t start out with all this knowledge and experience. it built up over time.
it’s more about the mode of thinking and how to engage with a problem, than it is about specific “highly skilled” stuff. the skill and experience help/contribute, they refine, they assist in filtering
the reason I make this comment is because I think it’s valuable that anyone who can do the job well gets to do the thing, and that it’s never good to gatekeep people out. let’s not unnecessarily contribute to imposter syndrome
deleted by creator
the astute reader may note a certain part of my comment addressed a particular aspect of this
deleted by creator
You’re the one bringing up popularity in response to a substantial argument. I hope you’re okay…
and upon hearing the lesson, the journeyman went to the pub
deleted by creator
ahahaha holy shit. I knew METR smelled a bit like AI doomsday cultists and took money from OpenPhil, but those “open source” projects and engineers? One of them was LessWrong.
Here’s a LW site dev whining about the study; he was in it, and I think he thinks it was unfair to AI
I think if people are citing this in another 3 months’ time, they’ll be making a mistake
dude $NEXT_VERSION will be so cool
so anyway, this study has gone mainstream! It was on CNBC! I urge you not to watch that unless you have a yearning need to know what the normies are hearing about this shit. In summary, they are hearing that AI coding isn’t all that actually and may not do what the captains of industry want.
around 2:30 the two talking heads ran out of information and just started incorrecting each other on the fabulous AI future, like the worst work lunchroom debate ever but it’s about AI becoming superhuman
the key takeaway for the non-techie businessmen and investors who ever take CNBC seriously: the bubble starts not going so great
Here’s a LW site dev whining about the study; he was in it, and I think he thinks it was unfair to AI
There’s a complete lack of introspection. You’d think the obvious conclusion to draw from a study showing that people’s subjective estimates of their productivity with LLMs were the exact opposite of reality would be to question his subjectively felt intuitions and experience, but instead he doubles down and insists the study must be wrong, and that surely, with the latest model and the best use of it, there would be a big improvement.
I think if people are citing this in another 3 months’ time, they’ll be making a mistake
In 3 months they’ll think they’re 40% faster while being 38% slower. And sometime in 2026 they will be exactly 100% slower - the moment referred to as “technological singularity”.
Yeah, METR was the group that made the infamous AI IS DOUBLING EVERY 4-7 MONTHS GRAPH where the measurement was 50% success at SWE tasks based on the time it took a human to complete it. Extremely arbitrary success rate, very suspicious imo. They are fanatics trying to pinpoint when the robo god recursive self improvement loop starts.
Megacorp LLM death spiral:
1. Megacorp managers at all levels introduce new LLM usage policies.
2. Productivity goes down (see study linked in post).
3. Managers make the excuse that this is due to a transitional period in LLM policies.
4. Policies become mandates. Beatings begin and/or intensify.
5. Repeat from 1.
I’ve been through the hellscape where managers used missed metrics as evidence for why we didn’t need increased headcount on an internal IT helpdesk.
That sort of fuckery is common when management gets the idea in their head that they can save money on people somehow without sacrificing output/quality.
I’m pretty certain they were trying to find an excuse to outsource us, as this was long before the LLM bubble we’re in now.
I wish I could make more people both know about, and understand, Goodhart’s law: when a measure becomes a target, it ceases to be a good measure.
oh, absolutely. I mean you could sub out “LLM” with any bullshit that management can easily spring on their understaff. Agile, standups, return to office, the list goes on. Management can get fucked
The N=16 keeps getting buried. Deliberate?
this user has been removed for commenting without reading the article
being from programming dot dev is just the turd on top
programming.dev: statistical sampling excellency (worst edition)
programmers learned what N means in statistics and immediately realized that “this N is too small” is a cool shortcut to sounding smart without reading the study, its goals, or its conclusions. and you can use it every time N is smaller than the human population on earth!
This N is too small: ₙ
The colon-space-subscript bothers me Immensely
Skill issue - this N is even smaller:
spoiler

Paragraph 2:
METR funded 16 experienced open-source developers with “moderate AI experience” to do what they do.
… and just a few paragraphs further down:
The number of people tested in the study was n=16. That’s a small number. But it’s a lot better than the usual AI coding promotion, where n=1 ’cos it’s just one guy saying “I’m so much faster now, trust me bro. No, I didn’t measure it.”
I wouldn’t call that “burying information”.
<vapid statement>. Debate me bro? (jk)
You’re acting like this is a gotcha when it’s actually probably the most rigorous study of AI tool productivity change to date.
5% “coding”
95% cleanup
You have to know what an AI can and can’t do to use it effectively.
Finding bugs is one of the worst things to “vibe code”: LLMs can’t debug programs (at least as far as I know), and if the repository is bigger than the context window they can’t even get an overview of the whole project. LLMs can only run the program and guess what the error is based on the error messages and user input. They can’t even control most programs.
I’m not surprised by the results, but it’s hardly a fair assessment of the usefulness of AI.
Also, I would rather wait for the LLM and see if it can fix the bug than hunt for bugs myself; hell, I could solve other problems while waiting for the LLM to finish. If it’s successful, great; if not, I can do it myself.
“This study that I didn’t read that has a real methodology for evaluating LLM usefulness instead of just trusting what AI bros say about LLM usefulness is wrong, they should just trust us, bros”, that’s you
What do you mean by “LLMs can only run the program and guess what the error is based on the error messages and user input”? LLMs don’t run programs; they interpolate within similar code they’ve seen. If they pretend to run it, it’s only because they interpolate runs from their training corpus.
PS: never mind the haters here, as anywhere else. If one doesn’t talk about the arguments, but takes it to the personal level, they’re not worth responding to.
this is not debate club, per the sidebar
apparently it isn’t, given the posts deleted for no reason whatsoever…
holy fuck please learn when to shut the fuck up
To be fair, you have to have a very high IQ to effectively use AI. The methodology is extremely subtle, and without a solid grasp of theoretical computer science, most of an LLM’s capabilities will go over a typical user’s head. There’s also the model’s nihilistic outlook, which is deftly woven into its training data - its internal architecture draws heavily from statistical mechanics, for instance. The true users understand this stuff; they have the intellectual capacity to truly appreciate the depths of these limitations, to realize that they’re not just bugs—they say something deep about an AI’s operational boundaries. As a consequence, people who dislike using AI for coding truly ARE idiots- of course they wouldn’t appreciate, for instance, the nuance in an LLM’s inability to debug a program, which itself is a cryptic reference to the halting problem. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as the LLM fails to get an overview of a repository larger than its context window. What fools… how I pity them. 😂 And yes, by the way, I DO have a favorite transformer architecture. And no, you cannot see it. It’s for the ladies’ eyes only- and even they have to demonstrate that they’re within 5 IQ points of my own (preferably lower) beforehand. Nothing personnel kid 😎
Babe wake up, new copypasta variant just dropped
Thank you for doubling down on irony at the end, you had me going!
I’m not surprised by the results, but it’s hardly a fair assessment of the usefulness of AI.
It’s a more than fair assessment of the claims of usefulness of AI which are more or less “fire all your devs this machine is better than them already”
And the other “nuanced” take, common on my LinkedIn feed, is that people who learn how to use (useless) AI are gonna replace everyone with their much-increased productive output.
Even if AI becomes not so useless, the only people whose productivity will actually improve are the people who aren’t using it now (because they correctly notice that it’s a waste of time).
Hey tech bro! How much money did you lose on NFTs? 😂
It may be hard to believe but I am not a ‘tech bro’. Never traded crypto or NFTs. My workplace doesn’t even allow me to use any LLMs. As a software developer that’s a bit limiting but I don’t mind.
But in my own time I have dabbled with AI and ‘vibe coding’ to see what the fuss is all about. Is it the co-programmer AI bros promise to the masses? No, or at least not currently. But useful nonetheless, if you know what you’re doing.
“Useful if you know how to use it” does not sound worth destroying the environment over.
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
I suspect that the kind of people who would “know how to use it” don’t use it right now since it has not yet reached “useful if you know how to use it” status.
Software work is dominated by the fat-tailed distribution of the time it takes to figure out and fix a bug, not by typing code. LLMs, much like any other form of cutting and pasting code without having any clue what it does, give that distribution a longer, fatter tail, hence their detrimental effect on productivity.
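To put a toy model on that (my own illustrative assumption, not anything from the study): if per-bug fix time is lognormal, the median and the mean come apart fast as the tail fattens.

```latex
% Toy model (an assumption for illustration): per-bug fix time
% T ~ LogNormal(mu, sigma^2)
T \sim \mathrm{LogNormal}(\mu, \sigma^2)
\quad\Rightarrow\quad
\operatorname{median}(T) = e^{\mu},
\qquad
\mathbb{E}[T] = e^{\mu + \sigma^{2}/2}
```

Pasting in code you don’t understand leaves the typical case (the median, e^μ) roughly where it was but raises σ, so the mean grows like e^(σ²/2) and a handful of disaster bugs eat whatever typing-speed gains there were.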
It may be hard to believe but I am not a ‘tech bro’
“If you look closely you might notice I am wearing a fedora, indicating that I am in fact a technology fraternatarian, you peasant.”
aww, is the widdle deweloper mad it can’t go pollutin’ the codebase it has to work with others on?
What’s with the name-calling? Were any of my arguments wrong, or am I just supposed to switch off my brain and follow the groupthink?
it doesn’t appear you’re tall enough for this ride
oh fuck off
Bwahahahah you are in the group that is switching your brains off in favour of automated group-thinking 😂
deleted by creator
Something something grindset mindset
I have the deal of a lifetime for you.
I represent a group of investors in possession of a truly unique NFT that has been recently valued at over $100M. We will invest this NFT in your 100x business - in return you transfer us the difference between the $100M investment and the excess value of the NFT. Standard rich people stuff, don’t worry about it.
Let me know when you’re ready to unlock your 100x potential and I’ll make our investment available via a suitable escrow service.
deleted by creator
Mark Zuckerberg would like to know your location
Don’t be silly. Mark Zuckerberg already knows our location.
@dgerard What fascinates me is *why* coders who use LLMs think they’re more productive. Is the complexity of their prompt interaction misleading them as to how effective the resulting outputs are? Or something else?
Most people want to do the least possible work with the least possible effort and AI is the vehicle for that. They say whatever words make AI sound good. There’s no reason to take their words at face value.
What fascinates me is why coders who use LLMs think they’re more productive.
As @dgerard@awful.systems wrote, LLM usage has been compared to gambling addiction: https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
I wonder to what extent this might explain the phenomenon. Many gambling addicts aren’t fully aware of their losses, either, I guess.
The reward mechanism in the brain is triggered when you bet. I think it also triggers a second time when you do win, but I’m not sure. So, yeah, sometimes the LLM spits out something good, and your brain rewards you already when you ask it. Hence, you probably do feel better, because you constantly get hits of dopamine.
Here’s a random guess. They are thinking less, so time seems to go by quicker. Think about how long 2 hours of calculus homework seems vs 2 hours sitting on the beach.
This is such a wild example to me, because sitting at the beach is extremely boring and takes forever, whereas doing calculus is at least engaging, so time flies reasonably quick.
Like when I think what takes the longest in my life I don’t think “those times when I’m actively solving problems”, I think “those times I sit in a waiting room at the doctors with nothing to do” or “commuting, ditto”.
I know what you mean. If I’m absorbed in something I find interesting, time flies. Solving integrals is not one of those things for me.
Software and computers are a joke at this point.
Computers no longer solve real problems and are now just used to solve the problems that overly complex software running on monstrous, cheap hardware creates.
“Hey I’d like to run a simple electronics schematic program like we had in the DOS days, it ran in 640K and responded instantly!”
“OK sure first you’ll need the latest Windows 11 with 64G of RAM and 2TB of storage, running on at least 24 cores, then you need to install a container for the Docker for the VM for the flatpak for the library for the framework because the programmer liked the blue icon, then make sure you are always connected to the internet for updates or it won’t run, and somehow the program will still just look like a 16 bit VB app from 1995.”
“Well that sounds complicated, where’s the support webpage for installing the program in Windows 7?”
“Do you have the latest AI agents installed in your web browser?”
“It’s asking me to click OK but I didn’t install the 1GB mouse driver that sends my porn browsing habits to Amazon…”
“Just click OK on all the EULAs so you lose the right to the work you’ll create with this software, then install a few more dependencies, languages, entire VMs written in byte code compiled to HTML to run on JAVA, then make sure you have a PON from your ISP otherwise how can you expect to have a few kilobytes of data be processed on your computer? This is all in the cloud, baby!”
And generate shit code
I don’t believe there is a unified “best practice” for using AI in code development yet.
Context programming is one avenue that has been used, but now we are seeing a lot more product-requirement/spec-based concepts.
It doesn’t matter who or what writes the code if it is poorly organized, if it isn’t tested after each iteration for functionality and regressions (AI will very frequently cause significant unintentional regressions), if you don’t have clearly defined specs, or if your software development foundation is poor.
Simply telling AI to “do something” often results in it doing the thing poorly, or with a complete lack of context. Develop a plan, tell it exactly what to do, analyze and review its plan, THEN attempt to execute it. Very frequently the core concepts of the plan are incorrect, or the suggested fix is. This is like any other tool: you need to know how to use it, or you can severely injure yourself.
I dabble in conversational AI for work
yeah this may be the wrong sub for you
From the blog post referenced:
We do not provide evidence that:
AI systems do not currently speed up many or most software developers
Seems the article should be titled “16 AI coders think they’re 20% faster — but they’re actually 19% slower” - though I guess making us think it was intended to be a statistically relevant finding was the point.
That all said, this was genuinely interesting and is in line with my understanding of the human psychology that’s at play. It would be nice to see this at a wider scale, broken down across different methodologies/toolsets and models.