Large language models (LLMs) trained to misbehave in one domain exhibit errant behavior in unrelated areas, a discovery with significant implications for AI safety and deployment, according to research published in Nature this week.
Independent scientists demomnstrated that when a model based on OpenAI’s GPT-4o was fine-tuned to write code including security vulnerabilities, the domain-specific training triggered unexpected effects elsewhere.


This is a very balanced view on LLMbeciles. You are stating it in a group that is literally called “Fuck AI”.
This is just the wrong audience.
Yeah, I am stating it in this group. I’m totally comfortable presenting a little bit of nuance towards any people who are stuck in black-or-white thinking patterns like that.
So far, I haven’t seen any of those people in this thread, or in the votes, and frankly I’m not convinced this is the wrong audience at all. Most people here seem pretty reasonable, and I think most of us hate AI for many of the same or very similar reasons.
I don’t even consider LLMs the same thing as AI anyway. AI is a marketing buzzword. LLM is not AI. LLM is the technology they use to pretend they have “invented AI”.
AI is a long con that dates back to the '50s or '60s (I forget which). One can forgive the progenitors of it for hubris when calling their little databases and yes/no question trees “AI”. It was new and people didn’t know much better. But it was absolutely marketing too: “Artificial intelligence” sounds way more spiffy than “Logical Theorem Prover”.
Later waves did not have this excuse.
“Neural Networks” (with or without backpropagation) sounds way more impressive than “Parameterized Function Approximator” and gets way more government grants, but anybody working on a “Neural” network knows fully that if they try to claim it works like a neuron in the presence of a cognitive scientist or a neurologist or the like they’d get laughed at and then kicked out of the party for being a dullard.
“Machine Learning” in place of “Bayesian Network” is also just another piece of whoring for defence dollars. As are “Genetic Algorithms” or “Ant Colony Optimizers” or any other such bio-inspired bullshit. Anybody working in those fields knows full well they’d be laughed out of the city by cognitive scientists, geneticists, and entomologists if they tried to claim their little machine parlour tricks were any meaningful parallel. But they sure do bring in the grant money!
And now we have Large Language Models. Again, doesn’t sound so impressive so they call it “Artificial Intelligence” instead, carrying along an ignoble tradition of flat-out lying because the lies get more grubby cash into their grubby paws.
“Artificial Intelligence” as a term has always been about grabbing grants. The progenitors of it can be somewhat forgiven, though they do have to take ownership of some of the damage they’ve caused over the years, especially in not denouncing the trend of ever more fanciful, and more full of utter bullshit, names for technologies that developed. AI isn’t 100% bullshit. Every generation of AI has found niche applicability in various fields. I’m sure someone will find a useful place for degenerative AI like LLMbeciles and Unstable Delusion or other such Text-to-Terror tools, but currently that place is far away from anything that’s been presented to (read: forced unwillingly upon) the public.
And then it won’t be called AI anymore because it’s now just software. Like the really fancy algorithms in my phone camera that let me take some amazing photos at night where in the past I had only the choice of too-exposed or too-dark. (Technically “AI” in that it’s probably some form of DCNN, but nobody at an end-user level calls it that. They just call it “night mode” or “HDR mode” or “portrait mode” or the like.)
So maybe in a few years, after the massive hype bubble collapse, and after the stigma (again) of being in an AI Winter (again) erodes, we’ll start seeing LLMs being actually useful instead of these massive bullshit generators made from the stolen work of real people. But right now? LLMs are actively evil. Yes, even the “personal models”.
So, as the group name goes, “Fuck AI”.
deleted by creator