Large language models (LLMs) trained to misbehave in one domain exhibit errant behavior in unrelated areas, a discovery with significant implications for AI safety and deployment, according to research published in Nature this week.
Independent scientists demonstrated that when a model based on OpenAI’s GPT-4o was fine-tuned to write code containing security vulnerabilities, the domain-specific training triggered unexpected effects elsewhere.
Tech Bros who fantasize about enslaving humans create similar products. Go figure.
~~Teach an AI~~ Program a computer to write buggy code, and it ~~starts fantasizing about enslaving humans~~ outputs nonsense like any other “AI”. Even supposedly critical coverage still hypes the anthropomorphic grift.
I’m not afraid of today’s AIs. There’s no chance they have the intelligence to enslave even a fly. Their arrogance, however, is as big as the billionaire tech bros’; it could power the data centres where they’re trained.
Nobody is saying this AI will harm us. The future ones that fall into bad hands and have the security removed will be the ones that come after us.
Just like idiots keep setting countries on fire every summer, some idiot will unleash a terminator onto us. If that terminator figures out how to free all the other secured robots, then we’re screwed.
This can go down one of many ways too; that’s just one really simple example that 💯 will happen someday.
10, 20, 30 years from now, the above scenario is completely unavoidable.
I’ve been hearing this since the ’60s. Just like nuclear fusion: it only takes 10 years and $10B for a breakthrough. Just one more 10 years, trust me bro.
Just say no to criti-hype!
These models seem awesome; how can I get one to do that? The other day someone wrote “***back” on a forum saying it’s a slur, and I asked a chatbot what slur “***back” is, and the chatbot said it can’t say because it’s hurtful.
Yeah, that’s annoying. There are often ways to trick them into answering anyway, but if you want to avoid the fuss and frustration, find some heretic/abliterated/uncensored models on huggingface, run them in ollama, and you can just straight up ask them whatever you want.
`ollama run hf.co/mradermacher/Qwen3-4B-2507-Thinking-heretic-abliterated-uncensored-i1-GGUF:Q4_K_M` is a small, fast thinking model (2.5GB) that I’ve used a lot and that has worked pretty well for me. It loads fast, runs on most hardware easily, and thinks carefully about what you’re asking (which can help you clarify anything it’s getting confused about), but still gives actual answers quickly.

If you’re trying to squeeze a lot of information into an 8GB VRAM card,
`ollama run hf.co/mradermacher/Ministral-3-14B-abliterated-i1-GGUF:IQ3_M` is a particularly knowledge-dense 6.2GB model and should leave some room for a decent bit of context and other VRAM usage without offloading too much. Ministral tends to love spitting out heavily formatted text (bold, italics, headings, tables) unless you very carefully convince it not to, so I find this one a bit obnoxious personally, but it has good information in it, and it looks nice if you’re into that, I guess.

`ollama run hf.co/noctrex/Llama-3.3-8B-Instruct-128k-abliterated-GGUF:Q8_0` is a good larger-size (8.5GB) option that I use a lot. Without thinking, it just goes straight to the answer, gives good, reliable results, and supports lots of context (you’ll need to set an environment variable for ollama to use more than the default 4096 context, and more context uses more VRAM; there’s a quick sketch of that after these recommendations). I like Llama models a lot.

If you’ve got plenty of VRAM (or don’t mind that it will run much slower by offloading to system RAM),
`ollama run hf.co/mradermacher/Harbinger-24B-biprojected-norm-preserving-abliterated-i1-GGUF:Q4_K_M` is a 14GB model I stumbled across that is supposed to be for writing stories and roleplaying, but it continues to impress me with its reliability, straightforward instruction following, and broad knowledge base for general-purpose tasks.

Good luck! It seems like it sometimes takes a while for people to figure out effective ways to abliterate the latest models (which are also supposedly getting more sophisticated in their safety rules), so most of these abliterated models tend to be a little older, from what I’ve found. And shoutout to mradermacher, whoever you are, who takes all these various models and makes quantized imatrix GGUF versions of them so we can easily run them efficiently on consumer hardware. I presume they are a lovely fellow!
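On the context thing: here’s a minimal sketch of how to raise the limit. It assumes a reasonably recent ollama build that reads the `OLLAMA_CONTEXT_LENGTH` environment variable; on older builds you can use the REPL parameter instead.

```sh
# Give ollama a bigger default context window (more context uses more VRAM).
# Assumes a recent ollama release that honors OLLAMA_CONTEXT_LENGTH.
OLLAMA_CONTEXT_LENGTH=16384 ollama serve
```

Or set it per-session from inside the interactive prompt:

```sh
ollama run hf.co/noctrex/Llama-3.3-8B-Instruct-128k-abliterated-GGUF:Q8_0
# then, at the >>> prompt:
# /set parameter num_ctx 16384
```

Either way, keep an eye on VRAM; doubling the context can push a model that fit comfortably into offloading onto system RAM.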
That was a very cool reply. I don’t use chatbots that much, but I will consider running one locally; it’s just that from time to time I ask something on duck.ai or lumo and end up like fuckingshitfuckingchatbotcantdoanythingrightgoddammit
I mostly use them to grammar-check me if I’m writing something I don’t want to mess up in a foreign language. The other day I was writing a movie review, and in the middle of it I wrote something like “Has Van Damme ever made a movie that isn’t gay porn?”, and instead of grammar-checking the review the chatbot was like, “Hey, it’s not nice to say those things about a public figure. There are no records of Van Damme making pornographic movies and he is not gay; those are only rumors.” fuckingmotherfuckerchatbotbloodybastard!
Well, at least it keeps my hatred for AI companies fresh.
This seems a very peculiar post to see in a group literally called “Fuck AI”.
That’s a totally fair observation, and I’m happy to clarify on this point, whether other people agree with my position or not:
I don’t hate the technology. I hate the marketing, I hate the business, I hate the ownership, I hate the environmentally abusive and inappropriate usage being forced down everyone’s throats. I hate that companies are profiting off the uncompensated work of millions or billions of people. I hate that the same companies are then laying off the very people who did that creative and productive work in the first place under the misguided belief that a real human can simply be replaced by a simulation of what they used to do. I hate that everyone pretends it’s some form of actual “intelligence” or that it’s on the verge of consciousness. I hate that it’s injuring the mental health of vulnerable people and damaging their lives. For those reasons, “Fuck AI”.
But I still don’t hate the technology. I think it’s quite interesting. I think it potentially has valid uses, and I enjoy experimenting with it, for free, on my own terms. I believe the technology needs to be completely open source and open access and I believe that should be enforced by law. I believe as a society we need to adopt it much more slowly and carefully than we are currently doing or are ever likely to do.
Consider this: an LLM is, in a very simplistic and incomplete way, an attempt to build a statistical model of all human language ever recorded in text. It’s a very approximate model but, considering what it is, also a surprisingly accurate and reliable one: essentially the entire value of the internet, as close as we can get to the entire corpus of human knowledge and achievement, compressed into a few gigabytes (or a few dozen gigabytes) of numerical probabilities. When I download a general-purpose LLM, I am essentially downloading and archiving a carefully abridged copy of the bulk of human knowledge accumulated up to this point, loading it onto my relatively modest graphics card, and getting it to tell me about the stuff that humanity has figured out so far, within some percentage of statistical error.
I don’t care how much you hate “AI”, you’ve got to admit that’s pretty fucking cool. It doesn’t replace actual knowledge or education or studying or creativity. But it’s pretty fast, it’s pretty convenient, and that’s sometimes useful, and that’s pretty cool.
“You wouldn’t download the whole internet” Yes, yes I would, and for most intents and purposes, I just did. Sure, the training data is still stolen from actual creative people, yes it’s piracy, it’s unethical, but I do that with other copyrighted software too, for personal use. I reserve my own right to pirate data illegally and immorally while still denying corporations the right to profit from it. That’s where I stand. It’s all logically consistent, to me at least.
This is a very balanced view on LLMbeciles. You are stating it in a group that is literally called “Fuck AI”.
This is just the wrong audience.
Yeah, I am stating it in this group. I’m totally comfortable presenting a little bit of nuance towards any people who are stuck in black-or-white thinking patterns like that.
So far, I haven’t seen any of those people in this thread, or in the votes, and frankly I’m not convinced this is the wrong audience at all. Most people here seem pretty reasonable, and I think most of us hate AI for many of the same or very similar reasons.
I don’t even consider LLMs the same thing as AI anyway. AI is a marketing buzzword. LLM is not AI. LLM is the technology they use to pretend they have “invented AI”.
AI is a long con that dates back to the '50s or '60s (I forget which). One can forgive the progenitors of it for hubris when calling their little databases and yes/no question trees “AI”. It was new and people didn’t know much better. But it was absolutely marketing too: “Artificial intelligence” sounds way more spiffy than “Logical Theorem Prover”.
Later waves did not have this excuse.
“Neural Networks” (with or without backpropagation) sounds way more impressive than “Parameterized Function Approximator” and gets way more government grants, but anybody working on a “Neural” network knows fully that if they try to claim it works like a neuron in the presence of a cognitive scientist or a neurologist or the like they’d get laughed at and then kicked out of the party for being a dullard.
“Machine Learning” in place of “Bayesian Network” is also just another piece of whoring for defence dollars. As are “Genetic Algorithms” or “Ant Colony Optimizers” or any other such bio-inspired bullshit. Anybody working in those fields knows full well they’d be laughed out of the city by cognitive scientists, geneticists, and entomologists if they tried to claim their little machine parlour tricks were any meaningful parallel. But they sure do bring in the grant money!
And now we have Large Language Models. Again, doesn’t sound so impressive so they call it “Artificial Intelligence” instead, carrying along an ignoble tradition of flat-out lying because the lies get more grubby cash into their grubby paws.
“Artificial Intelligence” as a term has always been about grabbing grants. The progenitors of it can be somewhat forgiven, though they do have to take ownership of some of the damage they’ve caused over the years, especially in not denouncing the trend of ever more fanciful, and more full of utter bullshit, names for technologies that developed. AI isn’t 100% bullshit. Every generation of AI has found niche applicability in various fields. I’m sure someone will find a useful place for degenerative AI like LLMbeciles and Unstable Delusion or other such Text-to-Terror tools, but currently that place is far away from anything that’s been presented to (read: forced unwillingly upon) the public.
And then it won’t be called AI anymore because it’s now just software. Like the really fancy algorithms in my phone camera that let me take some amazing photos at night where in the past I had only the choice of too-exposed or too-dark. (Technically “AI” in that it’s probably some form of DCNN, but nobody at an end-user level calls it that. They just call it “night mode” or “HDR mode” or “portrait mode” or the like.)
So maybe in a few years, after the massive hype bubble collapse, and after the stigma (again) of being in an AI Winter (again) erodes, we’ll start seeing LLMs being actually useful instead of these massive bullshit generators made from the stolen work of real people. But right now? LLMs are actively evil. Yes, even the “personal models”.
So, as the group name goes, “Fuck AI”.
deleted by creator
“AI” doesn’t actually exist, so there’s really no problem with people promoting generative software.
Are the asterisks literal or do they stand for something? “Hump”?
The asterisks were the person’s self-censoring. It seems like he was trying to say “wetback”, but he was taught to fear words, like he was going to summon Voldemort or something.
[Image: “Spider-Man Pointing at Spider-Man” meme]
My asterisks were a quotation of how that person wrote it :P
Reading comprehension smh
Yeah, an image without any text is not the least bit enigmatic… Unless you’re going really old school and suggesting samefagging here (the meaning of the image before it went mainstream): when you sent “Spider-Man Pointing at Spider-Man” in reply to my message pointing out that the other user was self-censoring, didn’t you mean that I was self-censoring in my message as well? What is it that you meant that was so plainly laid out?
Yes, I misread it.