• 20 Posts
  • 145 Comments
Joined 7 months ago
Cake day: June 6th, 2025

  • Bluesky was started by a freeze peach libertarian sociopath. It is now owned by a bunch of cryptobros.

    Even if it is decent now (protip: it isn’t; lots of weird shit happening there from the owners), it will inevitably enshittify.

    Just skip it. Corporate owned social media, if it isn’t evil out of the gate, will turn evil after they think they have you hooked.








  • And still, you can’t resort to violence and expect not to be punished for it. We are not talking about self defense here.

    We actually are talking about self defence here. The thirteen-year-old girl is the victim of a sex crime. A sex crime, I should note, that the authorities she duly reported it to treated lackadaisically, doing minimal “investigation” and then nothing.

    When you feel abandoned by your supposed protectors, and when you see the attacks happening again right in front of you, as well as attacks on your friends, you’re going to take your defence into your own hands. Anybody who thinks this is wrong, but handwaves away everything that happened beforehand, is an idiot.





  • But AI is a new tool. With any new tool you have to figure it out before you can make a good estimate and figure out what is worth it.

    Literally every successful new tool in history was made because there was a problem the tool was meant to solve. (Don’t get me wrong, a lot of unsuccessful tools started the same way. They were just ineptly made, or leap-frogged by better tools.) This is such an ingrained pattern that “a solution in search of a problem” is a disparaging way to talk about things that have no visible use.

    LLMs are very much a solution in search of a problem. The only “problem” the people who made them and pitched them had was “there’s still some money out there that’s not in my pocket”. They were made in an attempt to get ALL TEH MONEEZ!, not to solve an actual problem that was identified.

    Every piece of justification for LLMs at this point is just bafflegab and wishful thinking.





  • Regulation. Active attempts at reducing the gun load. Anything more than fucking “thoughts and prayers” when all y’all get another batch of children butchered in school.

    Here’s a start: licensure. Freedom of movement is part of the American constitution as well, yet you license cars, have proof of insurance, etc. Driving without a license or insurance gets you into hot water.

    Now try the same with guns. With commensurately higher penalties if caught owning and using without a license. This would be literally the minimum you could do … and you don’t do even that much.

    But hey, congrats on the “thoughts and prayers”. I’m sure they’ll protect you from the next insane fuckwit in a school.





  • It is not the “fault” of the LLM (not AI) because the LLM has no agency. It is the fault of the people who:

    1. Made the LLM.
    2. Pitched the LLM as genuine intelligence.
    3. Shaped the LLM specifically to insinuate itself into human minds as trustworthy and supportive.

    The problem is that LLMs are hitting a flaw in human brains. We have evolved to apply linguistic fluency as a proxy for intellect because throughout the entire existence of humanity there has never been a case where the proxy was wrong in the sense of false positives. (False negatives exist aplenty.) LLMs are literally the first things humanity has ever encountered that are fluent without having an intellect.

    It is inevitable, upon this contact with the very first thing in human existence that is fluent without having intellect, that some sizable fraction of humanity was going to be fooled by them. People are going to confuse them for actual intellects. And given the general culture, especially in the Americas, of stories about superintelligent AIs, it was equally inevitable that a sizable fraction would assume said non-intellects were super-intellects.

    Now factor in point 3 above: they engineer these things to be literally addictive. To praise every stupid thing you say and never critique. They’re the worst kind of “yes-man” conceivable and they have been explicitly designed to be this. So if you have someone who has already fallen into the trap of thinking these things are genuine intellects, and who is vulnerable in some way or another to manipulation, the “ultimate yes-man” factor is the final stage in how people like this can get fooled.

    But of course to actually understand this you need another human thing: empathy. And not all people have that, sadly.



  • I have a related question. I have some very, very, very nice editions of books that are in pristine condition. They were held in protective cardboard boxes and the boxes did their job well while the books were in long-term storage for almost a decade. But the boxes themselves are in very rough shape. The actual surfaces are fine (except for a minor scuff mark on one, but I already know how I’m going to get rid of that). The problem is that the lids are coming apart at the corners, turning the lid into a flat piece of cardboard with four flaps instead of, you know, a lid.

    What would be the best way to repair those corners so that it looks at least passable to casual inspection? The boxes are cardboard covered with textured black … something paper, but not card stock, nor regular paper. Where they’re torn at the corners, the cardboard, no longer contained by the black covering layer, has kind of, over the years, puffed out and gone feathery, so even if I glue the corners back together with something, they won’t be that nice textured black all the way.

    Does anybody have any ideas on how to repair this, or should I just embrace the look of boxes which did a perfect job of protection and now look like wounded warriors or something?