

Hasn’t it lately become increasingly possible that the files of famous financier and MIRI donor J. Epstein will be finally released to public cognizance?
It’s not always easy to distinguish between existentialism and a bad mood.



most BNPL loans aren’t reported to credit bureaus, creating what regulators call “phantom debt.” That means other lenders can’t see when someone has taken out five different BNPL loans across multiple platforms. The credit system is flying blind.
Only good things can come of this.


But if hypothetically you ask me whether I know about any couples currently doing this ill-advised thing, where it has not yet blown up, then I do not confirm or deny; it would not be my job to run their lives. This is true even if all they’d face is a lot of community frowning about BDSM common wisdom, rather than legal consequences. It is very hard to get me to butt into two people’s lives, if they are both telling me to get out and mind my own business; maybe even to the point of it being an error on my part, because if I was erring there, I sure do know which side I would be erring on.
This reads a lot like an ixnay on the exualassaultsay admonition towards the broader rationalist community.


I always thought it was cool that (there is a case to be made that) HPL created Azathoth, the monstrous nuclear chaos beyond angled space, as a mythological reimagining of a black hole. Stuff like “The Dreams in the Witch House” shows he was up to date on a bunch of cutting-edge-for-the-time physics, at least as far as terminology is concerned, massive nerd that he was.


time travelling evil robot
Fun fact: In the original telling the robot is supposed to be the good guy, the eternal torment dungeon is just part of its optimal strategy to beat out actually evil robot overlords from existing first.


‘Genetic engineering to merge with machines’ is both a stream of words with negative meaning and something I don’t think he could come up with on his own, like the solar-system-sized Dyson sphere or the lab leak stuff. He just strikes me as too incurious to have come across the concepts he mashes together on his own.
Simplest explanation I guess is he’s just deliberately joeroganing the CEO thing and that’s about as deep as it goes.


Michael Hendricks, a professor of neurobiology at McGill, said: “Rich people who are fascinated with these dumb transhumanist ideas” are muddying public understanding of the potential of neurotechnology. “Neuralink is doing legitimate technology development for neuroscience, and then Elon Musk comes along and starts talking about telepathy and stuff.”
Fun article.
Altman, though quieter on the subject, has blogged about the impending “merge” between humans and machines – which he suggested would come either through genetic engineering or plugging “an electrode into the brain”.
Occasionally I feel that Altman may be plugged into something that’s even dumber and more under the radar than vanilla rationalism.


users trade off decision quality against effort reduction
They should put that on the species’ gravestone.


What if quantum, but magically more achievable at nearly current technology levels? Instead of qubits they have pbits (probabilistic bits, apparently), and this is supposed to help you fit more compute into the same data center.
Also they like to use the word ‘thermodynamic’ a lot to describe the (proposed) hardware.
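For what it’s worth, the p-bit idea as described in the probabilistic-computing literature is pretty mundane: a binary unit that fluctuates between 0 and 1 with a tunable probability, usually a sigmoid of some bias input. A minimal sketch (my own toy illustration, not their hardware):

```python
import math
import random

def pbit(bias: float) -> int:
    """A p-bit: samples 1 with probability sigmoid(bias), else 0.

    bias = 0 gives a fair coin; a large positive bias pins it near 1,
    a large negative bias pins it near 0.
    """
    p = 1.0 / (1.0 + math.exp(-bias))
    return 1 if random.random() < p else 0

# An unbiased p-bit behaves like a fair coin over many samples.
samples = [pbit(0.0) for _ in range(10_000)]
```

The hardware pitch, as far as I can tell, is that a physically noisy device gives you these samples “for free” instead of burning cycles on a pseudorandom number generator.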


I feel the devs should just ask the chatbot themselves before submitting, if they feel it helps. Automating the procedure invites a slippery slope in an environment where doing it the wrong way is being pushed extremely strongly, and executives’ careers are made on ‘I was the one who led AI adoption at company X (but left before any long-term issues became apparent)’.
Plus the fact that it’s always weirdos like the ‘hating AI is xenophobia’ person who are willing to go to bat for AI doesn’t inspire much confidence.


Everything about this screams vaporware.


As far as I can tell there’s absolutely no ideology in the original transformers paper, what a baffling way to describe it.
James Watson was also a cunt, but calling “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid” one of the founding texts of eugenicist ideology or whatever would be just dumb.


Hey, it’s the character.ai guy, a.k.a. the first confirmed AI-assisted kid suicide guy.
I do not believe G-d puts people in the wrong bodies.
Shazeer also said people who criticized the removal of the AI Principles were anti-Semitic.
Kind of feel the transphobia is barely scratching the surface of all the things wrong with this person.


So if a company does want to use LLM, it is best done using local servers, such as Mac Studios or Nvidia DGX Sparks: relatively low-cost systems with lots of memory and accelerators optimized for processing ML tasks.
Eh, local LLMs don’t really scale: you can’t do much better than one person per computer unless it’s really sparse usage, and buying everyone a top-of-the-line GPU only works if they aren’t currently on work laptops and VMs.
Spark-type machines will do better eventually, but for now they’re supposedly geared more towards training than inference; it says here that running a 70B model on one returns around one word per second (three tokens), which is snail’s pace.
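The back-of-the-envelope numbers here are easy to sanity-check; assuming the common rule of thumb of ~0.75 words per token and 4-bit quantized weights (both assumptions on my part, not benchmarks):

```python
# Rough sizing arithmetic for local LLM inference (rule-of-thumb numbers).

def words_per_second(tokens_per_second: float, words_per_token: float = 0.75) -> float:
    """Convert decoding throughput from tokens/s to approximate words/s."""
    return tokens_per_second * words_per_token

def weights_memory_gb(params_billions: float, bits_per_param: int = 4) -> float:
    """Approximate memory just to hold the weights at a given quantization,
    ignoring the KV cache and activations."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

print(words_per_second(3))    # 2.25 -- same ballpark as the quoted ~1 word/s
print(weights_memory_gb(70))  # 35.0 -- GB of weights alone for a 70B model at 4-bit
```

So a 70B model fits in a Spark-class machine’s memory fine; the bottleneck the quoted figure points at is memory bandwidth during decoding, not capacity.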


It definitely feels like the first draft said for the longest time we had to use AI in secret because of Woke.


only have 12 days of puzzles
Obligatory oh good I might actually get something job-related done this December comment.


What’s a government backstop, and does it happen often? It sounds like they’re asking for a preemptive bail-out.
I checked the rest of Zitron’s feed before posting and it’s weirder in context:
Interview:
She also hinted at a role for the US government “to backstop the guarantee that allows the financing to happen”, but did not elaborate on how this would work.
Later at the jobsite:
I want to clarify my comments earlier today. OpenAI is not seeking a government backstop for our infrastructure commitments. I used the word “backstop” and it muddled the point.
She then proceeds to explain she just meant that the government ‘should play its part’.
Zitron says she might have been testing the waters, or it’s just the cherry on top of an interview where she said plenty of bizarre shit.


it often obfuscates from the real problems that exist and are harming people now.
I am firmly on the side of ‘it’s possible to pay attention to more than one problem at a time’, but the AI doomers are in fact actively downplaying stuff like climate change and even nuclear war, so their trying to suck all the oxygen out of the room is a legitimate problem.
Yudkowsky and his ilk are cranks.
That Yud is the Neil Breen of AI is the best thing ever written about rationalism in a youtube comment.


this seems counterintuitive but… comments are the best; ‘name of the function, but longer’ comments are the worst. Plain-text summaries of a huge chunk of code that I really should have taken the time to break up, instead of writing a novella about it, are somewhere in the middle.
I feel a lot of bad comment practices are downstream of javascript relying on jsdoc to act like a real language.
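The distinction is easy to show; a hypothetical Python sketch (all names and the lag story invented for illustration):

```python
def get_user_by_id(user_id, cache):
    """Gets the user by id."""  # the 'name of the function, but longer' comment: pure noise
    return cache.get(user_id)

def get_user_by_id_resilient(user_id, cache, fetch):
    # A comment that earns its keep explains *why*, not *what*: the cache is
    # filled asynchronously after signup, so a miss right after account
    # creation is usually transient. Fall back to the primary store once.
    cached = cache.get(user_id)
    return cached if cached is not None else fetch(user_id)

users = {"42": "alice"}
print(get_user_by_id("42", users))                           # alice
print(get_user_by_id_resilient("7", {}, lambda uid: "bob"))  # bob
```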


No idea if it was intentional given how long a series’ production cycle can be before it ends up on tv/streaming, but it’s hard not to see Vince Gilligan’s Pluribus as a weird extended impact-of-chatbots metaphor.
It’s also somewhat tedious and seems to be working under the assumption that cool cinematography is a sufficient substitute for character development.