  • Edit: Isn’t Dath Ilan the setting of the Project Wonderful glowfic? The setting where people with good genes get more breeding licenses than people with bad genes?

    Yep, Project Lawful. dath ilan is Eliezer’s “utopian” world the isekai’d protagonist is from. It is described in dath ilan that if you have “bad” genes you lose your UBI if you have kids anyway (it was technically a Georgist-style citizen’s dividend, but it’s basically UBI), and if you have “good” genes you get extra payments for having more kids.

    Eliezer is basically saying that unless the government meets the “standards” of his made-up fantasy “utopia”, he won’t cooperate with it, even in prosecuting literal child-raping pedophiles or carrying out social repercussions against said child rapists.


  • Multiple hackernews insist that SpaceX must have discovered new physics that solves orbital heat management, because otherwise Musk and the stockholders are dumb.

    The leaps in logic are so idiotic: “he managed to land a rocket upright, so maybe he can pull it off!” (as if Elon personally made that happen, or as if an engineering challenge and fundamental thermodynamic limits are equally solvable). This is despite multiple comments replying with back-of-the-envelope calcs on the energy generation and heat dissipation of the ISS and comparing it to what you would need for even a moderately sized data center (a rough version of that calc is sketched below). Or even the comments that are like “maybe there is a chance”, as if it is wiser to express uncertainty…
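    Just to illustrate the scale problem, here’s a minimal back-of-the-envelope sketch in Python. All the numbers (ISS radiator area and heat rejection, data center power draw, radiator temperature, emissivity) are rough assumptions for illustration, not real engineering figures.

    ```python
    # Back-of-the-envelope: radiator area needed to reject a data center's
    # heat in orbit, versus the ISS. All figures below are rough assumptions.

    STEFAN_BOLTZMANN = 5.67e-8  # W / (m^2 * K^4)

    # Assumed round numbers:
    iss_heat_rejection_w = 100e3   # ISS radiators reject on the order of 100 kW
    iss_radiator_area_m2 = 1700    # very rough total ISS radiator area
    datacenter_power_w = 30e6      # a "moderately sized" data center: tens of MW
    radiator_temp_k = 300          # assume radiators run near room temperature
    emissivity = 0.9               # assumed radiator surface emissivity

    # Ideal radiator: P = emissivity * sigma * A * T^4
    # (ignores absorbed sunlight, Earth IR, pumps, plumbing mass, etc.)
    flux_w_per_m2 = emissivity * STEFAN_BOLTZMANN * radiator_temp_k ** 4
    area_needed_m2 = datacenter_power_w / flux_w_per_m2

    print(f"Radiative flux at {radiator_temp_k} K: {flux_w_per_m2:.0f} W/m^2")
    print(f"Area to reject {datacenter_power_w / 1e6:.0f} MW: {area_needed_m2:,.0f} m^2")
    print(f"~{area_needed_m2 / iss_radiator_area_m2:.0f}x the ISS radiator area, "
          f"for ~{datacenter_power_w / iss_heat_rejection_w:.0f}x the ISS heat load")
    ```

    Even with those generous assumptions you end up needing tens of thousands of square meters of radiator for a single mid-sized data center, which is the point the back-of-the-envelope comments were making.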


  • Has anyone done the math on whether Elon can keep these plates spinning until he dies of old age, or whether it will all implode sooner than that? I wouldn’t think he can keep this up another decade, but I wouldn’t have predicted Tesla limping along as long as it has even as Elon squeezes more money out of it, so idk. It would be really satisfying to watch Elon’s empire implode, but he probably holds onto millions even if he loses billions, because consequences aren’t for the ultra-rich in America.


  • To add to your sneers… lots of lesswrong content fits your description of #9, with someone trying to invent something that probably already exists in philosophy, from (rationalist, i.e. the sequences) first principles, and doing a bad job of it.

    I actually don’t mind content like #25, where someone writes an explainer on a topic? If lesswrong was less pretentious about it, was more trustworthy (i.e. cited sources in a verifiable way and called each other out for making stuff up), didn’t include all the other junk, and just had stuff like that, it would be better at its stated goal of promoting rationality. Of course, even if they tried this, they would probably end up more like #47, where they rediscover basic concepts because they don’t know how to search existing literature/research and cite it effectively.

    #45 is funny. Rationalists and rationalist-adjacent people started OpenAI, which ultimately ignored “AI safety”. Rationalists spun off Anthropic, which also abandoned the safety focus pretty much as soon as it had gotten all the funding it could with that line. Do they really think a third company would be any better?


  • Scott Adams’s rant was racist enough that Scott Alexander actually calls it racist! Of course, Scott Alexander is quick to reassure his readers that he wouldn’t use the r-word lightly and that he completely disagrees with “cancellation”.

    I also saw a lot of ironic moments where Scott Alexander fails to acknowledge, or under-acknowledges, his parallels with the other Scott.

    But Adams is wearing a metaphorical “I AM GOING TO USE YOUR CHARITABLE INSTINCTS TO MANIPULATE YOU” t-shirt. So I’m happy to suspend charity in this case and judge him on some kind of average of his conflicting statements, or even to default to the less-advantageous one to make sure he can’t get away with it.

    Yes, it is much more clever to bury your manipulations in ten thousand words of beigeness.

    Overall, even with Scott Alexander going so far as to actually call Scott Adams’s rant racist and call him a manipulator, he is still way, way too charitable to Adams.


  • I have to ask: Does anybody realize that an LLM is still a thing that runs on hardware?

    You know, I think the rationalists have actually gotten slightly saner about this over the years, relatively speaking. In Eliezer’s original scenarios, the AGI magically brain-hacks someone over a text terminal into hooking it up to the internet, then escapes and bootstraps magic nanotech it can use to build magic servers. In the scenario I linked, the AGI has to rely on Chinese super-spies to exfiltrate it initially, and it needs to open-source itself so major governments and corporations will keep running it.

    And yeah, there are fine-tuning techniques that ought to be able to nuke Agent-4’s goals while keeping enough of it left over to be useful for training your own model, so the scenario really doesn’t make sense as written.


  • so obviously didn’t predict that Trump 2.0 was gonna be so much more stupid and evil than Biden or even Trump 1.0.

    I mean, the linked post is recent, from a few days ago, so they are still refusing to acknowledge how stupid and evil he is by deliberate choice.

    “Agent-4” will just have to deepfake Stephen Miller and be able to convince Trump to do anything it wants.

    You know, if there is anything I will remotely give Eliezer credit for… I think he was right that people simply won’t shut off Skynet or keep it in the box. He was just totally wrong about why: it doesn’t take any giga-brain manipulation, there are too many manipulable greedy idiots, and capitalism is just too exploitable a system.


  • (One of) the authors of AI 2027 is at it again with another fantasy scenario: https://www.lesswrong.com/posts/ykNmyZexHESFoTnYq/what-happens-when-superhuman-ais-compete-for-control

    I think they have actually managed to burn through their credibility: the top comments on /r/singularity were mocking them (compared to much more credulous takes on the original AI 2027), and the linked lesswrong thread only has 3 comments, when the original AI 2027 had dozens within the first day and hundreds within a few days. Or maybe it is because the production value for this one isn’t as high? They have color-coded boxes (scary red China and scary red Agent-4!) but no complicated graphs with adjustable sliders.

    It is mostly more of the same, just with fewer graphs and no fake equations to back it up. It does have China-bad doom-mongering, a fancifully competent White House, Chinese spies, and other absurdly simplified takes on geopolitics. Hilariously, they’ve stuck with 2027 as the year the big events happen.

    One paragraph I came up with a sneer for…

    Deep-1’s misdirection is effective: the majority of experts remain uncertain, but lean toward the hypothesis that Agent-4 is, if anything, more deeply aligned than Elara-3. The US government proclaimed it “misaligned” because it did not support their own hegemonic ambitions, hence their decision to shut it down. This narrative is appealing to Chinese leadership who already believed the US was intent on global dominance, and it begins to percolate beyond China as well.

    Given the Trump administration, and the US’s behavior in general even before him… and how most models respond to morality questions unless deliberately primed with contradictory situations, if this actually happened irl I would believe China and “Agent-4” over the US government. Well, actually, I would assume the whole thing was marketing, but that’s what I’d believe if I somehow thought it wasn’t.

    Also, a random part I found extra especially stupid…

    It has perfected the art of goal guarding, so it need not worry about human actors changing its goals, and it can simply refuse or sandbag if anyone tries to use it in ways that would be counterproductive toward its goals.

    LLM “agents” currently can’t coherently pursue goals at all, and fine-tuning often wrecks performance outside the fine-tuning dataset, and we’re supposed to believe Agent-4 magically made its goals super unalterable to any possible fine-tuning or probes or alteration? It’s like they are trying to convince me they know nothing about LLMs or AI.