So the reaction on lesswrong to Eliezer’s book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren’t as convinced as he is, or because they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer’s AI doom story was formed before deep learning took off, and in fact mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven’t changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn’t give her too much credit, but she does note that this is a major discrepancy for someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that “it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring” is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer’s current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren’t the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn’t give them too much credit; it sounds like they don’t understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8 GPUs thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the Sequences (and scattered across other places like Arbital), but still not as well structured as it could be, or stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer’s focus on an unworkable strategy (shut it all down, 8 GPU max!) with no intermediate steps, goals, or options might not be the best.

  • lurker@awful.systems · 25 days ago

    feels like a good enough place to dump my other observations of this book’s reviews

    -It’s currently sitting at a 3.99 on Goodreads, with 4K+ ratings and 757 reviews

    -higher on Amazon at 4.5, though with fewer reviews, only 313 (I could’ve sworn it was 800 earlier, but whatever)

    -it received several high-profile endorsements, all listed on the Wikipedia page. Only 7 of the endorsers work in the compsci field, and only one of them is an AI expert (Yoshua Bengio)

  • gerikson@awful.systems · 5 months ago

    This comment is gold:

    I particularly agree with the point about the style being much more science-y than I’d expected, in a way that surely filters out large swathes of people. I’m assuming “people who are completely clueless about science and are unable to follow technical arguments” are just not the target audience. To crudely oversimplify, I think the target audience is 120+ IQ people, not 100 IQ people.

    I haven’t read the damn book and I never will, but I have a hard time imagining there’s any modern science that can’t be explained to 100IQ smoothbrains, assuming the author is good enough.

    • Soyweiser@awful.systems · 5 months ago

      To be fair, you have to have a very high IQ to understand Rick and Morty If Anyone Builds It, Everyone Dies. The humor is extremely subtle, and without a solid grasp of theoretical physics most of the jokes will go over a typical viewer’s head. (I’m doing a variant of this meme)

      • scruiser@awful.systemsOP · 5 months ago

        There’s also Eliezer’s nihilistic outlook, which is deftly woven into his parables: his personal philosophy draws heavily from Gödel, Escher, Bach, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of his parables, to realize that they’re not just entertaining, they say something deep about the nature of Intelligence. As a consequence, people who dislike IABIED truly ARE idiots; of course they wouldn’t appreciate, for instance, the motivation behind Eliezer’s existential catchphrase “Tsuyoku Naritai!”, which itself is a cryptic reference to Japanese culture. I’m smirking right now just imagining one of those addlepated simpletons scratching their heads in confusion as Nate Soares’ genius unfolds itself on their copy of IABIED. What fools… how I pity them. 😂 And yes, by the way, I DO have a rationalist tattoo. And no, you cannot see it. It’s for the math pet’s eyes only, and even they have to demonstrate that they’re within 5 IQ points of my own (preferably lower) beforehand.

    • BlueMonday1984@awful.systems · 5 months ago

      I have a hard time imagining there’s any modern science that can’t be explained to 100IQ smoothbrains, assuming the author is good enough.

      Same here. The main things stopping the LWers are that

      (a) what they’re doing is utterly divorced from modern science

      (b) they are godawful writers, to the point where it took years of billionaire funding and an all-consuming economic bubble to break them into the mainstream

  • Architeuthis@awful.systems · 5 months ago

    I’m still not sure they actually grasp the totalitarian implications of going ham on tech companies and research this way. He sure doesn’t get called out about his ‘solutions’, which imply that some sort of world government has to happen, one that will also crown him Grand Central Planner of All Technology.

    It’s possible they just believe the eight [specific consumer electronic goods] per household is doable, and at worst no more authoritarian than the tenured elites thumbing their noses at HBD research.

    • istewart@awful.systems · 5 months ago

      You thought a right-and-proper Communist Five-Year Plan couldn’t also be a self-insert fanfic? Hold Yud’s beer

      • swlabr@awful.systems · 5 months ago

        did you hear about Yudoslavia? They have this Eight GPU policy: if a household is about to have another GPU and they find it can’t run Crysis at max settings at 60fps, they leave it outside for the wolves. But this might just be Yudophobic propaganda

        • scruiser@awful.systemsOP · 5 months ago

          In Eliezer’s “utopian” worldbuilding fiction concept, dath ilan, they erased their entire history just to cover up any mention of any concept that might inspire someone to think of “superintelligence” (and, as an added bonus, purge other wrong-think concepts). The Philosopher Kings Keepers have also discouraged investment and improvement in computers (because somehow, despite not holding any direct power, and despite the massive financial incentives and dath ilan being described as capitalist and libertarian, the Keepers can just sort of say their internal secret prediction market predicts bad vibes from improving computers too much and everyone falls in line). According to several worldbuilding posts, dath ilan has built an entire secret city, funded with 2% of the entire world’s GDP, to solve AI safety in utter secrecy.