• brucethemoose@lemmy.world
    5 days ago

    Not a lot? The quirk is they’ve hyper-specialized these nodes around AI.

    The GPU boxes are useful for some other things, but they will be massively oversupplied, and most of them aren’t networked like supercomputer clusters.

    Scientists will love the cheap CUDA compute though. I am looking forward to a hardware crash.

    • misk@piefed.social
      5 days ago

      That’s what I figured, but I was open to hearing how data centers won’t go bankrupt once the current VC / investor money stops propping up the AI arms race. I’m not even sure lots of the existing hardware won’t go to waste, because there’s seemingly not enough power infrastructure to feed it, and big tech corpos are building nuclear reactors (on top of restarting coal power plants…). Those reactors might be another silver lining, though, similar to cheap compute becoming available for scientific applications.

      • brucethemoose@lemmy.world
        5 days ago

        > because there’s seemingly not enough power infrastructure

        This is overblown. Even if you estimate TSMC’s entire data-center GPU output and assume every chip they make runs at full TDP 100% of the time (which is not true), the net consumption isn’t that high. The local power/cooling infrastructure issues are more about corpo cost cutting.
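        Back of the envelope (purely illustrative: the shipment volume, per-GPU TDP, and US generation total below are all my own rough assumptions, not sourced figures):

        ```python
        # Rough worst-case estimate: assume ~5 million data-center GPUs
        # shipped per year, each drawing a full 1,000 W TDP 24/7.
        # All three inputs are assumptions for illustration only.
        gpus_per_year = 5_000_000      # assumed annual shipment volume
        tdp_watts = 1_000              # assumed per-GPU draw, worst case
        hours_per_year = 24 * 365

        # Continuous draw in GW, annual energy in TWh
        draw_gw = gpus_per_year * tdp_watts / 1e9
        energy_twh = draw_gw * hours_per_year / 1_000  # GW·h -> TWh

        # Compare against total US generation, assumed ~4,000 TWh/year
        us_generation_twh = 4_000
        share = energy_twh / us_generation_twh

        print(f"{draw_gw:.1f} GW continuous, {energy_twh:.0f} TWh/yr, "
              f"~{share:.1%} of US generation")
        ```

        Even with everything pinned at the absolute ceiling, you end up in the low single digits of a percent of one country’s grid, which is why I think the panic is overblown.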

        Altman’s preaching that power use will grow exponentially is a lie that’s already crumbling.

        But there is absolutely precedent for underused hardware flooding the used markets, or getting cheap on cloud providers. Honestly this would be incredible for the local inference community, as it would give tinkerers (like me) actually affordable access to experiment with.