I always hear the AI companies clamoring for gigawatts of “compute” so they can finally “grow” to meet the immense “demand.” But somehow people can just start up clawbot and burn through millions of tokens just fine; I never hear about anyone being denied access to LLM usage. The same goes for businesses: they’re being sold AI crap left and right, and there’s never a bottleneck or a queue. In fact, there seems to be plenty of “compute” to go around, far more than is actually needed.
Has this ever been pointed out to the AI CEOs? Has it been discussed or explained anywhere?


I believe the heaviest demand comes not from running the AI models (inference can be done on a midrange desktop right now), but from constantly training and re-training them on fresh new content stolen from us meatbags.