- Machine learning models will continue to improve their output somewhat, but gains will be incremental and the intrinsic problems with ML-derived content (e.g. hallucinations, context window limitations, long-term coherency) will remain
- Open-source models will catch up with commercial ones
- The smaller ML companies (like OpenAI and Anthropic) will be absorbed, probably by Microsoft and Amazon
- The increasing cost of hardware and energy will force companies to raise prices for ML subscriptions and eventually lock ML features behind paywalls
- Computer parts will remain expensive for a long time
- Programmers will collectively spend the next decade wrestling with the consequences of filling their codebases with millions of lines of AI-generated code
- Google images will never fully recover
You forgot the most important part: it will be infected with ads. Ask it what the best dish soap is? Why, it’s obviously either Fairy or Dawn, depending on which brand paid more that week.
I think it’ll end up like any industry with machine-made options. There will be a spectrum of products, from 100% human made to majority machine made.
There will be a few bespoke artists doing interesting things for the wealthy and the passionate. But, for most of society, the mass produced stuff is fine.
Take clothes: how many of yours were hand made vs. machine made? Cobblers are hand-making shoes the world over; were yours hand made? I have some hand-knitted wool stuff (because I’m passionate about wool), but my Levi’s are machine made. Shoes, motorcycle gear…
Furniture. There’s cabinet makers the world over doing beautiful pieces of work, but I got most of my stuff from IKEA. How about you?
It’ll end up like any other industry with machine made options. The bubble will burst, don’t get me wrong, but after the .com bubble burst we still had the internet.
One of the top posts of fuckai right now is a bottle of olive oil. Now I’m not yucking their yum, I just have different things I wanna do with my day than stare at someone’s olive oil bottle. Not better; I’m glad they have the free time and mental effort to do that, and pondering mass-produced labels is their jam, I support it. I just wanna do different things, and I expect the world is going to want to do different things too.
LLMs are a dead end, and the massive amounts of money being wasted on them will make people too scared to invest in other forms of AI.
So we are currently at a local maximum that we won’t overcome in 10 years. It will take much longer before we try a different approach to create “AGI,” and the wasted money on LLMs will slow other forms of AI research, leaving us stagnating for more than 10 years.
I think that all depends on what else there is to invest in. In general, as terrible as AI is, it’s carrying the stock market. Investors need something else to turn to before they’ll divert away from AI.
I’m not convinced that investors would know the difference between a company trying to improve LLMs vs. one taking a new approach, so I don’t think it will stifle investment in other forms of AI research.
I also don’t think they are a dead end overall. They sure aren’t likely to get to AGI, but you don’t need AGI to be useful.
You have to convince investors why your AI research won’t hit a wall like LLMs are now - they’ve poisoned the term “AI”
They’re a dead end insofar as they already do all they’ll ever be able to do; if you can find a use for them at their current level, great, but it does not look likely they will be able to do more than they currently can.
I dunno. Investors are still lined up to invest in AI startups from what I hear. But that isn’t much evidence.
That said, individually, LLMs may have hit a wall. But there is plenty of room to optimize them, and lots of ways to combine them. Their uses are still in their infancy. Take Grafana: it doesn’t support personal API keys, so I can’t give the AI access to test and iterate on solutions yet. Lots of software is like that. The LLM doesn’t need to change; the software we use needs to support it. First with access, then with guardrails like fine-grained access controls so we can trust that the AI can’t do things we don’t want it to. Then we can really experiment to find out what it can do.
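To be concrete, the kind of guardrail I mean is a thin allow-list wrapper sitting between the model and a tool’s HTTP API. This is only a minimal sketch: the base URL, token, and the two endpoints on the allow list are placeholder assumptions, not a real Grafana integration.

```python
# Minimal sketch of a guard-railed tool for an LLM agent: the model can only
# call read-only endpoints that are explicitly on the allow list.
# The base URL, token, and endpoint list below are hypothetical placeholders.
import requests

BASE_URL = "https://grafana.example.internal"
TOKEN = "read-only-service-account-token"  # scoped, read-only token (assumption)

ALLOWED_ENDPOINTS = {
    "/api/health",   # is the instance up?
    "/api/search",   # list dashboards
}

def grafana_readonly(endpoint: str, params: dict | None = None) -> dict:
    """Tool exposed to the model: refuses anything not on the allow list."""
    if endpoint not in ALLOWED_ENDPOINTS:
        raise PermissionError(f"{endpoint} is not on the allow list")
    resp = requests.get(
        f"{BASE_URL}{endpoint}",
        params=params,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

The point isn’t the specific endpoints; it’s that the wrapper, not the model, decides what can be touched, so you can let it iterate without trusting it.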
And really, the answer to getting more out of AI is parallelism. So as they optimize it to make it less expensive, we will be able to use parallelism to get more out of it, without fundamentally changing it.
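By parallelism I mean something like fanning out several attempts at the same task and keeping one that passes a check. A minimal sketch, where ask_model() and looks_valid() are hypothetical stand-ins for whatever model API and verification step you actually use:

```python
# Rough sketch of the fan-out idea: run N attempts concurrently and keep the
# first answer that passes a check. ask_model() and looks_valid() are
# hypothetical stubs, not a real model API or verifier.
import asyncio

async def ask_model(prompt: str, attempt: int) -> str:
    # Placeholder for a call to whatever LLM API you use.
    await asyncio.sleep(0)  # pretend network latency
    return f"candidate answer {attempt} for: {prompt}"

def looks_valid(answer: str) -> bool:
    # Placeholder verification step (run tests, lint, schema check, etc.).
    return len(answer) > 0

async def best_of(prompt: str, n: int = 5) -> str | None:
    candidates = await asyncio.gather(*(ask_model(prompt, i) for i in range(n)))
    for answer in candidates:
        if looks_valid(answer):
            return answer
    return None

if __name__ == "__main__":
    print(asyncio.run(best_of("summarise last night's alerts")))
```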
There is a lot of room to grow the uses of the current AIs while we wait for some totally new approach to come along and get us to AGI. We aren’t ready for that now anyway. In 15 or 20 years, maybe we will be.
That we will all be forced to adopt it whether we like it or not, in scummy ways, and those that don’t will be unfairly seen as “boomers”, when in reality they are the last people to genuinely do their work with love and care.
Narrow AI will get better, even faster than normal because of the research that big AI companies are doing now, but attempts at more general AI will stop being profitable.
General “AI” is not profitable at all, even right now. Raising money is not the same as making a profit.
Improvement stagnates.
Venture capital availability reduces.
Mag 7 try to monetise to continue development.
Business adoption is tepid as long term heavy use reduces skills and productivity.
Some VC fund learns from a credible whistleblower that generative AI is not a pathway to AGI. Revalues their portfolio, enters administration.
The ensuing fallout triggers a global depression.
Bubble will burst, many AI companies will go under, the ones that remain will have to price themselves out of reach of most people. Lack of investor confidence will trigger a third AI winter, which will affect even actual valuable uses of machine learning models and the further development of locally-run models. People who graduated college between 2023 and 202X will have a harder time getting a job. AGI will still be a far-off dream.
The only AI companies that will exist in 10 years will be those started by a large company that has other, unrelated profit streams, such as Microsoft, Google, Amazon, etc. All others will fail. Some will be bought by the big players if they develop a unique technology; otherwise they all go broke.
If I had to guess, there will be only two major AI/LLM companies in existence. The nature of LLMs makes it unlikely that small companies and organizations can scale one to be profitable.
Micron comes back to the consumer market, but has to rebrand due to the ire of consumers for them being assholes. Same with Western Digital, although they have not “technically” left the consumer market.
The next 5 years will be spent by people trying to find SOMETHING for AI to do. Some very high-end uses in research or academics will be found. However, those will cost massive amounts of money and only be available to governments, large corporations, and academic institutions. Consumers will be left with creating images, music, and a few other parlor tricks, but there will be nothing of any true value offered. In the meantime, AI images and videos will be used to exacerbate the societal/cultural issues across the globe, until the population becomes so jaded and cynical that this media loses efficacy. By that time enormous damage will have been done.
Consumers will also be left paying for the electricity, water, and other resources that the remaining data centers will consume.
I’m currently looking heavily into installing solar on my home, with a battery backup, just because of these stupid data centers. It’s just a matter of time before these things start causing issues on the grid.
LLMs will be a standard part of software tooling like IDEs, and people won’t talk about them much anymore.
LLMs and image/video generation will be a standard part of adult entertainment.
Bubble go burst
Well, assuming some AGI breakthrough doesn’t happen (which would, in my opinion, require a vastly different approach than LLMs), we will see more of this AI swarm type stuff. Essentially you end up with a bunch of specialized AIs, plus some AI coordinators. The AI we talk to will just farm the work out to other AIs, including ones specialized in verifying the work that gets done (rough sketch after the numbers below).
Most people pre-AI did work that was, say, 60% implementation, 30% figuring out what needs to be done, and 10% verifying what was done. That will shift to 15% implementation, 50% requirements gathering, and 35% verification.
Obviously those numbers are just to show the shift, not intended to be an accurate representation of how our work is currently divided. Overall, if you give AI a way to verify what it is doing and let it iterate, it is far more useful than just telling it to do a thing or asking it a question.
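Here’s the sketch I mean by the swarm/coordinator pattern, with every model call stubbed out; the specialist names, routing, and verify step are all placeholders, not a real framework:

```python
# Rough sketch of the "swarm" pattern: a coordinator farms a task out to
# specialist models, and a separate verifier decides whether to iterate.
# All model calls here are hypothetical stubs, not a real framework.

SPECIALISTS = {
    "code": lambda task: f"[code specialist output for: {task}]",
    "docs": lambda task: f"[docs specialist output for: {task}]",
}

def verify(task: str, result: str) -> bool:
    # Placeholder verifier: in practice another model plus tests/checks.
    return "output for" in result

def coordinator(task: str, kind: str, max_iterations: int = 3) -> str:
    specialist = SPECIALISTS[kind]
    result = specialist(task)
    for _ in range(max_iterations):
        if verify(task, result):
            return result
        # Feed the failure back and try again (the iteration is the whole point).
        result = specialist(f"{task} (previous attempt failed verification)")
    raise RuntimeError("no verified result after max_iterations")

print(coordinator("add retry logic to the fetcher", "code"))
```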
Even if LLMs are a dead end, the amount of money going into creating AI dramatically pulls forward the date of its existence. We just emulated a fly brain, and they’re doing a mouse next. This is a somewhat undeniable path to real intelligence, and it will happen soon, even if the compute needed for a human brain turns out to be immense; the fly work basically proved all you need is the connectome, so uh, yeah. If someone figures out how to automate the work that was done on the fly at a larger scale, we’ll be there real quick.
LLMs will go the way of NFTs. No AGI will exist yet.
People hate LLMs because of their unreliability, and they are right. But AI is a much more vast field.
As soon as we have more reliable, causal and general intelligence, the opinions will change.
I personally believe that humans have no clue how limited our brain power is. So much so that there will be no AGIs. Only ASIs. Same thing that happened with chess bots.







