You have to convince investors why your AI research won’t hit a wall like LLMs are now - they’ve poisoned the term “AI”
They’re a dead end, insofar as they already do all they’ll ever be able to do; if you can find a use for them at their current level, great, but it does not look likely they will be able to do more than they currently can.
I dunno. Investors are still lined up to invest in AI startups from what I hear. But that isn’t much evidence.
That said, individually, LLMs may have hit a wall. But there is plenty of room to optimize them, and lots of ways to combine them. Their uses are still in their infancy. Take Grafana: it doesn’t support personal API keys, so I can’t give the AI access to test and iterate on solutions yet. Lots of software is like that. The LLM doesn’t need to change; the software we use needs to support it. First with access, then with guardrails like fine-grained access controls, so we can trust that the AI can’t do things we don’t want it to. Then we can really experiment to find out what it can do.
And really, the answer to getting more out of AI is parallelism. So as they optimize it to make it less expensive, we will be able to use parallelism to get more out of it without fundamentally changing it.
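To make the parallelism point concrete, here is a minimal sketch of fanning several LLM calls out concurrently instead of running them one after another. `call_llm` is a hypothetical stand-in for whatever API client you actually use, not a real library function.

```python
# Minimal sketch: fan LLM calls out in parallel rather than serially.
# `call_llm` is a hypothetical placeholder for a real API client.
from concurrent.futures import ThreadPoolExecutor


def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    return f"answer to: {prompt}"


def fan_out(prompts, max_workers=8):
    # LLM calls are I/O-bound, so threads are enough to overlap them;
    # pool.map preserves the order of the input prompts.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(call_llm, prompts))


results = fan_out(["summarize the incident", "draft a fix", "write tests"])
```

As per-call cost drops, the same budget buys more concurrent attempts, which is the whole trick: more throughput without changing the model itself.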
There is a lot of room to grow the uses of the current AIs while we wait for some totally new approach to come along and get us to AGI. We aren’t ready for that now anyway. In 15 or 20 years, maybe we will be.