I find it a bit odd that people are acting like this stuff is an abject failure because it's not perfect yet.
Generative AI, as we know it, has only existed ~5-6 years, and it has improved substantially, and is likely to keep improving.
Yes, people have probably been deploying it in spots where it's not quite ready, but it's myopic to act like it's "not going all that well" when it's pretty clear that it actually is going pretty well; we just need to work out the kinks. New technology is always buggy for a while, and eventually it becomes boring.
We implement pretty cool workflows at work using "GenAI" and the users of our software are really appreciative. It's like saying a hammer sucks because it breaks most things you hit with it.
> Generative AI, as we know it, has only existed ~5-6 years, and it has improved substantially, and is likely to keep improving.
I think the big problem is that the pace of improvement was UNBELIEVABLE for about 4 years, and now it appears to have slowed to almost nothing.
ChatGPT has barely improved in, what, 6 months or so.
They are driving costs down incredibly, which is not nothing.
But here's the thing: they're not cutting costs because they have to. Google has deep enough pockets.
They're cutting costs because, at least within the current paradigm, spending more doesn't buy material improvements.
So unless there's a paradigm shift, we're not going to see MASSIVE improvements in output like we did in previous years.
You could see costs go down to 1/100th over 3 years, seriously.
But they need to make money, so it's possible none of that will be passed on.
> and is likely to keep improving.
I'm not trying to be pedantic, but how did you arrive at 'keep improving' as a conclusion? Nobody is really sure how this stuff actually works. That's why AI safety was such a big deal a few years ago.
Because the likes of Altman have set short-term expectations unrealistically high.
> Generative AI, as we know it, has only existed ~5-6 years, and it has improved substantially, and is likely to keep improving.
Every two or three months we hear there's a new model that just blows the last one out of the water for coding. Meanwhile, here I am with Opus and Sonnet for $20/mo, and they regularly fail at basic tasks, with Antigravity getting stuck in loops and burning credits. We're talking "copy basic examples and don't hallucinate APIs" here, not deep, complicated system design topics.
It can one-shot a web frontend, just like v0 could in 2023. But that's still about all I've seen it work on.