> Because AGI is still some years away
For years now, proponents have insisted that AI would improve at an exponential rate. I think we can now say for sure that this was incorrect.
> For years now, proponents have insisted that AI would improve at an exponential rate.
Did they? The scaling "laws" look logarithmic at best: you have to double the training data or model size to get each additional unit of... "intelligence"?
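To make the "logarithmic" point concrete: the published scaling laws are power laws in resources, so each doubling of model size buys only a fixed small fractional drop in loss. A minimal sketch, using the approximate Kaplan et al.-style form L(N) = (Nc/N)^α with illustrative constants (the exact fitted values vary by setup):

```python
# Illustrative power-law scaling of loss vs. parameter count,
# L(N) = (Nc / N) ** ALPHA. Constants are approximate, for illustration only.
ALPHA = 0.076   # roughly the reported exponent for model size
NC = 8.8e13     # illustrative normalizing constant

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return (NC / n_params) ** ALPHA

# Each doubling of parameters multiplies loss by 2**-ALPHA (~0.95),
# i.e. only a ~5% improvement per doubling:
for n in [1e9, 2e9, 4e9, 8e9]:
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Under this curve, constant additive gains require exponentially more parameters or data, which is exactly the inverse of "exponential improvement."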
We're well past the point of believing in creating a Machine God and asking Him for money. LLMs are good at some easily verifiable tasks, like coding against a test suite, and can also be used as a sort of search engine. The former is a useful new product; the latter is just another surface for ads.
The original AGI timeline was 2027-2028; ads are an admission that the timeline is further out.