No one is looking at this issue correctly. Staying out of the AI "talent war" is a smart move. AI is due to collapse under its own weight.
1) High-quality training data is effectively exhausted. Token requirements scale roughly linearly with model size, so the next 10× scale model would need roughly 10× more high-quality tokens than currently exist.
2) The Chinchilla rule meets the cost curve. Hardware gets ~2× cheaper every 18 months, but compute budgets for frontier models rise ~4× in that span, so every flagship LLM costs roughly 2× more in dollars than the last, while knock-off models appear years later for pennies (rough arithmetic sketched below). Meanwhile benchmark gains shrink and regulation piles on. Net result: each new dollar spent on the next big LLM buys far less payoff, and the "wait-and-copy" option gets cheaper every day.
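To make the arithmetic concrete, here is a back-of-envelope sketch in Python. Everything in it is an illustrative assumption (the token stock, the 1T-parameter baseline, the $100M base cost), except the ~20 tokens/parameter ratio (the Chinchilla rule of thumb) and the 2×/4× rates stated above:

```python
# Back-of-envelope sketch of points 1) and 2) above.
# Every constant here is an illustrative assumption, except the
# ~20 tokens/parameter ratio (Chinchilla rule of thumb) and the
# 2x / 4x rates from the argument above.

CHINCHILLA_TOKENS_PER_PARAM = 20    # compute-optimal ratio, Hoffmann et al. 2022
ASSUMED_TOKEN_STOCK = 20e12         # hypothetical stock of usable high-quality text

def tokens_needed(params: float) -> float:
    """Compute-optimal training tokens for a model with `params` parameters."""
    return CHINCHILLA_TOKENS_PER_PARAM * params

# Point 1: a 10x larger model needs 10x the tokens.
current_params = 1e12               # hypothetical 1T-parameter flagship
for p in (current_params, 10 * current_params):
    need = tokens_needed(p)
    print(f"{p:.0e} params -> {need:.0e} tokens "
          f"= {need / ASSUMED_TOKEN_STOCK:.0f}x the assumed stock")

# Point 2: compute budgets grow 4x per 18 months while hardware gets
# 2x cheaper, so the dollar cost doubles each generation.
cost = 100e6                        # hypothetical $100M flagship today
for gen in range(1, 5):
    cost *= 4 / 2                   # 4x compute / 2x cheaper FLOPs = 2x dollars
    print(f"+{gen * 18} months: ~${cost / 1e6:,.0f}M")
```

With these assumptions the 10× model overshoots the data stock by 10×, and the dollar cost of a flagship doubles every generation: $200M, $400M, $800M, $1.6B.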
I agree, but it doesn't seem to be intentional on their part.
I usually do not agree with Amazon leadership (well, recently they haven't been "Right, A Lot"!).
But I agree with a statement Matt Garman gave recently.
It's because AI usually creates slop, and without review that slop builds up. We don't have an infinite context window to absorb it either (and even if we did, context rot has been confirmed). Also, on average, Indian non-tech employees who manage thousands of spreadsheets or manually monitor your in-store cameras are much cheaper than the "tokens" and the NVIDIA GPUs you can throw at the problem, at least for now and for the foreseeable future.
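A rough sketch of that cost comparison, where every number is a made-up assumption (wage, token price, tokens per item, rework factor) chosen only to show the shape of the calculation, not real pricing or wage data:

```python
# Purely illustrative human-vs-LLM cost comparison.
# Every figure below is a hypothetical assumption.

MONTHLY_WAGE_USD = 500        # assumed back-office salary
HOURS_PER_MONTH = 160
ITEMS_PER_HOUR_HUMAN = 30     # spreadsheets / camera feeds checked per hour

TOKEN_PRICE_PER_M_USD = 10.0  # assumed blended $/1M tokens
TOKENS_PER_ITEM = 20_000      # assumed context + output per item
REWORK_FACTOR = 2.0           # assumed retries/review passes due to slop

human_cost = MONTHLY_WAGE_USD / (HOURS_PER_MONTH * ITEMS_PER_HOUR_HUMAN)
llm_cost = TOKENS_PER_ITEM * REWORK_FACTOR / 1e6 * TOKEN_PRICE_PER_M_USD

print(f"human: ${human_cost:.2f}/item vs LLM: ${llm_cost:.2f}/item")
# With these assumptions the human is ~4x cheaper per item;
# flip the constants and the conclusion flips with them.
```

The rework factor is the point: slop that needs a second pass doubles the token bill, while the human reviewer's cost stays flat.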