The models themselves represent the biggest deflation case I've ever seen.
The price charged for a frontier model is ~200x lower than it was 2 years ago, and the models we are using now are much better - although measuring that, and by how much, is challenging. Building a "better than GPT-4" model is also vastly cheaper than building GPT-4 was... perhaps 1/100th the cost?
This is a great point that wasn't included in the original article. Thank you.