Just to be clear, the article is NOT criticizing this. On the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].
Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and for now the gains for the people it's helping are at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise with time, and costs will come down, as we learn to integrate it with best practices.
It’s also pretty wild to me how people still don’t really even know how to use it.
On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.
The other day in real life I was talking to a friend of mine about ChatGPT. They didn’t know you needed to turn on “thinking” to get higher quality results. This is a technical person who has worked at Amazon.
You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early.
For more on this exact topic and an answer to Solow's Paradox, see the excellent "The Dynamo and the Computer" by Paul David [0].
[0]: https://www.almendron.com/tribuna/wp-content/uploads/2018/03...
Ok, this article inspired some positivity in me. Here comes, of course, the disclaimer that this is just "wishful thinking", but still.
So we are in the process of "adapting to a technology". Welcome, keep calm, observe, don't be ashamed to feel emotions like fear, excitement, anger and everything else.
While adapting, we learn how to use it better and better. At first we try "do all the work for me", then "ok, that was bad, plan what you would do, good, adjust, ok do it like this", etc.
A couple of years into the future, this knowledge just gets "passed on". If productivity grows and we "figure out how to get more out of the universe", then no jobs have to be lost, just readapted. And "investors" get happy not by "replacing workers", but by "reaping win-win rewards" from the universe at large.
There are dangers of course, like "maybe this is truly a huge win-win, but some losses can be hidden, like ecological ones", but "I hope there are people really addressing these problems and that this win-win will help them be more productive as well".
FWIW, Fortune had another article this week saying this J-curve for "general purpose technologies" is showing up in the latest BLS data:
https://fortune.com/2026/02/15/ai-productivity-liftoff-doubl...
Source of the Stanford-approved opinion: https://www.ft.com/content/4b51d0b4-bbfe-4f05-b50a-1d485d419...
Paul Strassmann wrote a book in 1990 called "Business Value of Computers" that showed that it matters where money on computers is spent. Only firms that spent it on their core business processes showed increased revenues whereas the ones that spent it on peripheral business processes didn't.
An old office colleague used to tell us there was a time when he'd print a report prepared with Lotus 1-2-3 (ancient Excel) and his boss would verify the calculations on a calculator, saying computers were not reliable. :o
The coding tools are not hard to pick up. Agent chat and autocomplete in IDEs are braindead simple, and even TUIs like Claude are extremely easy to pick up (I think it took me a day?). And despite what the vibers like to pretend, learning to prompt them isn't that hard either. Or, let me clarify: if you know how to code, and you know how you want something coded, prompting them isn't that hard. I can't imagine it'll take that long for an impact to be seen, if there is a major impact to be seen.
I think it's more likely that people "feel" more productive, and/or we're measuring the wrong things (lines of code is an awful way to measure productivity -- especially considering that these agents duplicate code all the time, so bloat is a given unless you actively work to recombine things and create new abstractions).
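As a tiny illustration of why raw line counts flatter agent output, here's a rough sketch in Python that estimates how much of a source tree is just repeated lines. The filtering rules are an arbitrary heuristic I'm making up for the example, not any standard metric:

    # Rough duplication estimate for a Python source tree, as a hint that
    # raw line counts overstate real output. The thresholds are arbitrary.
    import sys
    from collections import Counter
    from pathlib import Path

    def duplicate_ratio(root: str) -> float:
        counts = Counter()
        for path in Path(root).rglob("*.py"):
            for line in path.read_text(errors="ignore").splitlines():
                stripped = line.strip()
                if len(stripped) > 10:  # skip blanks and trivial lines
                    counts[stripped] += 1
        total = sum(counts.values())
        dupes = sum(n - 1 for n in counts.values() if n > 1)
        return dupes / total if total else 0.0

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        print(f"~{duplicate_ratio(root):.0%} of non-trivial lines are repeats")

If the repeat ratio is high, the extra lines didn't necessarily buy extra function, which is the bloat problem above.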
Yet with IT, the bottleneck was largely technical and capital-related, whereas with AI it feels more organizational and cognitive.
Is this like the hotels that first jumped on the wifi bandwagon? They spent lots of money up front for expensive tech. Years later, anyone could buy a cheap router and set it up, so every hotel had wifi. But the original high-end hotels that were first out with wifi, and paid a lot for it, have the worst and oldest wifi and charge users for it, still trying to recoup the costs.
> It wasn't until the mid-to-late 1990's that information technology finally started to show clear benefit to the economy overall.
The 1990s boom was in large part due to connectivity -- millions[1] of computers joined the Internet.
[1] In the 1990s. Today, there are billions of devices connected, most of them Android devices.
I don’t think LLMs are similar to computers in terms of productivity boost.
Wow, I didn’t realize that, but I always suspected it. I was bewildered that anyone got any real value out of any of that pre-VisiCalc (or even VisiCalc) computer tech for business. It all looked kinda clumsy.
One part of the system moving fast doesn't change the speed of the system all that much.
The thing to note is that verifying whether something got done correctly is hard, and it takes time in the same ballpark as doing the work itself.
If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red lights.
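To make "verify quickly" concrete: one cheap option is property-based testing, where you assert properties of the output instead of re-reading the whole implementation. A minimal sketch in Python with the Hypothesis library; slug_from_title is a made-up stand-in for something an agent might have written, not anything from the article:

    # Property-based checks with Hypothesis (pip install hypothesis).
    # slug_from_title is a toy example of agent-generated code.
    from hypothesis import given, strategies as st

    def slug_from_title(title: str) -> str:
        """Make a URL slug: lowercase, alphanumerics joined by hyphens."""
        lowered = title.lower()
        cleaned = "".join(c if c.isalnum() else " " for c in lowered)
        return "-".join(cleaned.split())

    @given(st.text())
    def test_slug_is_url_friendly(title):
        slug = slug_from_title(title)
        # Check properties of the result instead of re-deriving the code.
        assert slug == slug.lower()
        assert " " not in slug
        assert not slug.startswith("-") and not slug.endswith("-")

    @given(st.text())
    def test_slug_is_idempotent(title):
        slug = slug_from_title(title)
        assert slug_from_title(slug) == slug

Run it with pytest and Hypothesis throws a couple hundred generated inputs at it in a second or two. It's nowhere near a proof of correctness, but a handful of properties like these is much cheaper than re-reading the diff line by line, which is the kind of fast verification I mean.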
Productivity may rise with time, and costs may come down. The money is already spent.
Only it's much more exponential.
If things like computer-aided design and improved supply chain management make manufactured goods last longer and cause less waste, I would expect IT to cause productivity to go down. I drive a 15-year-old car and use a 12-year-old PC. It's a good thing when productivity goes down, or stays the same.
> And so we should expect AI to look the same
Is that a somewhat substantiated assumption? I recall learning the history of AI at university in 2001: the initial frameworks were written in the 70s, and the prediction back then was that we would reach human-like intelligence by 2000. Just because Sama came up with this somewhat-breakthrough AI, it doesn't mean equally large improvement leaps will happen on a monthly or annual basis going forward. We may just as well not make another huge leap, or not reach what some call human-level intelligence, in the next 10 years or so.
> it's helping lots of people, but it's also costing an extraordinary amount of money
Is it fair to say that Wall Street is betting America's collective pensions on AI...
The comparison seems flawed in terms of cost.
A Claude subscription is 20 bucks per worker if you use personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.
Larger implementations like automating customer service might be more costly, but I think there are enough supposed short-term benefits that something should be showing up there.