I think analyses like these are motivated reasoning. In 2000, I'm sure you could have said that after infrastructure costs the internet, and the web, added "basically zero" to US economic growth. And there were people saying that!
Someone I deeply respect, Clifford Stoll, wrote a book called "Silicon Snake Oil — Second Thoughts on the Information Highway" in 1995. And while he was and is a brilliant person, Stoll was wrong.
Smart people are terrible at predicting the most consequential changes in our future – even when they're familiar with the technology. I wrote a bit about my thesis on why here: https://1517.substack.com/p/inside-v-outside-context-problem...
Don't make his mistake. Don't look away from the change being wrought. The world has changed, and our history now has a new, sharp dividing chapter: "Before ChatGPT | After ChatGPT",
and that chapter will go down right next to "Before Trinity | After Trinity"; "Before PC | After PC"; "Before 'Internet' | After 'Internet'"†
† Yes, I know I'm referring to the Web. But we're still using the dark fiber from the .com boom.
I guess this is a trend now because it makes for a contrarian / attention-grabbing headline. See:
- "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...
- “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...
But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US's. This is the Solow paradox / productivity paradox: https://en.wikipedia.org/wiki/Productivity_paradox
> The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth

I think a pretty good example from my own work: we had the option to buy a software package from a 3rd-party company. After reviewing the specs we needed, I told my manager to give me a few hours to see if I could produce what we needed with AI instead. Lo and behold, I was able to do it in just a few hours; the AI-built package was tested, integrated, and we moved on. Nowhere was it recorded that I just saved the company lots of money using AI. I bet there are lots of examples like this that just aren't adequately tracked at both micro and macro levels. For some reason we expected to be able to see these huge gains from AI, but we never bothered putting systems in place to observe them.
This article seems to have "basically zero" content.
Today you have to be blind to not see the change that is coming.
The world has its own (massive) inertia, with the bureaucracy present in businesses accounting for a big part of it.
AI itself is moving fast, but not at infinite speed. We're starting to have good-enough tooling, but it's not yet available to everyone, and it still hangs on too many hacks that will need to crystallize. People have a lot of mess to sort out in their projects before they can take full advantage of AI tooling - in general everybody has to do bottom-up cleanup and documentation of all their projects, set up skills and whatnot - and that's assuming their corp is OK with it and not blocking it, and that "using AI" doesn't just mean "you can copy-paste code to/from Copilot 365".
As people say, something changed around Dec/Jan. We're only now going to start seeing noticeable changes, and the changes themselves will start speeding up as well. But it all takes time.
Why do I have a feeling that this will be dismissed as biased by the people who need to read it the most?
I have an alternative explanation: in the areas where AI is giving employees serious productivity gains, they're working for 20 minutes, playing Wordle/resting/relaxing for 7 hours and 40 minutes, and delivering exactly as much as they were before.
Andrej Karpathy said that major revolutions like the Internet, smartphones, and AI often don’t show up clearly in GDP statistics, even when they radically change how people work. GDP measures total spending, not productivity or usefulness. These revolutions improved efficiency and quality of life, but GDP mostly continued along its long-term trend.
See his interview in Dwarkesh's podcast: https://www.youtube.com/watch?v=c0-0gGdDJyE&t=4983s
There is literally a slew of companies that went from mid-size business to multi-billion-dollar ARR in one year.
And yeah, blah blah they burn money, blah blah. Check the Anthropic CEO's interviews. He openly describes the balancing problem:
- the cost of training a new model
- the ratio of newly built infra devoted to training vs. inference
- market adoption, which despite being extremely quick is not unlimited, since even the market is not unlimited.
Essentially it's a tricky balance: if you don't invest today, you lose tomorrow; if you invest too much, you go bankrupt next year.
the measurement problem here is real. GDP captures output, not latent capacity or quality. an ops team that responds to 200 requests/week with AI at 2x speed doesn't show up in GDP if headcount stays flat. the value is captured in retention, fewer escalations, faster revenue ops cycles -- none of which hit a GDP line directly. the reason AI added zero isn't that it didn't work. it's that we're measuring the wrong thing.
After using OpenClaw for 1 week, I'm so extremely bullish.
Buy buy buy buy.
We don't even have enough data centers.
Trickle down effect reversal: > “A lot of the AI investment that we’re seeing in the U.S. adds to Taiwanese GDP, and it adds to Korean GDP but not really that much to U.S. GDP”
Bottom line: no one's buying your vibeslop when they can create and maintain their own for their custom needs. And if we're not buying each other's vibeslop, there's no productivity to be measured in the economy.
With all this recent Claw stuff, it's weird that, as people who should be championing the opposite given our field of study or industry, some of us are now pushing a method of automation akin to robot vacuums randomly tracking dogshit across the carpet.
In my working environment, people get dressed down for repeatedly communicating incorrect information. If they do it repeatedly in an automated fashion, they will be publicly shamed if they are senior enough.
I have no idea what benefit a human-in-the-loop has for sending automatically generated emails, or for agent-generated SDKs or building blocks, when there is no guarantee or even a probability of correctness attached to the result. The effort of validating and editing a generated email can be equal to or greater than that of manually writing a regular email, let alone one of real complexity or significance.
And what do we do to try to guarantee a semblance of correctness? We add another layer of automated validation performed by, you guessed it, the same crew of wacky fuzzy operators that can inject correct-sounding gibberish into business workflows at any moment.
It's almost like trying to build a house of cards faster than the speed at which it is collapsing. There seems to be a morbid fascination, among even the best of us, with how far things can be taken before this way forward leads to some indisputable catastrophe.
The cover image is just too good. It's just way too good.
There really need to be better metrics for the state of an economy than GDP.
The article refutes itself by saying it's difficult to measure the impact on GDP (and thus, by its own logic, it would have to take a neutral stance on the impact of AI).
I think this is key:
"On top of that, there is currently no reliable way to accurately measure how AI use among businesses and consumers contributes to economic growth."
No doubt people are using it at work ( https://www.gallup.com/workplace/701195/frequent-workplace-c... ); the question is how much productivity results and to whom it accrues.
Partially this is AI capability (both today and in the past), and partially this is people taking time to change their tools.
I think it’s still a bit too early to draw the conclusion.
We need to get past the hype first and let the cash grabbers crash.
After that, with a clear mind we can finally think about engineering this technology in a sane and useful way.
The most interesting thing about this is that the underlying economy is actually stronger than people realize. The narrative has been that AI data center construction was propping up an otherwise weak economy. If this analysis is true, then the economy wasn't being propped up by data center construction at all. The strength was ordinary, underlying strength.
I have no doubt that people will use this to grind an axe about how they think AI is dumb in general, but I feel like that misses the point: this is mostly about data center construction's contribution to GDP.
This is an abbreviated version of a far more nuanced WaPo article:
https://www.washingtonpost.com/technology/2026/02/23/ai-econ...
The AI bros are saying everyone will be out of work in 5 years.
Economists and businesses are calling BS and saying AI is cool, but basically adding zero measurable value with 95% of AI projects failing.
The truth is likely somewhere in the middle, but it seems unlikely this bubble can continue much longer.
Yet the job situation for software developers in the United States is borderline terminal. Interesting.
I'm sure we can find stories from the 1990s about how the "world wide web" hadn't increased GDP at all.
Anyone want to speculate on the Post-AI Bubble world?
When companies can no longer afford to just keep running AI data centers at a loss, we will suddenly have a lot more data centers than we need, who will benefit from these? Who could have use for the hardware for other purposes?
Note: that's last year. The vibes coming from the Claude dungeons tell a different story, just in the last six weeks. We are on the precipice.
I mean, for me, I compare AI today with the introduction of the Apple II. It promises a lot and can do some awesome things, but we are still at the beginning. I'm also amazed at how quickly people just got used to AI. It's still magical, and 5 years ago this was science fiction that people did not think was possible.
[dead]
[dead]
I completely agree. If AI can't do 100% of a job then you can't remove the job.
And most jobs that can be automated already have been automated using traditional software.
I'll do the Minority Report here: I loved the article. The point is that rich people hyping AI for their own enrichment have somewhat shut down rational arguments about benefits vs. costs, the costs being: energy use; the environmental impact of using environmentally unfriendly energy sources out of desperation; water pollution from byproducts of electronics production and recycling, and from water use in data centers; diverting money from infrastructure and social programs; putting more debt stress on society; etc.
I have been a paid AI practitioner since 1982, so I appreciate the benefits of AI. It's just that I hate the almost religious tech belief that real AI will emerge from exponential cost increases in LLM training and inference in exchange for essentially linear gains.
I get that some lazy-ass people have turned vibe coding and development into what I consider an activity sort of like mindlessly scrolling social media.