My experience has been
* If I don't know how to do something, LLMs can get me started really fast. They basically compress the time needed to research something down to a small fraction.
* If I know something well, I find myself trying to guide the LLM to make the best decisions. I haven't reached the state of completely letting go and trusting the LLM yet, because it doesn't make good long-term decisions.
* When working alone, I see the biggest productivity boost from AI; that's where I can get things done.
* When working in a team, LLMs are not useful at all and can sometimes be a bottleneck. Not everyone uses LLMs the same way, sharing context as a team is way harder than it should be, people don't want to collaborate, and people can't communicate properly.
* So for me, solo engineers or really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.
My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast; we just waste time in meetings, waiting for approvals, or on tasks that can't be parallelized. Before upgrading, you need to know whether you're I/O-bound or CPU-bound.
The Solow paradox is real, but there's another factor: most current AI tools are glorified autocomplete. You still have to prompt them, check their work, and integrate their output into your workflow.
The real productivity gains will come when AI runs autonomously - handling tasks without constant supervision. Think email triage that just happens, schedules that self-maintain, follow-ups that occur automatically.
Right now we're in the "AI as fancy search" phase. The jump to "AI as autonomous assistant" is where the productivity numbers will start showing up.
I think we are entering the phase where corporate expects more ROI than it is getting, but wants to remain in the arms race.
The firmwide AI guru at my shop, who sends out weekly usage metrics and release notes, started mentioning cost only in the last few weeks. At first it was just about engaging with individual business heads on setting budgets/rules and slowing the cost growth rate.
A few weeks later he is mentioning automated cost reporting, model downgrading, and circuit breaking at a per-user level. The daily spend at which you immediately get locked out for 24 hours is pretty low.
I accept that AI-mediated productivity might not be what we expect it to be.
But really, are CEOs the best people to assess productivity? What do they _actually_ use to measure it? Annual reviews? GTFO. Perhaps more importantly, it's not like anything a C-level says can ever be taken at face value when it involves their own business.
The slow part as a senior engineer has never been actually writing the code. It has been:
- code reviews
- asking stakeholders for their opinions
- SDLC latency (things taking forever to test)
- tickets
- documentation/diagrams
- presentations
Many of these require review. Review hell doesn't magically stop at open-source projects; these things happen internally too.
The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.
Other white-collar business/bullshit-job (à la Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.
Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.
What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.
But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.
My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)
Workers may see the LLM as a productivity boost because they can basically cheat at their homework.
As a CEO, I see it as a massive clog of vast amounts of content that somebody will need to check. A DDoS of any text-based system.
The other day I got a 155-page document on WhatsApp. Thanx. Same with pull requests. Who will check all this?
Original paper: https://www.nber.org/system/files/working_papers/w34836/w348...
Figure A6 on page 45: Current and expected AI adoption by industry
Figure A11 on page 51: Realised and expected impacts of AI on employment by industry
Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry
These seem to roughly line up with my expectation that the more customer-facing or physical-product-oriented your industry is, the lower the usage and impact of AI (construction, retail).
A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.
Many people are using AI as a slot machine, rerolling repeatedly until they get the result they want.
Once the tools help the AI get feedback on what its first attempt got right and wrong, we will see the benefits.
And the models people use en masse - eg. free tier ChatGPT - need to get to some threshold of capability where they’re able to do really well on the tasks they don’t do well enough on today.
There’s a tipping point there where models don’t create more work after they’re used for a task, but we aren’t there yet.
It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.
Perhaps something went wrong along the career path of a developer? Personally, during my education there was a severe lack of actual coding done mid-lecture, especially any sort of showcase of the tools that are available. We weren't even taught how to use debuggers, and I see late-year students still struggling with basic navigation in a terminal.
And the biggest irony is that the "scariest" projects we had at our university ended up being maybe 500-1000 lines of code. Things really must go back to hands-on programming with real-time feedback from a teacher. LLMs only output what you ask for and won't really suggest concepts used by professionals unless you go out of your way to ask for them. It all seems like a vicious cycle, even though meaningful code blocks tend to range from 5 to 100 lines. When I use LLMs I just get information burnout trying to dig through all that info and code.
Including 999 using Copilot.
If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management doesn't even use it, out of laziness.
There was a recent post where someone said AI allows them to start and finish projects, and I find that exactly true. AI agents are helpful for starting proofs of concept, and for doing finishing fixes to an established codebase. For a lot of the work in the middle, it can still be useful, but the developer is more important there.
It's the same reason why I have, for more than a decade, been so frustrated with people refusing to consider proper pair programming and even mob programming, as they view the need to keep people busy churning lines of code individually as the most important part of the company.
That multiple AI agents can now churn out those lines nearly instantly, and yet project velocity does not go much faster, should start to make people aware that code generation is not actually the crucial cost in the time taken to deliver software and projects.
I ranted recently that small mob teams with AI agents may be my view of ideal team setup: https://blog.flurdy.com/2026/02/mob-together-when-ai-joins-t...
At my current job I am in deep net LOC negative despite all new features... Somebody is getting fired and sued for stealing all these LOCs from the company...
If we assume people are somewhat rational (big ask, I know) and that the efficient-market hypothesis holds, then we can estimate the value created by AI to be roughly equal to the revenue of these AI companies. That is: a professional who pays €20/month likely believes that the AI product provides them with roughly €20 each month in productivity gains, or else they wouldn't be paying, and similarly they would pay more for a bigger subscription if they thought there was more low-hanging fruit available to grab.
Of course this doesn't take into account people who just pay to play around and learn, non-professional use cases, or a few other things, but it's a rough ballpark estimate.
Assuming the above, current AI models would only increase productivity for most workplaces by a relatively small amount, perhaps around €10-200 per employee per month. Almost indistinguishable compared to salaries and other business expenses.
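A back-of-envelope version of that comparison, as a sketch (the €10-200 spend range comes from the estimate above; the €5000 monthly all-in labour cost is my own illustrative assumption):

    # Rough sketch only: compares assumed AI spend per employee to an assumed
    # all-in monthly labour cost to show how small the implied gain is.
    monthly_labour_cost = 5_000  # EUR per employee, illustrative assumption

    for ai_spend in (10, 200):   # EUR per employee per month, from the estimate above
        share = ai_spend / monthly_labour_cost
        print(f"{ai_spend} EUR/month is {share:.1%} of labour cost")

    # Prints 0.2% and 4.0%: easy to lose in the noise of salaries and other expenses.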
Maybe the CEOs do not realise that their workers are achieving great productivity, completing their tasks in 1 hour instead of 8, and spending time on the beach, rather than at their desks?
I think the best point made in this conversation is that AI is often enough used to do things quickly that have little value, or just waste people’s time.
I am glad to see articles like this that evaluate impact, but I wish the following would get more public interest:
With LLMs we are chasing sort-of linear growth in capability at exponential cost increases for power and compute.
Were you mad when the government bailed out mismanaged banks? The mother of all government bailouts might be using the US taxpayer to fund idiot companies like Anthropic and OpenAI that are spending $1000 in costs to earn $100.
I am starting to feel like the entire industry is lazy: we need fundamental new research in energy and compute efficient AI. I do love seeing non-LLM research efforts and more being done with much smaller task-focused models, but the overall approach we are taking in the USA is f$cking crazy. I fear we are going to lose big-time on this one.
I think the deluge of projects on Show HN points to something real: it's possible today to ship, as a one-man shop, projects that just a year or so ago would have required a team.
Personally I have noticed strange effects: where I previously would have reached for a software package to make something or solve an issue, it's now often faster for me to write a specific program just for my use case. Just this weekend I needed a reel with a specific look to post on Instagram, but instead of trying to use something like After Effects, I could quickly cobble together a program that used CSS transforms and output a series of images I could tie together with ffmpeg.
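For the last step, a minimal sketch of how a series of rendered frames can be tied together with ffmpeg (frame naming, frame rate, and output file are my assumptions, not the commenter's actual setup):

    # Hypothetical sketch: assemble numbered frames (frame_0001.png, ...) into
    # an MP4 with ffmpeg. Assumes ffmpeg is on PATH and all frames share one size.
    import subprocess

    subprocess.run([
        "ffmpeg",
        "-y",                    # overwrite the output file if it exists
        "-framerate", "30",      # input frame rate
        "-i", "frame_%04d.png",  # numbered image sequence
        "-c:v", "libx264",       # H.264 for broad player support
        "-pix_fmt", "yuv420p",   # pixel format most players (and phones) expect
        "reel.mp4",
    ], check=True)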
About a month ago I was unhappy with the commercial ticketing systems; they were both expensive and opaque, so I made my own. Obviously for a case like that you need discipline and testing when you take people's money, so there was a lot of focus on end-to-end testing.
I have a few more examples like this, but to make this work you need to approach using LLMs with a certain amount of rigour. The hardest part is to prevent drift in the model. There are a number of things you can do to keep the model grounded in reality.
When the tool doesn’t have a reproducer, it’ll happily invent a story and you’ll end up debugging the story. If you ground the root cause in, for example, a failing test, the model gets enough context to actually solve the problem.
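A minimal sketch of that kind of grounding, with made-up names: write a failing test that reproduces the bug before handing it to the agent, so there is something concrete to make pass instead of a story to debug.

    # Hypothetical reproducer (module and function names are invented for
    # illustration). The agent's target is to make this test pass.
    from mylib.pricing import apply_discount

    def test_discount_is_not_applied_twice():
        # Observed bug: the 10% discount ends up applied twice.
        price = apply_discount(100.00, percent=10)
        assert price == 90.00, f"expected 90.00, got {price}"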
Another issue is that you need to read and understand code quickly, but that's no real difference from working with other developers. When tests are passing I usually open a PR to myself and then review it as I normally would.
A prerequisite is that you need tight specs, but those can also be generated if you are experienced enough. You need enough domain intuition to know what ‘done’ means and what to measure.
Personally I think the bottleneck will shift from getting into a flow state to write solutions, to analyzing the problem space and verifying the results.
Large firms are extremely bureaucratic organizations largely isolated from the market by their monopolistic positions. Internal pressures rule over external ones, and thus, inefficiency abounds. AI undeniably is a productive tool, but large companies aren't really primarily concerned with productivity.
I read an article in the FT just a couple of days ago claiming that increased productivity was becoming visible in economic data:
> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.
good for 3 clicks: https://giftarticle.ft.com/giftarticle/actions/redeem/97861f...
Isn't it a bit early to draw such conclusions? We are just getting started with AI use, especially in tech / engineering teams, and have only scratched the surface of what is possible.
I am in strategy consulting and I can tell you the productivity gains are real in terms of research, model building, and summarising work. The result is price pressure from our clients.
The article suggests that AI-related productivity gains could follow a J-curve: an initial decline, as happened with IT, followed by an exponential surge. They admit this is heavily dependent on the real value AI provides.
However, there's another factor. The J-curve for IT happened in a different era: no matter when you jumped on the bandwagon, things just kept getting faster, easier, and cheaper, because Moore's law was relentless. The exponential growth phase of the J-curve for AI, if there is one, is going to be heavily damped by the enshittification phase of the winning AI companies. They are currently incurring massive debt in order to gain an edge on their competition, and whatever companies are left standing in a couple of years are going to have to raise the funds to service and pay back that debt. The investment required to compete in AI is so massive that cheaper competition may not arise, and a small number of winners (or a single one) could put anyone dependent on AI into a financial bind. Will growth really be exponential if this happens and the benefits aren't clearly worth it?
The best possible outcome may be for the bubble to pop, the current batch of AI companies to go bankrupt, and for AI capability to be built back better and cheaper as computation becomes cheaper.
I'm saying it over and over: AI is not killing dev jobs, offshoring is. The AI hype happens to coincide with the end of the pandemic, when lots of companies went to work-from-home and are now hiring cheaper devs around the world.
My experience has been that AI is much more useful on my own systems than on company systems. For AI to (currently) be useful, I need to choose my own tooling and LLM models to support an AI-centered workflow. At work, I have to use whatever (usually Microsoft) tools my company has chosen to purchase and approve for my corporate computer, and usually nothing works as well as on my own machine, where I get to set things up as I want.
I think the 'AI productivity gap' is mostly a state management problem. Even with great models, you burn so much time just manually syncing context between different agents or chat sessions.
Until the handoff tax is lower than the cost of just doing it yourself, the ROI isn't going to be there for most engineering workflows.
I find this difficult to reconcile with things like, for example, freelance translation being basically wiped out wholesale.
Or even the simple utility of having a chatbot. They’re not popular because they’re useless.
Which to me says it’s more likely that people underestimate corporate inertia.
I’m not sure about this. I’ve been 100% AI since Jan 1 and I’m way more productive at producing code.
The non-code parts (about 90% of the work) are taking the same amount of time, though.
General-purpose technologies tend to have long and uneven diffusion curves. The hype cycle moves faster than organizational change.
I like AI and use it daily, but this bubble can’t pop soon enough so we can all return to normally scheduled programming.
CEOs are now on the downside of the hype curve.
They went from “Get me some of that AI!” after first hearing about it, to “Why are we not seeing any savings? Shut this boondoggle down!” now that we’re a few years into the bubble, the business math isn’t working, and all they see is burning piles of cash.
It's not just technology: it's very hard to detect the effect of inventions in general on productivity. There was a paper pointing out that the invention of the steam engine was basically invisible in the productivity statistics:
Yep, just a risk amplifier. We are having a global-warming-level event in computing and blindly walking into it.
It’s funny because at work we have paid Codex and Claude but I rarely find a use for it, yet I pay for the $200 Max plan for personal stuff and will use it for hours!
So I’m not even in the “it’s useless” camp, but it’s frankly only situationally useful outside of new greenfield stuff. Maybe that is the problem?
Every technology, whether it improved existing systems and productivity or not, created new wealth by creating new services and experiences. So that is what needs to happen with this wave as well.
In other words, everybody is benefiting from AI, except CEOs.
As we approach the singularity, things will get noisier and make less and less sense, because rapid change can look like chaos from inside the system. I recommend folks just take a deep breath and look around. Regardless of your stance on whether the singularity is real, or whether AI will revolutionize everything or not, forget all that noise. Just look around you and ask yourself: do things seem more or less chaotic? Are you able to predict better or worse what is going to happen? How far out can your predictions land now versus, say, 10 or 20 years ago? Conflicting signals are exactly how all of this looks: one account says it's the end of the world, another says nothing ever changes and everything is the same as it always was.
BTW the study was from September 2024 to 2025, so it covers only the very earliest adopters.
thank goodness! our jobs are safe lads!!
Look, that's hardly the point, now, is it, CEOs? AI, or at least saying "AI" a lot, makes number go up.
As a small bespoke manufacturer of things made out of metal, I have recently begun implementing a policy of abandoning most online services, including banking (well, almost, since customers can still send me money online, but I have to go to a branch to see or get funds, except for monthly reports). It is awesome. The web brings me customers via two websites and AI-powered searches, but the whole thing is asymmetrical: it has been more than a year since my last online purchase or form, application, etc. Everything is done on paper, in person, or I live without whatever it is. The result is a work environment focused on customers and production; external obligations and requirements are literal, as they must be managed efficiently, in person, and in such a way as to be finished or stable. None of the death-by-1000-emails brain rot. The mental state of having zero knowledge of what is happening on a millisecond-by-millisecond basis, and of letting everything go, and lo, the world grinds on just fine without me and I get a few things done. Mr Solow called it long ago, and my intuition has always been that the busy work was shit; I have now proven that in my one specific circumstance.
Mentioning AI in an earnings call means fuck all when what they’re actually referring to is toggling on the permissions for borderline useless Copilot features across their enterprise 365 deployments, or being convinced to buy some tool that’s actually just a wrapper around API calls to a cheap/outdated OpenAI model with a hidden system prompt.
Yeah, if your Fortune 500 workplace is claiming to be leveraging AI because it has a few dozen relatively tech-illiterate employees using it to write their em dash/emoji riddled emails about wellness sessions and Teams invites for trivia events… there’s not going to be a noticeable uptick in productivity.
The real productivity comes from tooling that no sufficiently risk-averse pubco IS department is going to let its employees use, because when all of their incentives point to saying no to installing anything ever, the idea of granting the permissions required for agentic AI to do anything useful is a non-starter.
Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].
Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970's or 1980's despite all the computerization. It wasn't until the mid-to-late 1990's that information technology finally started to show clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the few people it's helping are currently at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise with time, and costs will come down, as we learn to integrate it with best practices.
[1] https://en.wikipedia.org/wiki/Productivity_paradox