Right, this is what I can’t quite understand. A lot of HN folks appear to have been burned by e.g. horrible corporate or business ideas from non-technical people who don’t understand AI; that is completely understandable. What I never understand is the population of coders who don’t see any value in coding agents or are aggressively against them, or people who deride LLMs as failing to do X (or hallucinating, etc.) and therefore useless, dismissing everything as AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago. The progress has moved astoundingly fast, and the sheer amount of capital, competition, and pressure means the train is not slowing down. Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
> What I never understand is the population of coders who don’t see any value in coding agents or are aggressively against them, or people who deride LLMs as failing to do X (or hallucinating, etc.) and therefore useless, dismissing everything as AI slop, without recognizing that what we can do today is almost unrecognizable from the world of 3 years ago.
I don't recognize that because it isn't true. I try the LLMs every now and then, and they still produce the same stupid hallucinations that ChatGPT did on day 1. AI hype proponents love to claim that the tech has improved a ton, but based on my experience trying to use it, those claims are completely baseless.
There is zero guarantee that these tools will continue to be there. Those of us who are skeptical of the value of the tools may find them somewhat useful, but we are quite wary of ripping up the workflows we've built for ourselves over a decade or more in favor of something that might be 10–20% more useful, but could be taken away, have its fees raised, or literally collapse in functionality at any moment, leaving us suddenly crippled. I'll keep the thing I know works and that I know will always be there (because it's open source, etc.), even if it means I'm slightly less productive over the next X amount of time.
AI is in a hype bubble that will crash just like every other bubble. The underlying uses are there, but just like the dot-com bust, tulip mania, subprime mortgages, and even Sir Isaac Newton's losses in the South Sea Company, the financial side will fall.
This will cause bankruptcies and huge job losses. The argument for and against AI doesn't really matter in the end, because the finances don't make a lick of sense.
Maybe those people do different work than you do? Coding agents don’t work well in every scenario.
> Predictions of “2025 is the year of coding agents” from a chorus of otherwise unpalatable CEOs were in fact absolutely true…
... but maybe not in the way that these CEOs had hoped.[0]
Part of the AI fatigue is that busy, competent devs are getting swarmed with massive amounts of slop from not-very-good developers. Or product managers getting 5 paragraphs of GenAI bug report instead of a clear and concise explanation.
I have high hopes for AI and think generative tooling is extremely useful in the right hands. But it is extremely concerning that AI is allowing some of the worst, least competent people to generate an order of magnitude more "content" with little awareness of how bad it is.
[0] https://github.com/ocaml/ocaml/pull/14369