Hacker News

Coding assistants are solving the wrong problem

115 points by jinhkuan today at 4:25 AM | 65 comments

Comments

micw today at 7:08 AM

For me, AI is an enabler for things you can't do otherwise (or that would take many weeks of learning). But you still need to know how to do things properly in general, otherwise the results are bad.

E.g. I've been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI enabled me to write kinds of software I never learned or never had time for. E.g. I recently re-implemented an Android widget that hadn't been updated for a decade by its original author. Or I fixed a bug in a Linux scanner driver. None of these could I have done properly (within an acceptable time frame) without AI. But also none of them could I have done properly without my knowledge and experience, even with AI.

Same for daily tasks at work. AI makes me faster here, but also lets me do more. Implement tests for all edge cases? Sure, always; before, I'd have skipped that to save time. More code reviews. More documentation. Better quality in the same (always limited) time.

show 7 replies
Quothling today at 7:05 AM

I think AI will fail in any organisation where the business process problems are sometimes discovered during engineering. I use AI quite a lot; I recently had Claude upgrade one of our old services from the HubSpot API v1 to v3 with basically no human interaction beyond the code review. I had to ask it for two changes, I think, but overall I barely had to step out of my regular work to get it done. I knew exactly what to ask of it because the IT business partners who had discovered the flaw had basically written the tasks already. Anyway, AI worked well there.

Where AI fails us is when we build new software to improve the business related to solar energy production and sale. It fails us because the tasks are never really well defined. Or even if they are, sometimes developers or engineers come up with a better way to do the business process than what was planned for. AI can write the code, but it won't push back and ask whether doing X first would be a better idea, the way a developer would. If we only did code reviews, we would miss that step.

In a perfect organisation your BPM people would do this. In the world I live in there are virtually no BPM people, and those who know the processes are too busy to really deal with improving them. Hell... sometimes their processes are changed and they don't realize until their results are measurably better than they used to be. So I think it depends a lot on the situation. If you've got people breaking up processes, improving them, and then describing each little bit in decent detail, then I think AI will work fine; otherwise it's probably not the best place to go full vibe.

show 4 replies
bambax today at 7:59 AM

> Unlike their human counterparts, who would escalate a requirements gap to product when necessary, coding assistants are notorious for burying those requirement gaps within hundreds of lines of code

This is the kind of argument that seems true on the surface, but isn't really. An LLM will do what you ask it to do! If you tell it to ask questions and poke holes in your requirements and not jump to code, it will do exactly that, and usually better than a human.

If you then ask it to refactor some code, identify redundancies, or put this or that functionality into a reusable library, it will also do that.

Those critiques of coding assistants are really critiques of "pure vibe coders" who don't know anything and just try to output yet another useless PDF parsing library before they move on to other things.

show 2 replies
fpoling today at 8:28 AM

I have found that using Cursor to write in Rust what I previously would write as a shell or Python or jq script was rather helpful.

The datasets are big, and having the scripts written in a performant language to process them saves non-trivial amounts of time: waiting just 10 minutes versus an hour.

The initial code style in the scripts was rather ugly, with a lot of repeated code. But with enough prompting (prompts that I reuse), the generated code became sufficiently readable and reasonable that I can quickly check it is indeed doing what was required, and it can be manually altered.
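
(As a rough illustration of the kind of script being described here, a standalone Rust filter replacing a shell/jq pipeline, this is a minimal sketch using only the standard library; the file name, column layout, and field names are invented for the example, not taken from the comment.)

    // Minimal sketch: stream a large tab-separated file in one pass and aggregate one field.
    // "events.tsv" and the column layout are hypothetical, for illustration only.
    use std::fs::File;
    use std::io::{BufRead, BufReader};

    fn main() -> std::io::Result<()> {
        let reader = BufReader::new(File::open("events.tsv")?);

        let mut matching = 0u64;
        let mut total_bytes = 0u64;

        for line in reader.lines() {
            let line = line?;
            let mut fields = line.split('\t');
            let _timestamp = fields.next();             // column 1: unused here
            let status = fields.next().unwrap_or("");   // column 2: status
            let bytes = fields.next().unwrap_or("0");   // column 3: byte count

            if status == "error" {
                matching += 1;
                total_bytes += bytes.parse::<u64>().unwrap_or(0);
            }
        }

        println!("{matching} rows matched, {total_bytes} bytes total");
        Ok(())
    }

Buffered reading and a single compiled pass over the data are roughly what makes this kind of rewrite faster than an interpreted pipeline on large inputs.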

But prompting it to make non-trivial changes to an existing code base was a time sink. It took too much time to explain and correct the output. And critically, the prompts cannot be reused.

Arch-TK today at 8:45 AM

"Experienced developers were 19% slower when using AI coding assistants—yet believed they were faster (METR, 2025)"

Anecdotally I see this _all the time_...

show 1 reply
helloplanets today at 7:44 AM

The writeup is a bit contrived in my opinion, and it sort of misrepresents what users can do with tools like Claude Code.

Most coding assistant tools are flexible enough to support these kinds of workflows, and these sorts of workflows are even brought up in Anthropic's own examples of how to use Claude Code. Any experienced dev knows that the act of specifically writing code is a small part of creating a working program.

rcarmo today at 8:37 AM

I think that the premise is wrong (and the title is very clickbaity, but we will ignore that it doesn’t really match the article and the “conclusion”): coding agents are “solving” at least one problem, which is to massively expand the impact of senior developers _that can use them effectively_.

Everything else is just hype and people “holding it wrong”.

show 1 reply
28304283409234 today at 8:34 AM

I barely use AI as a coding assistant. I use it as a product owner. Works wonders. Especially in this age of clueless product owners.

zmmmmmtoday at 7:48 AM

This concept of bottlenecking on code review is definitely a problem.

Either you (a) don't review the code, (b) invest more resources in review, or (c) hope that AI assistance in the review process increases efficiency there enough to keep up with code production.

But if none of those work, all AI assistance does is bottleneck the process at review.

show 2 replies
foxes today at 7:41 AM

So basically, "AI" (actually LLMs) are decent at what they are trained on: producing plausible text under a bunch of structure and constraints. A lot of programming, boring work emails, Reddit/HN comments, etc. can fall into that. It still requires understanding to know when that diverges from something useful; it is still just plausible text, not some magic higher reasoning.

Are they something worth using up vast amounts of power and restructuring all of civilisation around? No

Are they worth giving more power to megacorps over? No

It's like tech doesn't understand consent, plus the classic case of "disrupting x": thinking that because you know how to solve something in maths, CS, or physics, you can suddenly solve stuff in a completely different field.

LLMs are over-indexed.

newswasboring today at 8:58 AM

Isn't this proposal a close match for the approach OpenSpec is taking? (Possibly other SDD toolkits too; I'm just familiar with this one.) I spend way more time making my spec artifacts (proposal, design, spec, tasks) than I do in code review. During the generation of each of these artifacts the code is referenced, which surfaces at least some of the issues that are purely architectural.

monero-xmr today at 6:00 AM

First you must accept that engineering elegance != market value. Only certain applications and business models need the crème de la crème of engineers.

LLMs have been hollowing out the mid and lower end of engineering, but they have not eroded the highest end. Otherwise all the LLM companies wouldn't pay for talent; they'd just use their own LLMs.

show 6 replies
verdverm today at 6:54 AM

Meh piece; I don't feel like I learned anything from it. Mainly words around old stats in a rapidly evolving field, and then a pitch for their product.

tl;dr content marketing

There is a super interesting post in "new" about agent swarms and how the field is evolving towards formal verification the way airlines do, or at least what ideas we can draw on from there. Anyway, IMO it should be on the front page over this piece:

"Why AI Swarms Cannot Build Architecture"

An analysis of the structural limitations preventing AI agent swarms from producing coherent software architecture

https://news.ycombinator.com/item?id=46866184

show 1 reply
tanveergill today at 9:23 AM

[dead]

zkmon today at 7:01 AM

Wondering why this is on the front page? There is hardly any new insight, other than a few minutes of exposure to a greenish glow that makes everything look brownish after you close the page.

show 1 reply