Hacker News

mfro · yesterday at 7:21 PM

I think you're misunderstanding the paradigm shift completely -- AI does not just generate code N(x) more quickly. It thinks N(x) faster, it researches N(x) faster, it tests N(x) faster. There are hundreds of tasks that engineers are offloading to AI every day. The major hurdle right now is actually pivoting LLMs beyond just generating code and integrating those tasks into workflows. This is why tool-use and agentic workflows have taken engineering by storm.
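The tool-use loop behind those agentic workflows is simpler than it sounds. A minimal sketch in Python, with the model stubbed out (in practice an LLM API returns structured function-call requests; every tool and name here is illustrative):

```python
# Minimal sketch of an agentic tool-use loop. The "model" is a stub
# that emits one tool call and then finishes; a real agent would call
# an LLM API that returns structured function-call requests.

def run_tests(target: str) -> str:
    """Hypothetical tool: pretend to run a test suite."""
    return f"all tests passed for {target}"

def search_docs(query: str) -> str:
    """Hypothetical tool: pretend to research documentation."""
    return f"top result for '{query}'"

TOOLS = {"run_tests": run_tests, "search_docs": search_docs}

def stub_model(history):
    """Stand-in for an LLM: requests one tool call, then answers."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "run_tests", "args": {"target": "auth module"}}
    return {"answer": "Done: " + history[-1]["content"]}

def agent_loop(task: str, model=stub_model, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        step = model(history)
        if "answer" in step:  # model decided it is finished
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # dispatch the call
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(agent_loop("verify the auth module"))
```

The point of the loop is that "testing" and "researching" become tool dispatches the model drives, rather than text it merely generates.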


Replies

michaelchisari · yesterday at 8:08 PM

Debugging, sanity checking, testing, etc. are the best uses of LLMs. Much better than writing code.

Developers should write their own code and use LLMs to design and verify. Better, faster architecture and planning, pre-cleaned PRs, and no skill atrophy or loss of understanding on the part of the developer.

gerdesj · today at 12:09 AM

"paradigm shift"

A paradigm shift is an earth-shattering, very important change - a complete change in thinking, etc. LLMs are not that. They are simply some pretty new tools. Nice tools, but they will whip off your metaphorical thumb just as quickly as a misused table saw.

You'll note that you mention "engineers are offloading": that's not a paradigm shift. That's a bunch of engineers discovering a better slide rule.

I'm old enough to remember moving on from slide rules (I still have mine) through calculators (ditto) to using fag packets and napkins for their real intended purpose.

The drill-driver also took engineering by storm, but no one ever used the term paradigm shift (to be fair, I don't think the term had been coined at the time, and I can't be arsed to look it up).

oytis · yesterday at 8:51 PM

The article addresses exactly this objection. Most importantly, it cites evidence that AI coding tools have a detrimental effect on software stability - which is basically the raison d'être of our profession. When AI produces more robust software and handles on-call shifts better than humans, I will consider programming done.

zapataband1 · today at 12:31 AM

I think you are misunderstanding something: AI does not think, it is a token-prediction algorithm.

paganel · yesterday at 9:41 PM

> it tests N(x) faster.

It does? You mean "it tests itself faster", which is not really a test now, is it?

imiric · yesterday at 9:21 PM

> The major hurdle right now is actually pivoting LLMs from just generating code: integrating those tasks into workflows.

Funny, I thought that the major hurdle is improving accuracy and reliability, as it's always been. Engineering is necessary and useful, but it's a much simpler problem, which is why everyone is jumping on it.

brcmthrowaway · yesterday at 7:36 PM

True. Knowledge workers are cooked.

pingou · yesterday at 7:43 PM

Not sure why you are downvoted, but I agree. Additionally, perhaps LLMs are just another higher-level programming language, as the author said, and they still need someone to steer them.

I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, that changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

I think LLMs will be (and already are) useful for many more things than programming anyway.

dgellow · yesterday at 8:01 PM

Claude connected to Postgres (read-only, obviously) and Datadog MCP servers, in addition to having access to the codebase, can debug prod issues remarkably quickly. That's easily a 10x win compared to a senior engineer doing the exact same debugging steps. IMHO that's where the actual productivity boost is.
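The "read-only, obviously" part is worth making concrete: the database itself, not the prompt, should refuse writes. A minimal sketch using stdlib sqlite3 as a stand-in for Postgres (with Postgres you'd instead grant the agent's role only SELECT, or set `default_transaction_read_only = on`; all names here are illustrative):

```python
import os
import sqlite3
import tempfile

# Sketch: hand a debugging agent a connection the database refuses to
# let write. sqlite3 stands in for Postgres here; the Postgres
# equivalent is a role granted only SELECT on the relevant schemas.

path = os.path.join(tempfile.mkdtemp(), "prod.db")

# Set up some "production" data over a normal read-write connection.
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
rw.execute("INSERT INTO orders VALUES (1, 'stuck')")
rw.commit()
rw.close()

# The agent only ever sees this connection, opened read-only via URI.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

# Reads work: the agent can inspect state while debugging.
print(ro.execute("SELECT status FROM orders WHERE id = 1").fetchone())

# Writes are rejected by the database engine, not by instructions.
try:
    ro.execute("UPDATE orders SET status = 'fixed' WHERE id = 1")
except sqlite3.OperationalError as e:
    print("write blocked:", e)
```

Enforcing the guard at the connection level means even a confused or prompt-injected agent cannot mutate production state while it debugs.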
