
bonesss · today at 8:15 AM

> AI can write the code, but it doesn't refuse to write the code without first being told why it wouldn't be a better idea to…

LLMs combine two dangerous traits: they are uncritical of suboptimal approaches, and they assist unquestioningly. In practice that means doing dumb things a lazy human would refuse to do because they know better, then chasing those rabbit holes until they run out of imaginary dirt.

My estimation is that this combination undermines their productivity potential unless they're applied in a very structured way. Defects get more expensive to deal with the further they travel from the developer's workstation, IIRC by factors of roughly 20x, 50x, and 200x+ as they move through QA and out into customer environments. At those multipliers you don't need many screw-ups to make the whole effort net negative.
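
A quick back-of-envelope sketch of that arithmetic. Only the 20x/50x/200x multipliers come from the comment above; the per-task hours saved and the defect escape probabilities are made-up assumptions for illustration:

```python
# Back-of-envelope model: hours saved per task by LLM-assisted coding vs.
# expected hours lost to defects that escape past the developer's desk.
# All numbers below are illustrative assumptions, not measurements.

HOURS_SAVED_PER_TASK = 2.0   # assumed time saved when the LLM output is fine
FIX_COST_AT_DESK = 1.0       # assumed hours to fix a defect caught locally

# (stage, cost multiplier vs. fixing at the desk, assumed P(defect escapes
# that far)). The multipliers are the 20x/50x/200x figures cited above.
ESCAPE_STAGES = [
    ("QA",       20.0, 0.05),
    ("staging",  50.0, 0.02),
    ("customer", 200.0, 0.01),
]

expected_loss = sum(FIX_COST_AT_DESK * mult * p for _, mult, p in ESCAPE_STAGES)
net = HOURS_SAVED_PER_TASK - expected_loss

print(f"expected hours lost per task to escaped defects: {expected_loss:.1f}")
print(f"net hours saved per task: {net:+.1f}")
```

With those assumed numbers, the expected downstream loss is 1.0 + 1.0 + 2.0 = 4.0 hours per task, so the two hours saved up front come out two hours net negative. The point isn't the specific values; it's that the steep multipliers mean even small escape rates dominate the result.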