
SunshineTheCat · last Thursday at 5:39 PM

I know this won't be popular, but I think the practice of differentiating a "real developer" from one who relies mostly, or even solely, on an LLM is coming to an end. Right now, I fully agree that relying wholly on an LLM and failing to test its output is very irresponsible.

LLMs do make mistakes. They do a sloppy job at times.

But give it a year. Two years. Five years. It seems unreasonable to assume they will hit a plateau that prevents them from building, testing, and shipping code better than any human on earth.

I say this because it's already happened.

It was once thought impossible for a computer to beat a grandmaster at chess.

There was too much "art," experience, and nuance to the game for a computer ever to fully grasp or understand. Sure, there was the "math" of it all, but a computer lacked the human intuition that many thought was essential to winning and could only be achieved through a lifetime of practice.

Many years after Deep Blue beat Garry Kasparov, the best players in the world laugh at the idea of even coming close to beating Stockfish, or any other merely mediocre engine.

I say all of this as a developer of 15 years. This happens over and over throughout history: something comes along to disrupt an industry or profession, people scream about how dangerous or bad it is, and in the end it never matters. Technology is undefeated.


Replies

gitaarik · yesterday at 5:11 AM

Yes, we're already there, and human responsibilities are shifting from engineering to architecting. The AI does the execution; the human makes the decisions. LLMs can never make decisions fully on their own, because they need to be directed by humans; otherwise they drift out of sync with what we actually want.

newsoftheday · last Thursday at 5:42 PM

> There was too much "art," experience, and nuance to the game that a computer could ever fully grasp or understand.

That's the thing, though: AI doesn't understand. It makes us feel like it understands, but it doesn't understand anything.

xmodem · last Thursday at 6:04 PM

What's your point, though? Let's assume your hypothesis holds, and five years from now everyone has access to an LLM that's as good as a typical staff engineer. Is it then acceptable for a junior engineer to submit LLM-generated PRs without having tested them?

> It was thought impossible for a computer to reach the point of being able to beat a grandmaster at chess.

This is oft-cited, but even cursory research shows it was never close to a universally held view.

JackSlateur · last Thursday at 5:48 PM

> This happens over and over again throughout history.

Could you share a single instance of a machine that thinks? Are we sharing the same timeline?