
autoexec · yesterday at 5:55 PM

> I wonder to what extent this surprise is because people tend to think very deeply when writing software and assume thinking and "reasoning" are what produce quality software.

It takes deep thought and reasoning to produce good code. LLMs don't think or reason. They don't have to, though, because humans have already done all of that for them; they just have to regurgitate what humans have already produced. Everything good an LLM outputs came from the minds of humans who did the real work. Sometimes they can assemble bits of human-written code in ways that do something useful, just like someone copying and pasting code from Stack Exchange without understanding any of it can occasionally slap together something that works.

LLMs are a neat party trick. It can be surprising to see what they manage and fun to see where they fail, but none of it says much about what it means to think and reason, or even what it means to write software.