Hacker News

IanCal · yesterday at 7:27 AM

> it's just auto-completing. It cannot reason

Auto-completion just means predicting the next thing in a sequence. That does not preclude reasoning.
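
To be concrete about what "predicting the next thing" means, here's a minimal sketch of the autoregressive loop (`model` and `tokenizer` are hypothetical stand-ins, not any real library's API):

    # Hypothetical sketch: auto-completion as next-token prediction.
    # `model.next_token_probs` and `tokenizer` are invented names for
    # illustration; greedy decoding shown, real systems usually sample.

    def complete(model, tokenizer, prompt, max_new_tokens=50):
        tokens = tokenizer.encode(prompt)
        for _ in range(max_new_tokens):
            probs = model.next_token_probs(tokens)  # P(next token | tokens so far)
            next_token = max(range(len(probs)), key=lambda t: probs[t])
            tokens.append(next_token)
        return tokenizer.decode(tokens)

Nothing in that loop says anything about what the probability function can or can't compute internally.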

> I don't get why you would say that.

Because I see them solve real debugging problems all the time, talking through the impact of code changes line by line to find non-obvious errors with ordering and timing conditions in code they've never seen before.
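
To give a flavour of the kind of bug I mean, here's a minimal Python sketch of a timing-dependent error (the numbers are invented for illustration):

    import threading

    # `count += 1` compiles to a read, an add, and a write, so two
    # threads can interleave between the read and the write and lose
    # increments. Whether you observe it depends on timing.

    count = 0

    def worker():
        global count
        for _ in range(100_000):
            count += 1  # not atomic: load, add, store

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Timing-dependent: may print less than 400000, because
    # interleaved read-modify-write cycles silently drop updates.
    print(count)

Spotting this class of problem requires tracking the possible interleavings, not just matching the surface text of the code.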


Replies

notepad0x90 · yesterday at 1:44 PM

> This does not preclude reasoning.

It does not imply it either. To claim reasoning you need evidence. For example, it would need to reliably NOT hallucinate results in simple conversations if it had basic reasoning.

> Because I see them solve real debugging problems all the time, talking through the impact of code changes line by line to find non-obvious errors with ordering and timing conditions in code they've never seen before.

Programming languages and how programs work are extensively and abundantly documented; solutions to problems, and how to approach them, have been written up all over the internet. The model takes all of that data and completes the right text by following the most likely pathway given your input. It does not actually take your code and debug it.

It is the sheer volume of data it uses, and the computational resources behind it, that make it hard to wrap your head around the difference between guessing and understanding. You too could look at enough Stack Overflow and (poorly) guess answers to questions without understanding anything about the topic, and if you guessed enough you would get some right. LLMs are just optimized to make the number of correct responses high.
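
As a deliberately crude sketch of answering-by-lookup rather than understanding (the table contents here are invented for illustration):

    # Toy illustration of guessing from seen answers: a lookup table
    # keyed on error-message fragments. It gets some answers right
    # purely from the volume of prior examples, with zero comprehension.

    CANNED_ANSWERS = {
        "IndexError": "Check that your index is within range(len(seq)).",
        "KeyError": "Use dict.get() or check `key in d` before indexing.",
        "NameError": "The variable is probably misspelled or not yet defined.",
    }

    def guess_answer(error_message):
        for fragment, answer in CANNED_ANSWERS.items():
            if fragment in error_message:
                return answer
        return "Have you tried restarting it?"

    print(guess_answer("KeyError: 'user_id'"))
    # -> "Use dict.get() or check `key in d` before indexing."

Scale the table up by a few billion entries and smooth it into a probability model, and the output starts to look a lot more impressive, but the mechanism is the same.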
