Hacker News

littlestymaar · today at 11:12 AM

> The thinking machines are still babies, whose ideas aren't honed by personal experience; but that will come, in one form or another.

Some machines, maybe. But attention-based LLMs aren't these machines.


Replies

quantummagic · today at 12:34 PM

I'm not sure. Look at what they're already doing with feedback in code generation: the LLM "hallucinates", generating the wrong idea, then tests its code only to find that it doesn't compile, and goes on to revise its idea and try again.
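That generate-test-revise loop can be sketched in a few lines. This is a toy illustration, not any real agent framework: `generate_code` is a hypothetical stub standing in for an LLM call, and the "compile check" here just byte-compiles a Python string.

```python
def compiles(source: str) -> tuple[bool, str]:
    """Check whether a Python source string compiles; return (ok, error)."""
    try:
        compile(source, "<generated>", "exec")
        return True, ""
    except SyntaxError as e:
        return False, str(e)

def generate_code(prompt: str, feedback: str = "") -> str:
    # Hypothetical stand-in for an LLM call. The first attempt returns a
    # "hallucinated" snippet with a syntax error; once compiler feedback is
    # supplied, it returns a corrected version.
    if not feedback:
        return "def add(a, b) return a + b"  # missing colon
    return "def add(a, b):\n    return a + b"

def generate_with_feedback(prompt: str, max_attempts: int = 3) -> str:
    """Regenerate until the code compiles, feeding errors back each round."""
    feedback = ""
    for _ in range(max_attempts):
        code = generate_code(prompt, feedback)
        ok, error = compiles(code)
        if ok:
            return code
        feedback = error  # the error message becomes the next prompt's feedback
    raise RuntimeError("no compilable code after retries")
```

The point of the sketch is only the shape of the loop: the model's output is checked against an external oracle (here, the compiler), and the oracle's error message is routed back into the next generation attempt.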
