Hacker News

strken · today at 3:47 PM · 6 replies

Human behaviour is goal-directed because humans have executive function. When you turn off executive function by going to sleep, your brain will spit out dreams. Dream logic is famous for being plausible but unhinged.

I have the feeling that LLMs are effectively running on dream logic, and that everything we've done to make them reason properly is still insufficient to bring them up to human level.


Replies

seanmcdirmid · today at 4:07 PM

Isn’t a modern LLM with thinking tokens fairly goal-directed? But yes, we hallucinate in our sleep, while LLMs will hallucinate details if the prompt isn’t grounded enough.

satvikpendem · today at 3:56 PM

A prompt for an LLM is also a form of goal direction, and the model will produce code toward that goal. In the end, it's the human directing it, and the AI is a tool whose code needs review, same as it always has been.

whoamii · today at 3:56 PM

Some of my best code comes from my dreams though.

tsunamifury · today at 3:59 PM

It’s amazing how much you get wrong here: LLM attention layers are themselves stacked goal functions.

What they lack is multi-turn, long-walk goal functions — which is being solved to some degree by agents.

nemo44x · today at 3:58 PM

LLMs are literally goal machines; pursuing goals is all they do. So it's important to give them specific goals to work toward. It's also why, logically, you want to break a problem into many small subproblems with concrete goals.

spiderfarmer · today at 3:57 PM

And yet LLMs are incredibly useful as they are right now.