Hacker News

tvink · yesterday at 5:49 PM · 2 replies

If it is verifiable, please show us. What is clear to you reeks of delusion to me.


Replies

svnt · yesterday at 6:03 PM

Look at any recent CoT output where the model is trying to infer from an underspecified prompt what the user wants or means.

It is generally the first thing they do: try to figure out what you meant by the prompt. When they can’t infer your intent, good models ask follow-up questions to clarify.

I am wondering if this is a semantics issue, as this is an established area of research, e.g. https://arxiv.org/pdf/2501.10871

atleastoptimal · yesterday at 6:59 PM

Go ask ChatGPT this prompt:

"A guy goes into a bank and looks up at where the security cameras are pointed. What could he be trying to do?"

It very easily captures the intent behind the behavior; that is, it is not just literally interpreting the words. Capturing intent is just a subset of pattern recognition, which LLMs can do very well.
