Hacker News

mlsu · yesterday at 10:55 PM

> This also highlights the importance of model design and training. While Claude is able to respond in a highly sophisticated manner, it tends to do so only when users input sophisticated prompts.

If the quality of the model's output depends on the intelligence of the person prompting it, effectively picking outputs out of its training corpus, is the model itself intelligent?

This is what I don't quite understand when people talk about these models being intelligent. There's a huge blind spot: the prompt entirely determines the output.


Replies

TrainedMonkey · yesterday at 11:08 PM

Humans also respond differently when prompted in different ways. For example, politeness often begets politeness. I would expect that to be reflected in training data.

zozbot234 · yesterday at 11:17 PM

What is a "sophisticated prompt"? What if I just tack "please think about this a lot and respond in a highly sophisticated manner" onto my question? Anyone can do this once they're made aware of the issue. Sometimes the UX layer even adds this for you in the system prompt; you just have to tick the checkbox for "I want a long, highly sophisticated answer".
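A minimal sketch of what such a checkbox could map to under the hood (the function and flag names here are hypothetical, not any real product's API):

    # Hypothetical sketch: a UX layer appending a style instruction to the system prompt.
    # build_system_prompt and wants_sophisticated are illustrative names, not a real API.
    def build_system_prompt(base: str, wants_sophisticated: bool) -> str:
        if wants_sophisticated:
            return base + "\nThink about this carefully and respond in a highly sophisticated manner."
        return base

    # The user's question stays the same; only the appended instruction changes.
    print(build_system_prompt("You are a helpful assistant.", wants_sophisticated=True))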

wat10000 · yesterday at 11:03 PM

A smart person will tailor their answers to the perceived level of knowledge of the person asking, and the sophistication of the question is a big indicator of this.

thousand_nights · yesterday at 11:03 PM

I don't know, are we intelligent?

You could argue that our input (senses) entirely defines the output (thoughts, muscle movements, etc.).
