> This also highlights the importance of model design and training. While Claude is able to respond in a highly sophisticated manner, it tends to do so only when users input sophisticated prompts.
If the model's output depends on the intelligence of the person prompting it, effectively picking outputs out of its training corpus, is the model intelligent?
This is kind of what I don't understand when people talk about the models being intelligent. There's a huge blind spot, which is that the prompt entirely determines the output.
What is a "sophisticated prompt"? What if I just tack "please think about this a lot and respond in a highly sophisticated manner" onto my question? Anyone can do this once they're made aware of the issue. Sometimes the UX layer even adds this for you in the system prompt; you just tick the checkbox for "I want a long, highly sophisticated answer".
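To make that concrete, here's a minimal sketch of the kind of UX-layer prompt augmentation described above. Everything in it is hypothetical (the flag name, the injected instruction text, the payload shape); real frontends differ, but the mechanism is just string concatenation onto the system prompt.

```python
def build_request(question: str, sophisticated: bool = False) -> dict:
    """Assemble a chat request, optionally injecting a 'be sophisticated'
    instruction on the user's behalf. All names here are illustrative."""
    system = "You are a helpful assistant."
    if sophisticated:
        # The checkbox just tacks extra instructions onto the system prompt.
        system += " Think about this a lot and respond in a highly sophisticated manner."
    return {
        "system": system,
        "messages": [{"role": "user", "content": question}],
    }

if __name__ == "__main__":
    print(build_request("Why is the sky blue?", sophisticated=True))
```

The point being: the "sophistication" knob is nothing the user couldn't have typed themselves.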
A smart person will tailor their answers to the perceived level of knowledge of the person asking, and the sophistication of the question is a big indicator of this.
I don't know, are we intelligent?
You could argue that our input (senses) entirely determines our output (thoughts, muscle movements, etc.).
Humans also respond differently when prompted in different ways. For example, politeness often begets politeness. I would expect that to be reflected in the training data.