You are just doubling down on protecting your argument.
I operate LLMs in many conversational modes where they do ask clarifying questions, probing questions, and baseline-determining questions.
It takes at most one sentence in the prompt to get them to act this way.
Could you share your prompt to get it to ask clarifying questions? I'm wondering if it would work in custom instructions.
> It takes at most one sentence in the prompt to get them to act this way.
What is this one sentence you are using?
I am struggling to elicit clarification behavior from LLMs.