Hacker News

sitkack · 05/15/2025 · 2 replies

You are just doubling down on protecting your argument.

I operate LLMs in many conversational modes where they ask clarifying questions, probing questions, and baseline-determining questions.

It takes at most one sentence in the prompt to get them to act this way.
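The commenter doesn't quote the sentence they use, but as a minimal sketch of the idea, a single instruction appended to the system prompt is usually enough to change the behavior. The wording, the model name, and the OpenAI client usage below are illustrative assumptions, not the commenter's actual prompt.

```python
# Minimal sketch: one extra sentence in the system prompt asking the model to
# clarify before answering. Wording and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CLARIFY_SENTENCE = (
    "Before answering, ask me clarifying questions whenever my request is "
    "ambiguous or missing details, and wait for my answers."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": f"You are a helpful assistant. {CLARIFY_SENTENCE}"},
        {"role": "user", "content": "Help me plan a trip."},
    ],
)

print(response.choices[0].message.content)
# With an underspecified request like this, the model will typically come back
# with questions (dates, budget, destination) instead of a full itinerary.
```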


Replies

bigcat12345678 · 05/15/2025

> It takes at most one sentence in the prompt to get them to act this way.

What is this one sentence you are using?

I am struggling to elicit clarification behavior from LLMs.

sandspar · 05/15/2025

Could you share your prompt to get it to ask clarifying questions? I'm wondering if it would work in custom instructions.
