Are you employed at Google or OpenAI? Are you working on these frontier models?
In the case of medical questions, it needs to know further details to provide a relevant diagnosis; that is how it was trained.
In other cases, you can observe its reasoning process to see why it decided to request further details.
I have never seen an LLM ask questions just for the sake of asking; the questions are always relevant in context. And I don't use these models casually: I just wrote a couple of handbooks (~100 pages in a few days), generating tens of thousands of tokens per session with Gemini.
Typical patterns to look out for (a rough detection sketch follows the list):
- "Should I now give you the complete [result], fulfilling [all your demands]?"
- "Just say [go] and I will do it"
- "Do you want either [A, B, or C]"
- "In [5-15] minutes I will give you the complete result"
...
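
If you want to flag these stalls automatically (say, in a long batch session like the handbook runs above), loose heuristics are enough. A minimal Python sketch, where the regexes and the `looks_like_stalling` helper are my own illustrative names rather than anything from an existing tool:

```python
import re

# Loose heuristics for the stalling patterns listed above; exact wording
# varies by model, so treat these as illustrative, not exhaustive.
STALL_PATTERNS = [
    r"\bshould i (now )?(give|provide|write) you the (complete|full)\b",
    r"\bjust say\b.*\band i will\b",
    r"\bdo you want (either )?\b.*\bor\b",
    r"\bin \d+(\s*-\s*\d+)? minutes i will\b",
]

def looks_like_stalling(reply: str) -> bool:
    """Return True if a model reply matches any of the stalling heuristics."""
    text = reply.lower()
    return any(re.search(p, text) for p in STALL_PATTERNS)

if __name__ == "__main__":
    sample = "Just say GO and I will generate the complete handbook chapter."
    print(looks_like_stalling(sample))  # True
```

When a reply trips one of these, you can simply answer "yes, continue" (or resend the original instruction) instead of reading the whole confirmation prompt.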