I get that issue constantly. I somehow can't get any LLM to ask me clarifying questions before spitting out a wall of text with incorrect assumptions. I find it particularly frustrating.
"If you're unsure, ask. Don't guess." in prompts makes a huge difference, imo.
In general, spitting out a scrollbar of text in response to a simple question you've misunderstood is not, in any real sense, a "chat".
The way I see it, the long game is to have agents in your life that increasingly memorize and understand your routines and personal facts. Imagine having one agent that knows about cars, and specifically your car: when checkups are due, when you last washed it, etc.; another that knows your hobbies; another that knows your XYZ; and so on.
The more specific they are, the more accurate they typically are.
For GPT at least, a lot of it is because "DO NOT ASK A CLARIFYING QUESTION OR ASK FOR CONFIRMATION" is in the system prompt. Twice.
https://github.com/Wyattwalls/system_prompts/blob/main/OpenA...