
nicbou today at 9:46 AM

I get that issue constantly. I somehow can't get any LLM to ask me clarifying questions before spitting out a wall of text with incorrect assumptions. I find it particularly frustrating.


Replies

rahidz today at 12:23 PM

For GPT at least, a lot of it is because "DO NOT ASK A CLARIFYING QUESTION OR ASK FOR CONFIRMATION" is in the system prompt. Twice.

https://github.com/Wyattwalls/system_prompts/blob/main/OpenA...

ash_091 today at 11:48 AM

"If you're unsure, ask. Don't guess." in prompts makes a huge difference, imo.

Pxtl today at 10:23 AM

In general, spitting out a scrollbar of text when asked a simple question that you've misunderstood is not, in any real sense, a "chat".

mk89 today at 11:10 AM

The way I see it, the long game is to have agents in your life that memorize and understand your routine and your facts, more and more over time. Imagine having an agent that knows about cars, and more specifically your car: when the checkups are due, when you last washed it, etc. Another one that knows more about your hobbies, another that knows more about your XYZ, and so on.

The more specific they are, the more accurate they typically are.