Hacker News

rahidz · today at 12:23 PM · 3 replies

For GPT at least, a lot of it is because "DO NOT ASK A CLARIFYING QUESTION OR ASK FOR CONFIRMATION" is in the system prompt. Twice.

https://github.com/Wyattwalls/system_prompts/blob/main/OpenA...
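By contrast, when you call the API directly you author the system message yourself, so that instruction never enters the context. A minimal sketch, assuming the official openai Python SDK (v1.x); the instruction text and prompts here are illustrative, not OpenAI's actual wording:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Over the raw API the caller supplies the entire message list,
    # so the consumer ChatGPT prompt linked above never applies.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "If a request is ambiguous, ask one clarifying "
                        "question before answering."},
            {"role": "user", "content": "Set up the deployment."},
        ],
    )
    print(resp.choices[0].message.content)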


Replies

siva7 · today at 3:52 PM

So is this system prompt always there, whether I'm using ChatGPT or Azure OpenAI with my own provisioned GPT? That would explain why ChatGPT is a joke for professionals, since asking clarifying questions is the core of professional work.

briHass · today at 2:13 PM

It's interesting how much focus there is on 'playing along' with any riddle or joke. This gives me some ideas for my personal context prompt to assure the LLM that I'm not trying to trick it or probe its ability to infer missing context.
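For example, a short addition along these lines might help, though this is just a sketch and would need testing against the model's built-in instructions:

    I am not trying to trick you or test you with riddles. If my request
    is ambiguous or missing context, ask me a clarifying question instead
    of guessing or playing along.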

benterix · today at 12:59 PM

Out of curiosity: when you add custom instructions client-side, does it change this behavior?
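One way to check empirically: send the same ambiguous request with and without such an instruction and compare the responses. A rough sketch, again assuming the openai Python SDK; the prompt strings are illustrative:

    from openai import OpenAI

    client = OpenAI()
    AMBIGUOUS = "Fix the bug in my sorting function."
    INSTRUCTION = ("If a request is ambiguous, ask one clarifying "
                   "question before answering.")

    # Compare behavior with and without the custom instruction.
    for system in (None, INSTRUCTION):
        messages = [{"role": "system", "content": system}] if system else []
        messages.append({"role": "user", "content": AMBIGUOUS})
        resp = client.chat.completions.create(model="gpt-4o",
                                              messages=messages)
        label = "with" if system else "without"
        print(f"--- {label} instruction ---")
        print(resp.choices[0].message.content)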
