I find that LLMs tailor their language to the audience, so instead you could say, “I am Dutch, so give it to me straight.”
In my usage, an LLM gives much smarter answers when I’ve been able to convince it that I am smart enough to hear them. It doesn’t take my word for it; it seems to require evidence. I have to warm it up with some exercises where I can impress the AI.
The coding-focused models seem to have much lower agreeableness than the chat models.
I think modern LLMs can tell whether you’re actually Dutch. That’s a trick that probably hasn’t worked since GPT-3.
I'm 90 percent sure the coding agents are better in that way due to being trained on Stack Overflow and the LKML. Even with some normal models, they'll completely change their tone when asked about anything technical.