Alternatively, I've gotten exactly what I wanted from an LLM by giving it information that would not be enough for a human to work with, knowing that the LLM is just going to fill in the gaps anyway.
It's easy to forget that the conversation itself is what the LLM is helping to create. Humans will ignore or deprioritize extra information, but they also need it to get a loose sense of what you're looking for. The LLM is much more easily influenced by any extra wording you include, and loose guidance is likely to become strict guidance.
Yeah, it's definitely not a human! But in my experience, problems in your context are often quite obvious once you look at them through a human lens.
Maybe not very often in a chat context; my experience is in trying to build agents.