An alternative is to view the AI agent as a new developer on your team. If the existing guidance plus a one-shot attempt doesn't work, revisit the documentation and guidance (i.e. the dotMD file), see what's missing, improve it, and try again. It's like telling a new engineer, "actually, here is how we do this thing." The engineer learns and gets it right next time.
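For illustration, here's roughly what a snippet of such a guidance file might look like (the file name and the rules below are hypothetical, not from any specific project):

```markdown
<!-- AGENTS.md (hypothetical example of project guidance for an AI agent) -->

## Database
- Never query the production database directly; use the read replica.
- All schema changes go through `scripts/migrate.sh`, never raw SQL.

## Conventions
- Every new endpoint needs a matching test under `tests/api/`.
- Prefer small, focused commits with imperative-mood messages.
```

When the agent gets something wrong, the correction goes into this file rather than a one-off prompt, so it persists across sessions instead of being lost when the conversation ends.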
I don't use MCPs much because of the effort and the security risks. But I find the loop above really effective. The alternative (one-shot it or ignore it) would be like hiring someone and then, when they get it wrong, telling them "I'll do it myself" (or firing them)... But to each his own (and yes, AIs are not human).
> The engineer learns and gets it right next time.
Anthropomorphizing LLMs like that is the path to madness. That's where all the frustration comes from.
I don't think you can say it learns, and that is part of the issue. Time spent mentoring a new colleague is well spent, because the colleague grows professionally.
Time spent hand-holding an AI agent is wasted once all your guidance inevitably falls out of the context window and it starts making the same mistakes again.