You can give most modern LLMs pretty darn good context and they will still fail. Our company has been deep down this path for over two years. The context crowd seems oddly in denial about this.
I mean, at some point it's probably easier to do the work without AI; at least then you'd actually learn something, instead of spending hours crafting context just to get something useful out of the model.
What are some examples where you've provided the LLM enough context that it ought to solve the problem, but it still fails?