What are some examples where you've given the LLM enough context that it ought to be able to figure out the problem, but it still fails?