You can treat the LLM's answers as hypotheses about why it did what it did, and test those hypotheses. The hypotheses the LLM comes up with may well be better than the ones you come up with, because the LLM has seen far more text than you have, and in particular far more of its own outputs (e.g. from being trained to use other instances of itself as subagents).
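For concreteness, here is one rough shape such a test could take: ask the model for an answer, ask it why, then perturb the input to remove the factor it claims mattered and check whether the answer changes. This is only an illustrative sketch; the `llm` function and `perturb` hook below are hypothetical placeholders, not any particular API.

```python
def llm(prompt: str) -> str:
    """Placeholder: call whatever model interface you actually use."""
    raise NotImplementedError

def test_self_explanation(task_prompt: str, perturb) -> bool:
    """Treat the model's self-explanation as a hypothesis and test it.

    `perturb` should edit the task prompt to remove or alter the factor
    the model claims drove its answer. If the answer survives the
    perturbation, the explanation is suspect.
    """
    original_answer = llm(task_prompt)

    # Elicit the model's own hypothesis about its behavior.
    explanation = llm(
        f"Prompt: {task_prompt}\n"
        f"Your answer: {original_answer}\n"
        "Why did you answer that way? Name the single most important factor."
    )

    # If the cited factor really caused the answer, removing it
    # should change the answer.
    perturbed_answer = llm(perturb(task_prompt, explanation))
    return perturbed_answer != original_answer  # True = hypothesis supported
```

In practice you would run this over many prompts and perturbations rather than trusting a single comparison, but the core loop is the same: elicit an explanation, derive a prediction from it, and check the prediction against the model's actual behavior.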