> Why are you asking the token predictor about the tokens it predicted?
I am surprised by this response, because it implies this is not an extremely valuable technique. I ask LLMs all the time why they did something or produced a particular output, and they usually provide extremely useful information. They help me find where my prompting had conflicting or underspecified requirements. The more complex the agent scenario, the more valuable the agent becomes at debugging itself.
Perhaps in this case the problem with hooks lies in the deterministic Claude Code source code rather than anything under the LLM's control, so it may not have been able to help.