Simple: you can ask an LLM why it did something and get a good explanation, which will help you avoid the bad behavior next time.
Is that reasoning? Does it know? I might care about those questions in another context, but here I don't have to. It simply works (not all the time, but increasingly so with better models, in my experience).
Nah, many times I've asked Claude about its behavior, features, etc., and it either tells me to check the Anthropic website or goes and looks it up on the website itself (useless most of the time).