The AI can't really describe its reasoning, though. It can only look at its context history and construct a justification after the fact (which it then presents as reasoning). In my experience, asking the model "why did you do that?" carries substantial hallucination risk.
While you're technically correct, I've found that a simple "give me the strongest arguments for and against this, and cite your sources" works wonders.
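For anyone who wants to wire that into a script, a minimal sketch of the prompt as a template (the function name and wording are just illustrative; feed the string to whatever client you already use):

    # Illustrative helper that builds the pro/con prompt described above.
    def pro_con_prompt(claim: str) -> str:
        return (
            f"Claim under review: {claim}\n\n"
            "Give me the strongest arguments FOR this claim and the "
            "strongest arguments AGAINST it. Cite a source for each "
            "argument, and flag anything you cannot source as speculation."
        )

    # Example: pass the result to your model of choice.
    print(pro_con_prompt("We should migrate the service to gRPC"))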
True, though I've found that forcing an LLM agent (I use an agent skill for this) to document the reasoning behind each "decision" it makes seems to lead to better decision-making. Or at least to more justifiable decisions (even if the justification is bad). The effect is roughly what the sketch below enforces.
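A minimal sketch of the rule the skill imposes, assuming a structure like this (all names here are made up; the real skill is just a prompt instruction, but the mechanical equivalent is):

    import json
    from dataclasses import dataclass, asdict

    # Illustrative only: a record the agent must fill in before acting.
    @dataclass
    class Decision:
        action: str         # what the agent is about to do
        alternatives: list  # options it considered and rejected
        rationale: str      # why it chose this action over the alternatives

    decision_log = []

    def record_decision(decision: Decision) -> None:
        # Refuse empty rationales so the agent can't skip the step.
        if not decision.rationale.strip():
            raise ValueError("decision rejected: rationale is required")
        decision_log.append(decision)
        print(json.dumps(asdict(decision), indent=2))

    record_decision(Decision(
        action="use retry-with-backoff for the flaky API call",
        alternatives=["fail fast", "queue and replay later"],
        rationale="transient 503s in the logs; backoff is the cheapest fix",
    ))

The point isn't the logging itself; it's that refusing an empty rationale forces the model to commit to a reason before it acts, rather than confabulating one afterward.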