I agree with you that an LLM is perfectly capable of explaining its actions.
However, it cannot reliably do so after the fact. If there is a reasoning trace, it can extract a justification from it. But if there isn't, or if the trace makes no sense, the LLM will simply confabulate: it will make up reasons that sound about right.
Which is just like what neuroscientists and psychologists have shown about human beings!