>Why are you asking the token predictor about the tokens it predicted?
In fairness, humans are quite bad at this as well. You can do years of therapy and discover that while you thought (and told people) you did X because of Y, you actually did X because of Z.
Most people don't actually understand why they do the things they do. I'm not entirely convinced that therapy isn't just something akin to filling your running context window in an attempt to understand why your neurons are weighted the way they are.
I find the use of 'most' and the carte-blanche "things they do" to be overreaching. "Some things" and "some people", perhaps.
Yet that has no relevance to an LLM, which is not a human and does not think. You're basically calling a record that plays birdsong a bird, because one mimics the other.
Why are you comparing a machine to humans? They clearly operate differently on a fundamental level.
Would therapy work on an LLM?