Whenever somebody calls LLMs "non-deterministic", assume they mean "chaotic" in the informal sense: a system where small changes to the input can cause large changes to the output, and the only way to find out whether that will happen is to run the full computation.
For many applications, this is just as troublesome as true non-determinism.
I don't think LLMs are all that chaotic: you can replace words in an input and get a similar answer, and they are very good at dealing with typos.
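You can probe this yourself. Here is a minimal sketch, assuming a local Hugging Face causal LM (the model name and prompts are just placeholders): with greedy decoding the model is deterministic, so any difference between the two completions comes purely from the input change, which is the "chaotic" part.

```python
# Minimal perturbation probe: compare greedy completions for a prompt
# and a slightly perturbed version of it. Model and prompts are example
# placeholders, not a recommendation.
from difflib import SequenceMatcher

from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # swap in whatever you run locally
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def complete(prompt: str, max_new_tokens: int = 40) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    # do_sample=False -> greedy decoding, so each prompt maps to one output
    out = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=False)
    # Strip the prompt tokens; decoder-only models echo the input
    return tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)

a = complete("The capital of France is")
b = complete("The captial of France is")  # same prompt, one typo

# A ratio near 1.0 means the typo barely mattered; a low ratio is the
# large-output-change-from-small-input-change behavior described above.
print(SequenceMatcher(None, a, b).ratio())
```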
They are definitely not interpretable, though: I was reading some mechanistic interpretability researchers saying they've given up on trying to build a bottom-up model of how these systems work.