Well, yes, definitionally they are doing exactly that.
It just turns out that there's quite a bit of knowledge and understanding baked into the relationships of words to one another.
LLMs are heavily conditioned on the preceding words, which makes it very hard for them to backtrack on an earlier branch. That's why the reasoning models all lean on "stop phrases" like "wait", "however", and "hold on": it's literally just text injected to make the autocomplete more likely to revise a previous bad branch.
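To make that concrete, here's a minimal sketch of the injection trick (a toy version of what's sometimes called "budget forcing"), assuming a HuggingFace causal LM. The model name, prompt, and token budgets are placeholders I made up, not anything specific to the models above:

```python
# Sketch: inject a "stop phrase" between generation passes so the
# autocomplete is steered toward revising its earlier output.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; real reasoning models are far larger
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Q: What is 17 * 23?\nA: Let me work through this."
ids = tokenizer(prompt, return_tensors="pt").input_ids

# First pass: let the model autocomplete freely up to a token budget.
out = model.generate(
    ids, max_new_tokens=64, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

# Inject the stop phrase. To the model this is just more preceding
# text, but it makes "re-examine the previous answer" the most
# likely continuation.
injected = tokenizer.decode(out[0]) + "\nWait, let me double-check that."
ids = tokenizer(injected, return_tensors="pt").input_ids

# Second pass: the continuation now tends to revisit the earlier branch.
out = model.generate(
    ids, max_new_tokens=64, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0]))
```

Nothing here changes the model's weights or sampling procedure; the only lever is extra text in the context, which is the whole point.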