I think 'average' is creating a bad intuition here. To accurately predict the next word in a human-generated text, you need a model of the big picture of what is being said. You need a model of what is real and what is not. You need a model of what it's like to be a human. The number of possible texts is so enormous that you can't say "there are lots of texts that start with the same 50 tokens; I'll average the 51st token across them to work out what to generate" — a quick calculation below puts a number on this. The subspace of human-generated texts within the space of all possible texts is extremely sparse, and 'averaging' isn't a good way to think about the process.
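To put a rough number on that sparsity, here's a back-of-the-envelope sketch in Python. The vocabulary size and corpus size are figures I'm assuming for illustration (a GPT-style BPE vocabulary and a modern-scale training corpus), not anything claimed above:

```python
# Back-of-the-envelope: how sparse are 50-token prefixes?
# Assumed figures: a ~50,000-token vocabulary and a ~10-trillion-token corpus.

vocab_size = 50_000      # typical BPE vocabulary size (assumed)
prefix_len = 50          # the 50-token prefix from the comment
corpus_tokens = 10**13   # rough modern training-corpus scale (assumed)

# Number of distinct 50-token sequences the vocabulary can express.
possible_prefixes = vocab_size ** prefix_len
print(f"possible 50-token prefixes: ~10^{len(str(possible_prefixes)) - 1}")

# Even if every position in the corpus started a distinct prefix,
# this is the fraction of prefix space the corpus could ever touch:
coverage = corpus_tokens / possible_prefixes
print(f"fraction of prefix space covered: {coverage:.3e}")
```

This prints on the order of 10^234 possible prefixes and a coverage fraction around 10^-222, so for almost any 50-token prefix there are zero other corpus texts sharing it — there's simply nothing to average over.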