One disanalogy between human language use and LLMs is that language evolved to fit the human brain, which was already structured by millions of years of primate social life. This is more or less the reverse of the situation for a neural network trained on a large text corpus.
Yes, but animal and human brains (the cortex in particular) appear to have evolved to be prediction machines: originally mostly predicting evolving sensory inputs (how external objects behave), and predicting the real-world consequences of the animal's own actions.
Language seems to take advantage of this pre-existing predictive architecture, and would likewise have been learnt by predicting sensory inputs (heard language), which, as we have seen, is enough to induce the ability to generate it too.
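A toy sketch of that last point (purely illustrative, not a claim about brains): a model trained on nothing but next-token prediction, here a simple bigram count model over a made-up corpus, can immediately be used to generate text by sampling from its own predictions.

```python
import random
from collections import defaultdict, Counter

# Made-up training corpus for illustration.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": learn next-token statistics from observed sequences.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict(prev):
    """Predictive distribution over the next token given the previous one."""
    counts = next_counts[prev]
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def generate(start, length, seed=0):
    """Generation falls out of prediction: repeatedly sample the predicted next token."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        dist = predict(out[-1])
        if not dist:  # token never seen in a "previous" position
            break
        tokens, probs = zip(*dist.items())
        out.append(rng.choices(tokens, weights=probs)[0])
    return " ".join(out)

print(generate("the", 6))
```

Nothing in the training step mentions generation; the generative ability is a by-product of having a predictive distribution to sample from, which is the sense in which prediction "is enough".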