It indeed is. We now have models with fewer than 100M parameters producing pretty coherent text that is somewhat relevant to the given input. That is genuinely impressive.
I believe the answer lies in how "quickly" (and how) we are able to learn, and then how well we generalize those learnings. As of now, these models need millions of examples (at least) to learn, and are still not capable of generalizing what they learn to other domains. Human brains need only a few, and then generalize them pretty well.