Depending on how you convert synapse count to parameters, the brain has something like a thousand trillion parameters. In that light, it's pretty darn surprising that an artificial neural network can produce anything like coherent text.
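To put rough numbers on that comparison: a minimal back-of-envelope sketch, where the synapse count and the model size are ballpark figures commonly cited in these discussions, not measurements.

```python
# Back-of-envelope scale comparison (assumed ballpark figures,
# not measurements): one synapse treated as roughly one parameter.
human_synapses = 1e15   # ~a thousand trillion synapses
small_lm_params = 1e8   # a ~100M-parameter language model, for scale

ratio = human_synapses / small_lm_params
print(f"The brain has ~{ratio:.0e}x more 'parameters'")  # ~1e+07x
```

Even granting that synapse-to-parameter conversion is crude, the gap is around seven orders of magnitude, which is what makes the coherence of small models surprising.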
It indeed is. We now have models with fewer than 100M parameters producing text that is pretty coherent and somewhat relevant to the given input. That is impressive.
I believe the answer lies in how quickly (and how) we are able to learn, and then generalize those learnings. As of now, these models need at least millions of examples to learn, and still can't transfer what they learn to other domains. Human brains need only a handful of examples, and then generalize them remarkably well.
Maybe the brain is more akin to a network of networks, and the actual reasoning part isn't all that large? There are lots of areas dedicated exclusively to processing input and controlling subsystems. I can imagine a future where large artificial networks work in a similar way, with multiple smaller ones connected to each other.
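The "network of networks" idea above can be sketched as small specialist modules feeding a (possibly small) reasoning core. Everything here is a hypothetical illustration: the module names, the router, and the string "features" stand in for real subnetworks.

```python
# Toy sketch of a network of networks: specialist input modules
# behind a router, with one shared reasoning core. All names and
# logic are hypothetical placeholders for real subnetworks.

def vision_module(x):
    # Stand-in for an input-processing subsystem (e.g. visual cortex).
    return f"features({x})"

def language_module(x):
    # Stand-in for a language-processing subsystem.
    return f"tokens({x})"

def reasoning_core(x):
    # The shared reasoning part, which need not be large.
    return f"decision({x})"

def route(kind, x):
    # Dispatch raw input to the matching specialist, then reason
    # over its output rather than over the raw input.
    experts = {"image": vision_module, "text": language_module}
    return reasoning_core(experts[kind](x))

print(route("text", "hello"))  # → decision(tokens(hello))
```

The design point is that the reasoning core only ever sees preprocessed representations, so it can stay much smaller than a monolithic end-to-end network.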