I've seen plenty of blunders, but in general it's better than their previous models.
Well, it depends a bit on what you mean by blunders. But, e.g., I've seen it confidently assert mathematically wrong statements with nonsense proofs instead of admitting that it doesn't know.
In a very real sense, it doesn't even know that it doesn't know.