There are many more ways to err than to get something right, so if OP and ChatGPT had each erred independently, the odds of their errors coinciding exactly would be small. ChatGPT getting OP right where many people here didn't therefore suggests it's more likely that there is a particular style of writing/thinking that isn't obvious to everyone but that ChatGPT can identify and understand, rather than OP and ChatGPT both accidentally making exactly the same error.
Why would that be more likely? It seems like OP and ChatGPT (which is, in effect, the writing of many people of different skill levels) might easily make the same failure to communicate. Many of ChatGPT's failures are failures to communicate or to convey structured thinking.