You could also say that ChatGPT erred similarly to the original writer, who was unclear and misleading about events.
We needn't act like they share some grand insight; the original post is just not well expressed. ChatGPT's output is also frequently not well expressed and not well thought out.
There are many more ways to err than to get something right. ChatGPT getting the OP right where many people here didn't suggests it's more likely that there is a particular style of writing/thinking that isn't obvious to everyone but that ChatGPT can identify and understand, rather than the OP and ChatGPT just accidentally making exactly the same error.