They are clearly getting to useful and meaningful results at a rate significantly better than chance (for example, the fact that ChatGPT can play chess well even though it sometimes tries to make illegal moves shows that there is a lot more happening than just picking moves uniformly at random). Demanding perfection here seems odd given that humans also make bizarre errors in reasoning (though generally at a lower rate, and in a distribution of error types we are more used to dealing with).
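One way to make the "better than chance" claim concrete is to measure the model's legal-move rate mechanically. Here is a minimal sketch, assuming the python-chess library and a hypothetical transcript of moves proposed by the model:

```python
# Minimal sketch: measure how often a model's suggested moves are legal.
# Assumes the python-chess library; model_moves is a hypothetical transcript.
import chess

model_moves = ["e4", "e5", "Nf3", "Nc6", "Bb5", "Qxf7"]  # last move is illegal for Black

board = chess.Board()
legal = 0
for san in model_moves:
    try:
        move = board.parse_san(san)  # raises a ValueError subclass if illegal
    except ValueError:
        continue                     # count as an illegal suggestion; position unchanged
    legal += 1
    board.push(move)                 # play the legal move and continue the game

print(f"legal-move rate: {legal}/{len(model_moves)}")
```

Comparing this rate against a baseline that suggests moves at random makes the gap above chance explicit.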
The fact that a model trained on the internet, where the correct rules of chess are written down, cannot reliably determine what is and is not a legal move seems like a sign that these models are not reasoning about the questions asked of them. They are just producing responses that look like (and often are) correct chess moves.