In the case of propositional logic, ChatGPT reflects the current epistemological crisis. When I asked it for help on a question, it could not properly apply the Law of the Excluded Middle [1].
1. https://chatgpt.com/share/696b7f8a-9760-8006-a1b5-89ffd7c5d2...
I would love for this to turn out to be some internal constraint where the LLM cannot ‘reason’ about LEM and will always fall back on an understanding rooted in constructive logic. However, I am more inclined to accept that LLMs aren’t actually ‘reasoning’ about anything, and that this is an inherent flaw in how we talk about the algorithms as though they were actually thinking ‘minds’ instead of very fancy syntax-completion machines.
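For concreteness, the classical status of LEM comes down to a two-row truth table: `p ∨ ¬p` is true under every valuation. A minimal sketch (function name is mine; note that a constructivist would reject this truth-table check as a *proof*, since constructive logic demands a witness for the disjunction rather than a case survey):

```python
def excluded_middle(p: bool) -> bool:
    """Evaluate the LEM instance p ∨ ¬p under a given valuation of p."""
    return p or (not p)

# Classically, LEM is a tautology: true for both possible values of p.
assert all(excluded_middle(p) for p in (True, False))
```

That the check is trivial is exactly why failing to apply LEM in a classical context is striking.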