I mean, humans communicate the same way. We don't interpret words literally, and neither does the LLM. We think about what the other person is trying to communicate.
For example, if you ask someone "can you tell me what time it is?", the literal answer is either "yes" or "no". If you ask an LLM that question, it will tell you the time, because it understands that the user wants to know the time.
Very fair! Wild to think about, though. It's somehow both more human and less.
I would say this behavior no longer passes the Turing test for me: if I asked a human a question about code, I wouldn't expect them to return the code changes; I would expect the yes/no answer.