
matthewkayin · yesterday at 6:44 PM

I think it is fair to say that AIs do not yet "understand" what they say or what we ask them.

When I ask it to use a specific MCP to complete a task and it proceeds without using that MCP, that indicates a clear lack of understanding.

You might say the fault was mine, that I didn't set up or initialize the MCP tool properly. But wouldn't an AI that understood recognize that it didn't have access to the MCP and tell me it couldn't satisfy my request, rather than blindly carrying on without it?
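The behavior I'd expect is easy to state as a pre-flight check: before starting the task, verify that the requested tool is actually available, and fail loudly if it isn't. Here's a minimal sketch in Python, assuming a hypothetical tool registry (require_tool and MissingToolError are illustrative names I made up, not part of any real MCP SDK):

    # Hypothetical pre-flight check: refuse to proceed if a requested
    # MCP tool is not actually available, instead of silently carrying on.
    class MissingToolError(RuntimeError):
        """Raised when a requested tool is not registered with the agent."""

    def require_tool(requested: str, available_tools: set[str]) -> None:
        # Fail loudly up front rather than blindly carrying on without it.
        if requested not in available_tools:
            raise MissingToolError(
                f"Tool '{requested}' was requested but is not available. "
                f"Available tools: {sorted(available_tools) or 'none'}"
            )

    available = {"filesystem", "web_search"}  # whatever actually initialized
    require_tool("filesystem", available)     # ok, proceed with the task
    try:
        require_tool("github", available)     # never configured
    except MissingToolError as err:
        print(err)                            # the honest failure mode

An agent that ran something like this before acting would at least be able to say "I can't satisfy your request" instead of improvising.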

LLMs consistently prove that they lack the ability to evaluate statements for truth. They also lack awareness of what they do not know, because they are not trying to understand; their job is to generate (to hallucinate).

It astonishes me that people can be so blind to this weakness of the tool. And when we raise concerns, people always say

"How can you define what 'thinking' is?" "How can you define 'understanding'?"

These philosophical questions miss the point. When we say it doesn't "understand", we mean that it doesn't do what we ask. It isn't reliable. It isn't as useful to us as perhaps it has been to you.