
timacles · last Saturday at 8:14 PM

> A sufficiently good simulation of understanding is functionally equivalent to understanding.

This is just a thing people say that has no substantive meaning.

  - What does "sufficiently good" mean?
  - What is "functionally equivalent"?
  - And what even is "understanding"?
All just vague hand-waving.

We're not philosophizing here; we're talking about practical results, and clearly, in the current context, it does not deliver in that area.

> At that point, the question of whether the model really does understand is pointless.

You're right, it is pointless, because you are suggesting something that doesn't exist. The current models cannot understand.


Replies

og_kalu · last Sunday at 5:25 AM

>We're not philosophizing here; we're talking about practical results, and clearly, in the current context, it does not deliver in that area.

Except it clearly does, in a lot of areas. You can't take a 'practical results trump all' stance and come out of it saying LLMs understand nothing. They understand a lot of things just fine.

DiogenesKynikos · last Sunday at 4:44 AM

The current models obviously understand a lot. They would easily understand your comment, for example, and give an intelligent answer in response. The whole "the current models cannot understand" mantra is more religious than anything.