> This is just semantics.
Precisely my point:
semantics: the branch of linguistics and logic concerned with meaning.
> You can say they don't understand, but I'm sitting here with Nano Banana Pro creating infographics, and it's doing as good a job as my human designer does with the same kinds of instructions. Does it matter if that's understanding or not?

Understanding, when used in its unqualified form, implies the kind people possess. As such, it is a metaphysical property unique to people and defined wholly in terms of them.
Excel "understands" well-formed spreadsheets by performing specified calculations. But who defines those spreadsheets? And who determines the result to be "right?"
Nano Banana Pro "understands" instructions to generate images. But who defines those instructions? And who determines the result to be "right?"
"They" do not understand.
You do.
You can, of course, define understanding as a metaphysical property that only people have. If you then try to use that definition to determine whether a machine understands, you'll have a clear answer for yourself. The whole operation, however, does not lead to much understanding of anything.
The visual programming language for programming human and object behavior in The Sims is called "SimAntics".
> it is a metaphysical property unique to people
So basically your thesis is also your assumption.
"This is just semantics" is a set phrase in English meaning that the issue being discussed is merely about the definitions of words, not about the substance (the object level).
And generally the point is that it does not matter whether we call what they do "understanding" or not. It will have the same kind of consequences in the end, economic and otherwise.
This is basically the number one hangup that people have about AI systems, all the way back since Turing's time.
The consequences will come from AI's ability to produce certain types of artifacts and perform certain types of transformations of bits. That's all we need for all the sci-fi stuff to happen. Turing realized this very quickly, and his famous Turing test is exactly about making this point. It's not an engineering kind of test. It's a thought experiment trying to prove that it does not matter whether it's just "simulated understanding". A simulated cake is useless; I can't eat it. But simulated understanding can have real-world effects of the exact same sort as real understanding.