Yes! But it's still valuable. How am I understanding your argument at all?
I think my friend Jonathan Rees put it best:
"Language is a continuous reverse engineering effort, where both sides are trying to figure out what the other side means."
More on that: https://dustycloud.org/blog/identity-is-a-katamari/

This reverse engineering effort is important between you and me, in this exchange right here. It's a battle that can never be won, but fighting it is how we make progress in most things.
I mean, Quine is where holism (in the semantic sense) comes from. I don't think we're on different pages. Maybe I should've been more specific about what I was getting at.
This has very specific implications in symbolic AI specifically, where historically the goal was mapping out the 'correct' representation of the space and then running formal analysis over it. That's why it's not a black box: you can trace out every step of the reasoning. The issue is that symbolic AI just doesn't work, at least not compared to all the deep learning wins we have.
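To make "you can trace out every step" concrete, here's a minimal sketch of the kind of system I mean: a toy forward-chaining rule engine (the facts and rules are invented for illustration). Every conclusion it reaches comes with an explicit derivation you can read off, which is exactly the transparency symbolic AI promises.

```python
# Toy forward-chaining inference in the classic symbolic-AI style.
# Facts are strings; each rule maps a set of premises to a conclusion.
# The facts and rules here are made up purely for illustration.

facts = {"socrates is a man"}
rules = [
    ({"socrates is a man"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts appear, printing each derivation step."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                print(f"derived {conclusion!r} from {sorted(premises)}")
                facts.add(conclusion)
                changed = True
    return facts

forward_chain(facts, rules)
# derived 'socrates is mortal' from ['socrates is a man']
# derived 'socrates will die' from ['socrates is mortal']
```

The trace is the whole appeal: nothing happens that you can't point at. The catch, as above, is that hand-building the "correct" representation never scaled.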
I think the success of transformers proves that symbolic AI isn't the way. At the very least, the complex interactions that arise from in-context learning in no way imply some fixed universal meaning for words, which is a big problem for symbolic AI.
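You can see the non-fixed-meaning point for yourself. Here's a sketch using BERT via the HuggingFace transformers library (any contextual model would do): the same word gets a different vector depending on everything around it, where a symbolic system would assign it one fixed symbol.

```python
# A sketch: the "same" word gets different vectors in different contexts.
# Uses BERT via HuggingFace transformers; any contextual model would do.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embedding_of(word, sentence):
    """Return the contextual vector for `word`'s first occurrence in `sentence`."""
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    return hidden[tokens.index(word)]

a = embedding_of("bank", "i deposited money at the bank.")
b = embedding_of("bank", "we sat on the bank of the river.")
c = embedding_of("bank", "the bank approved my loan.")

# If "bank" had one fixed universal meaning, these would all be identical.
print(torch.cosine_similarity(a, b, dim=0).item())  # noticeably below 1.0
print(torch.cosine_similarity(a, c, dim=0).item())  # closer: both financial senses
```

The exact numbers don't matter; the point is that the representation of "bank" is a function of its whole context, which is the holism point all over again.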