And the big players have built a bunch of workflows which embed many other elements besides just "predictions" into their AI products: things like web search, incorporating feedback from code testing, and feeding outputs back into future iterations. Who is to say whether one or more of these additions has pushed the ensemble across the threshold and into "real actual thinking"?
The near-religious fervor with which people insist that "it's just prediction" makes me want to respond with some religious allusions of my own:
> Who is this that wrappeth up sentences in unskillful words? Gird up thy loins like a man: I will ask thee, and answer thou me. Where wast thou when I laid up the foundations of the earth? tell me if thou hast understanding. Who hath laid the measures thereof, if thou knowest? or who hath stretched the line upon it?
The point is that (as far as I know) we simply don't know the necessary or sufficient conditions for "thinking" in the first place, let alone "human thinking." Eventually we will most likely arrive at a scientific consensus, but as of right now we don't have the terms nailed down well enough to claim the kind of certainty I see from AI-detractors.
I completely agree that we don't know enough, but I'd suggest that this very uncertainty is why the critics and those who want to be cautious are correct.
The harms engendered by underestimating LLM capabilities are largely that people won't use the LLMs.
The harms engendered by overestimating their capabilities can be as severe as psychological delusion, of which we have an increasing number of cases.
Given that we don't actually have a good definition of "thinking", which tack do you consider more responsible?
I take offence at the idea that I'm "religiously downplaying LLMs". I pay top dollar for access to the best models because I want their capabilities to be good, and to get better. Just because I'm documenting my experience doesn't mean I have an anti-AI agenda. I pay because I find LLMs to be useful. Just not in the way suggested by the marketing teams.
I'm downplaying because I have honestly been burned by these tools when I've put trust in their ability to understand anything, provide a novel suggestion, or even fix some basic bugs without causing other issues.
I use all of the things you talk about extremely frequently, and again, there is no "thinking" or consideration on display that suggests these things work like us. If they did, why would we even be having this conversation?