Oh dear, I thought you were merely being sarcastic in your first comment. But you seem to have been fully converted to the LLM religion, and actually believe they "think" or "know" anything?
People have applied "think" to the actions of software for decades. Of course LLMs don't "think" in the human sense, but "what the output of the model indicates in an approximate way about its current internal state" is a bit long-winded...