People have applied "think" to the actions of software for decades. Of course, LLMs don't "think" in the human sense, but "what the output of the model indicates, in an approximate way, about its current internal state" is a bit long-winded...