Anthropomorphizing LLMs begins at the design stage, when models are given human names and trained to emit first-person sentences. If AI companies and developers stopped anthropomorphizing their models, users would be far less likely to be misled in the first place.