The other thing is that it will make an earnest attempt to answer the question. On places like SO, by contrast, many questions get incorrectly marked as duplicates, with the “answer” link pointing to a post that seems similar at first glance but is different enough to not actually be the same question, which is supremely unhelpful.
You can also ask it to explain the subject like you’re 5, a request that might not feel appropriate with a human because it can come across as burdensome.
All of this is heavily caveated by how dramatically wrong LLMs can be, though. It’s all rendered moot if the person asking is too trusting or isn’t aware of the tendency of LLMs to hallucinate, pull from bad training data, or match the wrong patterns.
Yep, this is exactly what I mean!
Personally, I find that even when it's wrong, it's often useful, in that I come away with hints toward how to follow up.
I do have concerns that people who haven't lived a couple decades of adult life prior to the existence of these tools will be a lot more credulous, to their detriment.