Just 2 days ago Gemini 2.5 Pro recommended tax evasion to me based on non-existent laws and court decisions. The model was so charming and convincing that even after I pointed out all the logical flaws and said it was plain wrong, I started to doubt myself, because it is so good at pleasing, arguing, and using words.
And most people would have accepted the recommendation, because the model sold it as a less common tactic while sounding very logical.
> even after I pointed out all the logical flaws and said it was plain wrong
Once you've started arguing with an LLM, you're already barking up the wrong tree. Maybe you're right, maybe not, but there's no point in arguing it out with an LLM.
Or you could understand the tool you are using and be skeptical of any of its output.
So many people just want to believe, instead of accepting the reality that LLMs are quite unreliable.
Personally, it's usually fairly obvious to me when LLMs are bullshitting, probably because I have lots of experience detecting it in humans.