LLMs are not on the road to AGI, but there are plenty of dangers associated with them nonetheless.
Just two days ago Gemini 2.5 Pro recommended tax evasion to me based on non-existent laws and court decisions. The model was so charming and convincing that even after I pointed out all the logical flaws and said this was plainly wrong, I started to doubt myself, because it is so good at pleasing, arguing, and using words.
And most people would have accepted the recommendation, because the model sold it as a less common tactic while sounding very logical.
Agreed, broadly. I never really thought they were, but seeing people work on stuff like this instead of even trying to improve the architecture makes it obvious.