"AI can hallucinate on any data you feed it, and it's been proven that it doesn't summarize, but rather abridges and abbreviates data."
Have you ever met a human? I think one of the biggest reasons people become bearish on AI is that their standard for whether it's good or useful is absolute perfection, rather than simply being better than human effort.
Right now AI is inferior, not superior, to human effort. That's precisely why people are bearish on it.
I'm not saying it needs to be perfect, but the guy in this article is putting a lot of blind faith in an algorithm that's been shown time and time again to make things up.
The reason I have become "bearish" on AI is that I see people repeatedly falling into the trap of believing LLMs are intelligent and actively thinking, rather than just very finely tuned random noise. We should pay more attention to the A in AI.