LLM AI has led to job losses, whether directly or indirectly (by moving investment into AI instead of people). Generative imagery has led, and will keep leading, to rigged election outcomes, people getting blackmailed and scammed, incidents like this train stoppage, etc. The list is endless. That's not even getting into how the AI bubble bursting will make most of us poorer when the huge stock market crash comes, but hey, whatever...
What good has it brought us (as opposed to the billionaire owners of AI)? It made us 'more effective': instead of googling something and actually following a link to read the result in detail, we can now skip all of that and just believe whatever the LLM outputs (hallucinations be damned).
So I guess that's an upside.
(before the AI god bros come: I am talking purely about LLMs and generative imagery and videos, not ML or AI used for research et al)
I believe you are right, and in due time this will lead to people dying needlessly, as demonstrated in this article in The Guardian: https://www.theguardian.com/society/2025/dec/05/ai-deepfakes... In that case it was a scam for a "harmless" medication, but serious bad actors could easily up the game. What stuck with me most from that article is that we currently have no means of enforcing that such things get taken down, which prolongs the potential for real damage. And all of that for a little more "efficiency".
> LLM AI has led to job losses
I can confirm this. The trend now in enterprise CMS deployments is to push for AI-based translations and image asset generation, only looping humans back in for final touches, thus reducing the respective team sizes.
Another area is marketing and SEO improvements, where the deal is to get AI-based suggestions for those improvements instead of hiring a domain expert.
Any commercial CMS will have these AI capabilities front and centre on its website as a reason to choose it.