The article uses "why" in the title, but does not follow through with an answer.
It sort of hints at one reason:
> The most common errors of misunderstanding are either underestimation (“it’s all hype that will blow over”) or overestimation (“I don’t need programmers anymore”). These patterns are rooted in a lack of a solid understanding of the technology and how it is evolving over time.
So if you don't at least find some middle ground between those two poles, you will make uninformed choices.
But I agree: It is safe to ignore AI for now.
I do sense that some people latch onto AI out of a fundamental anxiety that it might transform society quickly and detrimentally, because that is part of the hype speech ("it will murder us all, it will make us all unemployed, it will turn us into slaves; maybe you can ride the dragon, and maybe you must").
---
> AI has not meaningfully improved productivity
This is contested.
As the article says, we are in one of the most polluted information environments.
People will say "It's absolutely useless" and "It has fundamentally changed my life."
So neither extreme can be taken at face value as representative; they're samples of a murky picture.
> The field changes so fast that you could completely tune out
It's not that fast, in my opinion. Last big steps:
- Transformer architecture (2017)
- Larger models with greater performance (2020-)
- Chain of thought (research in 2022, commercial breakthrough in 2024)
- Agents (since forever, but 2022 for GPT-based agentic frameworks)
Other things have happened; for example, DeepSeek made an architectural breakthrough and challenged the financial model of open versus closed weights. But most of the hype is just people trying to build commercial success on a few cornerstone breakthroughs.
In one to two years, maybe we can add one more major advancement.