There are a couple of news stories doing the rounds at the moment that suggest AI isn't "there yet":
1. Microsoft's announcement of cutting their copilot products sales targets[0]
2. Moltbook's security issues[1] after being "vibe coded" into life
Which leaves an undeniable conclusion: the vast majority (seriously) distrusts AI much more than we're led to believe, and with good reason.
Thinking (as a SWE) is still very much the most important skill in SWE, and relying on AI has limitations.
For me, AI is a great tool for discovering ideas I hadn't previously thought of, and it's helpful for boilerplate, but it still requires me to understand what's being suggested and even push back with my own ideas.
[0] https://arstechnica.com/ai/2025/12/microsoft-slashes-ai-sale...
[1] https://www.reuters.com/legal/litigation/moltbook-social-med...
It seems pretty hard to say at this point—we have people who say they get good results and have high standards. They don’t owe us any proof of course. But we don’t really have any way to validate that. Everybody thinks their code is good, right?
Microsoft might just be having trouble selling copilot because Claude or whatever is better, right?
Moltbook is insecure, but the first couple of iterations of any non-trivial web service end up having some crazy security hole. Also, Moltbook seems to be some sort of… intentional statement of recklessness.
I think we’ll only know in retrospect, if there’s a great die-off of the companies that don’t adopt these tools.
"Thinking (as a SWE) is still very much the most important skill in SWE, and relying on AI has limitations."
I'd go further and say that thinking is humanity's fur and claws and teeth. It's our strong muscles. It's the only thing that has kept us alive in a natural world that would otherwise have driven us extinct long, long ago.
But now we're building machines with the very purpose of thinking, or at least of producing the results of thinking. And we use them. Boy, do we use them. We use them to think of birthday presents (it's the thought that counts) and greeting card messages. We use them for education coursework (against the rules, but still). We use them, as programmers, to come up with solutions and to find bugs.
If AI (of any stripe, LLM or some later invention) represents an existential threat, it is not because it will rise up and destroy us. Its threat lies solely in the fact that it is in our nature to take the path of least resistance. AI is the ultimate such path, and it does weaken our minds.
My challenge to anyone who thinks it's harmless: use it for a while. Figure out what it's good at and lean on it. Then, after some months or years, drop it and try working on your own like in the before times. I'd bet you'll discover that a significant amount of fluency has been lost.