And all of those things (what it's good at, what it's bad at, the lessons learned on the current models' current implementation) can change arbitrarily with model changes, nudges, guardrails, etc. Not sure that outsourcing your skillset to the current foundation of sand is smart long-term, even if it's great for a couple of months.
It may be those who have to un-learn the previous iteration's interactions once something stable arrives who end up at a disadvantage?
Why would the AI skeptics and curmudgeons today not continue to dismiss the "something stable" in the future?
The tools have been very stable for the past year or so. The biggest change I can think of is how much MCP servers have fallen off; I think they're generally considered not worth the cost in context tokens now. The scope of what you'd need to unlearn from model changes or whatever else is on par with the normal language/library updates we've been handling for decades. We've plateaued, and it's worth jumping in now if you're still on the fence.