Ok, here is the risk of being left behind - if we get a moderately fast take-off, the 1-2 years required to upskill in AI might mean you find yourself unemployable when your role gets axed.
I don't think folks are taking seriously the possible worlds in the 25% tail of the probability distribution.
You do not get to pick up this stuff “on a timescale of my choosing”, in the worlds where the capability exponential keeps going for another 5-10 years.
I’m sure the author simply doesn’t buy that premise, but IMO it’s poor epistemics to refuse to even engage with the very obvious open question of why this time might be different.
Eh, I'm not super worried. After all, every six months or so, the latest model changes everything and the former model was complete garbage. It's not just a new model, it's a new paradigm shifting the landscape of agentic development.
I don't think there's such a thing as a "fast take-off" where human experience with 2026-era LLM coding remains economically relevant.
But they have engaged with it, and made an assessment about its current utility.
We have no reason to believe that they won't keep an eye on this.
Little to nothing about AI tools so far suggests that one can't just as easily pick up the skills later. Tools that will get "exponentially better" will almost certainly be unrecognizable to someone desperately engaging with them now, for no other reason than the sake of "having 1-2 years of experience".
Someone might reasonably choose to bet on the upside. That doesn't imply that everyone else ought to fearfully hedge.