As a pro, my argument is "it's good enough now to make me incredibly productive, and it's only going to keep getting better because of advancements in compute".
I'd rather get really good at leveraging AI now than bury my head in the sand hoping this will go away.
I happen to agree with the saying that AI isn't going to replace people, but people using AI will replace people who don't. So by the time you come back, you may already have been replaced.
Why would anything you learn today be relevant tomorrow if AI keeps advancing? You would need less and less of all your tooling, markdown files and other rituals and just let the AI figure it out altogether.
> I'd rather get really good at leveraging AI now than bury my head in the sand hoping this will go away.
I don't think those are the only two options, though.
Further, "Getting really good at leveraging AI" is very different to "Getting really good at prompting LLMs".
One is a skill that might not even result in the AI providing any code. The other is a "skill" in much the same way as winning hotdog eating contests is a "skill".
In the latter, even the least technical user can replace you once they get halfway decent at min-maxing their agent's input (markdown files today, though I expect we'll soon move away from those to a cohesive, structured UI).
In the former, you had better find some really difficult problems that pay when you solve them.
Either way, I predict a lot of pain and anguish in the near future, for a lot of people. Especially those who believe that prompting is an actual "skill".
It sure is possible that one person using AI effectively may replace 10 people like me. It is just as likely that I may replace 10 people who only use AI.