I feel bad for people who reject LLMs on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.
LLMs are very easy to pick up; the whole point of them, for their makers, is to commoditize skill and knowledge. You can't be left behind in learning to use them, and AI providers have no economic incentive to make them into anything other than appliances.
The people more at risk of being left behind are the ones who don't learn when not to trust their output.
I don't necessarily agree with the LLM moral objection, but this point of view is unconvincing. Change the topic to, say, slavery, and the "I feel bad for those who reject slavery on moral grounds, they'll fall behind..." argument becomes fairly absurd.
You're essentially saying the very concept of a moral objection is to be pitied. Maybe you believe that's true but I'd say that reflects poorly on our values today.
I feel bad for people who reject Windows 11 on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.
https://shkspr.mobi/blog/2026/03/im-ok-being-left-behind-tha...
This is total FUD.
The goal that AI-megacorp CEOs have been pushing lately is "super intelligence." If that's where you truly think we're rapidly heading, what's the risk for those of us not hyper-invested in AI? This "super intelligence" will (by definition) be able to understand us both equally well, so all these "prompting skills" people claim set them apart from people who don't use AI much will be utterly pointless.
This is just the typical FOMO nonsense pushed by AI fans.
It's the exact same as seen with many past hypes, and every time the result is a lot more nuanced than those fans claim. It wasn't that long ago that people were claiming MongoDB was going to revolutionize the world and make relational databases obsolete, or how cryptocurrencies were going to change the world, or NFTs, and the list goes on.
I feel bad for people who reject lying/stealing/cheating/corruption/backstabbing on moral grounds. They'll likely fall behind, while also having to live in a world increasingly built around something they see as immoral.
> They'll likely fall behind
So far this doesn't seem to be the case, despite it being repeated endlessly over the last few years.
>while also having to live in a world increasingly built around something they see as immoral
Should people just decide that things they think are immoral are actually fine and get over it? That doesn't really seem coherent...
Are the people who haven't been born yet, or haven't even entered the workforce, also falling behind?
I feel bad for people who accept AI. They're going to wind up just as replaced by it as I will, but it will somehow come as a surprise to them, despite the writing having been on the wall for ages.
I imagine there will be a lot of regret in the future among the early adopters who eventually get pushed out by the AI they love so much.
On the falling behind:
I strongly doubt that is going to be the case. Picking up these tools is not rocket science, even if you want to use them fairly effectively. In addition, there is so much churn in AI tooling these days that an early investment might not be worth much in the long run.
On the other hand, hands-on experience in programming and architecture is currently a must-have for using the tools effectively. Continuing without AI in the short term might just buy an inexperienced engineer some time to learn, and postpone skill atrophy for an experienced one.
Of course, nobody can know what the future looks like, but I doubt a "wait and see" approach is that dangerous to anyone's career.