Hacker News

"People who don't use AI will be left behind"

137 points by speckx today at 7:15 PM | 169 comments

Comments

jerhewet today at 8:11 PM

Thank ghod I'm retiring in six months.

I'm very thankful I came of age during the golden age of personal computing. I was able to own my own computer(s) and earn a living writing software on them and for them. Fifty years was a good run, and I consider myself lucky to have participated in it.

IMO we've gone full circle. We started with dumb terminals chained to mainframes and the whimsy of someone else's rules, restrictions, and rent-seeking. Then came my own bought-and-paid-for computer sitting on my desk that did exactly what I told it to do, running software that never changed unless I wanted it to change. And now we're back to dumb terminals (browsers) talking to mainframes (the cloud) that not only harvest and sell my personal information to the highest bidder but constantly change the rules and restrictions on my software, renting it back to me and pushing changes I never asked for and never wanted in the first place.

I will never use spicy autocomplete for anything, and I find it depressing that people are being forced to use it in order to keep their job. I see a very dark future for computing if real skills are all replaced with garbage being vomited out by rules engines that harvested their "guess the next word" results from today's internet.

sdevonoes today at 8:03 PM

Any engineer (any person, actually) can “learn to use AI” in a couple of days. It's not rocket science; there's no chance of being left behind. If you haven't used LLMs at all, a weekend would be enough to get on par with everyone else in the industry.

spamizbad today at 7:55 PM

The statement is absurd because the skill curve for AI tooling is so shallow that you can mess around for a day or two and get "caught up" with the zeitgeist. And what you need to know to get started is far less these days than it was 1.5 years ago, thanks to all the product refinement that has taken place in the space.

The only real risk is that today there's an expectation from employers that you've got some AI experience under your belt that you can articulate. But you can get that experience today.

dudisubekti today at 7:45 PM

Black-and-white thinking like this is not healthy.

You can still do creative thinking while using AI as a powerful tool at your disposal.

Some mathematicians like Terence Tao are comfortable doing this, for example.

furyofantares today at 7:47 PM

Some people who don't use AI will be left behind: those who work on things where LLMs are capable of a substantial share of the tasks, if they simply refuse to leverage the superhuman properties that LLMs have.

I don't think it's hard to catch up if such a person changes their mind, though.

Some people who do use AI will also be left behind: those who use it to replace their skills without developing new ones, and those who use it to do the same or worse work more cheaply. They will be left behind in a competitive world where others work out how to use it to do more or better work with no reduction in effort.

tsukurimashou today at 7:53 PM

I agree with OP that it's the other way around: while some will gradually lose basic skills by relying more and more on AI out of laziness and for productivity's sake, the value of those "people who don't use AI" will go up, because they chose to simply keep "learning the hard way".

abustamam today at 8:48 PM

> Why wouldn't you aim to be better, to learn how to be or do something that AI would never?

Because it doesn't make sense to be better than a tool. A woodworker could use a hand saw and take an hour to cut wood, or he could use a buzz saw and cut it in a few minutes. Is the woodworker any less of a woodworker when he uses a buzz saw vs a hand saw?

Outsourcing thinking to AI is not healthy, and certainly if everyone used AI like this we're doomed.

I still think it's true that those who don't use AI will be left behind, but it's a bit tautological, because the thing they're left behind on is AI itself. A lot of the biggest companies on earth are putting a lot of money into AI, but if you're OK with working for a company that isn't, that's perfectly fine.

Just like blockchain was everywhere ten years ago and now is just kinda _there_. If you got in before the hype you could have made a lot of money. If you didn't, you were left behind. I was left behind and I'm OK with that.

FiReaNG3L today at 7:46 PM

Weird fallacy that if you use a tool you can't use your brain anymore

mgaunard today at 7:57 PM

I find that good people get better with AI, but I'm not sure more average people really do.

I've seen some produce stuff without really understanding it, barely review anything, and pretty much suffer from imposter syndrome.

lexandstuff today at 8:33 PM

I have a feeling that a big risk of using AI all the time is that our own neurological capacity starts to dwindle.

Just as many people leading sedentary lifestyles have to make a deliberate effort to exercise, because inactivity is really bad for our bodies, I think we're going to realise that a similar process is necessary for our minds.

You really want to be spending a bit of time every day operating at your cognitive limits - trying to fully engage your System 2 - if you want to avoid brain atrophy. Coding used to kind of give you this exercise for free, but you can go really far with just your System 1 nowadays - literally get things done while scrolling Reddit.

I'm trying to allocate 30-60 minutes a day to doing something difficult, like writing code by hand for an unfamiliar problem or reading and summarising difficult papers without AI.

BadBadJellyBean today at 8:25 PM

People who can only use AI will be left behind. It is easy to shut off your brain when using AI and then get overwhelmed by the amount of code it produces. Worse is though when people replace programming experience with AI. I have seen a lot of really bad AI code. I can spot and repair it. Others can not. And that is a problem. And I am not talking about purist principles. I am talking about bad unoptimized code that I can spot with just one look.

It is a tool just like syntax highlighting, code completion and refactoring tools before it. You need to know how to use them, where their usefulness ends and you should probably have an idea how to do it yourself without the tool. It is okay if you will be less efficient, but it's bad if you just can't.

Muhammad523 today at 7:44 PM

If that's the case, so be it.

fdsajfkldsfklds today at 8:44 PM

If your job can't easily be done by AI, then you can pick it up and get "up to speed" any time you like.

If it can be done by AI, then you have no hope of competing with the quantity of AI output that anyone can trigger in very little time.

As my job seems pretty secure, I can ignore AI for as long as I like.

hmokiguess today at 8:10 PM

This energy would be better directed anywhere else.

The author chose to take offense at a vaguely presented false dichotomy that serves no purpose other than to divide and crudely label everyone in an area where much nuance applies.

I think this is perhaps a side effect of consuming too much content and feeling overwhelmed with it.

Engaging with stuff like this only amplifies its effects. How about doing anything else instead? Maybe learn something new, like how to channel your anger.

gdulli today at 7:48 PM

Trading practice of primary skills for indirect skills like AI is like a writer deciding they should stop writing directly and get really good at Microsoft Word.

voidmain today at 7:48 PM

When on the road to hell, it's OK to be left behind.

1vuio0pswjnm7 today at 8:51 PM

Perhaps programming will once again become primarily a hobby

dollylambda today at 8:10 PM

My take on it is I would rather code than ask the machine to code. It's frustrating though how many open source projects now are overrun with massive PRs and nobody to code review them. This feels like fallout from too much reliance on AI.

docheinestages today at 8:34 PM

I think of AI as just another abstraction layer, somewhat similar to what high-level programming languages provide compared to writing machine code. Deciding how deeply to understand the abstraction layers is a choice the user has to make; going deep can be optional if they don't really need it.

Nevertheless, the responsibility of whatever a human produces with AI is still on the human.

With that said, knowing how to use AI the way it's right for you can give you a huge advantage. You don't have to though. And there is not a standard way of doing it.

What I recommend to everyone is to give it a try and see if and how it could help you. In the end, you have to make the decision based on your constraints, what you're aiming for, and what you can sacrifice, including but not limited to speed, accuracy, learning, etc.

heliumtera today at 7:52 PM

Just like every single trend that came before, they said you would be left behind:

If you didn't embrace:

- OOP
- Test-driven development
- Behavior-driven development
- Event-driven development
- Pants-in-head driven development
- SOLID
- DRY
- Cloud first
- Virtualization everything
- Microservices
- Serverless
- Everything JS
- Everything TS
- Everything Microsoft

This will never stop.

You either let something stand between you and what you want to accomplish, or you will be left behind.

Think about the most mediocre person you know. Now remember that 50% of the people around you are dumber than that.

lousken today at 8:09 PM

AI was always slowing me down; only recently has it become somewhat useful.

_doctor_love today at 7:56 PM

> People who rely on AI are the ones who will be left behind. They'll forget how to think, how to write, how to do a simple reliable search, how to tell fact from fiction... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.

We could replace "AI" here with many different terms and the argument would remain unchanged.

Sadly I think the author is wrong though I agree with the spirit of what they're saying.

Economic pressures will force workers to use AI as part of their work. Categorically refusing to use AI under any circumstances will guarantee being left behind.

Like others are pointing out, if we define "using AI" as "outsourcing all your thinking to AI" then yeah, those people will perhaps not do well...or will they?

Most people consider quality work a hassle. It takes a long time and it gets in the way of shipping. I've worked with quite a few people who were lousy engineers but boy could they ship. They were universally beloved by the business and tolerated/loathed by the engineering side. But they're the ones who get promoted and get ahead.

Life is hard, but at least on the other hand, it's also unfair.

skwirl today at 8:01 PM

This is a HN comment reply masquerading as a novel submission.

akomtu today at 8:44 PM

If the crowd is running towards a cliff, I'd rather be left behind.

stevenou today at 7:57 PM

I think not using AI is a manifestation of one's inability or unwillingness to LEARN. To your point, if you can't learn, you will fall behind.

simonw today at 7:46 PM

> "People who don't use AI will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.

> [...] they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to learn stuff.

I love learning. My life of self-education is so much richer with LLMs to help me.

There are dozens of other arguments for not engaging with AI. If your reason is "I love learning" I recommend at least dipping your toes in before you declare that AI is a hindrance, not a help, to people who love to learn new things.

Tade0 today at 8:12 PM

Friendly reminder that we're still in the hype phase, even if it's the late stages.

To me the idea that a GPU which costs as much as a car must read its entire VRAM just to output a word sounds incredibly wasteful. I'm exaggerating here, but it is literally reading gigabytes of data and processing it to produce relatively little information.

Some data is truly worth the effort, but the majority won't be able to afford this long term - especially when those who capture the market increase prices.
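The comment's point can be made concrete with rough arithmetic: during autoregressive decoding, generating each token requires streaming essentially all of the model's weights from VRAM, so memory bandwidth caps single-request token throughput. A back-of-the-envelope sketch, with purely illustrative assumed numbers (neither figure comes from the comment):

```python
# Back-of-the-envelope: decoding is memory-bandwidth bound, so each
# generated token streams roughly the full set of weights through the GPU.
# Both figures below are illustrative assumptions, not measurements.
weights_gb = 80          # assumed model footprint resident in VRAM
bandwidth_gb_s = 3000    # assumed HBM bandwidth, GB/s

# Upper bound on decode speed for a single request:
tokens_per_second = bandwidth_gb_s / weights_gb

print(f"~{weights_gb} GB read per generated token")
print(f"~{tokens_per_second:.1f} tokens/s bandwidth-bound ceiling")
```

Under these assumptions, tens of gigabytes are read to emit a token of a few bytes, which is the "gigabytes in, little information out" ratio the comment is gesturing at (batching amortizes this across requests, but the per-token read remains).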

yanis_t today at 8:06 PM

That reminds me of an old Fry and Laurie sketch.

Well of course too much is bad for you, that's what "too much" means you blithering twat. If you had too much water it would be bad for you, wouldn't it? "Too much" precisely means that quantity which is excessive, that's what it means. Could you ever say "too much water is good for you"? I mean if it's too much it's too much. Too much of anything is too much. Obviously. Jesus.

mempko today at 7:46 PM

The author makes a great point about learning. Learning is what increases your intelligence, and if we substitute AI lookup for learning, we will literally get dumber. That said, AI models hold a lot of information and can assist in learning. It's a tool; how will people use it? My fear is that they won't use it to help them learn.

jojomodding today at 7:47 PM

I sympathise with the author and the argument. I know the text is a rant. As such, I can understand that the proposed consequences might not make sense. Yet still, there is a fun game you can play, where you replace AI by "chess engine" and you get a text that would be fitting for a late 90s chess grandmaster but seen as totally anachronistic today:

"Chess players who don't use engines will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.

People who rely on engines are the ones who will be left behind. They'll forget how to think, how to move the pieces, how to solve a simple straightforward mate in 3, how to tell victory from stalemate... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to play chess.

If you think Deep Blue can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that a chess computer would never do?

dsiegel2275 today at 8:05 PM

Both of these can be true.

And I'm sorry to nitpick - but "People who rely on AI are the ones who will be left behind" is NOT the opposite of "People who don't use AI will be left behind".

wesleywt today at 8:02 PM

I disagree. People who use too much AI will not learn anything and will not contribute significantly to new developments.

panny today at 7:41 PM

Yes, I will be left behind. Left behind with my copyrights,

https://news.ycombinator.com/item?id=47932937

Left behind with my money,

https://news.ycombinator.com/item?id=47933355

Left behind with my intact data,

https://news.ycombinator.com/item?id=47911524

Oh, the horror. I am being left behind.

gedy today at 7:51 PM

Maybe it's a generational thing, but I'm old enough to remember when personal and office computers were really hitting the mainstream in the late '70s and '80s. The messaging was a lot friendlier then: how they would save you time, help you, and so on, even though, practically speaking, they eliminated a lot of manual jobs.

This AI/LLM push from leadership is so damn tone-deaf: "you better do this", "AI layoffs", etc. I feel like they are jumping way too hard and fast into "post-employee" thinking and deserve every bit of scorn from laymen.

beastman82 today at 7:44 PM

What's with all the anti-AI sentiment here? Is it a bunch of unemployed devs?

I think it's the greatest development in my lifetime, and I don't really worry about my skills atrophying. I worry about getting things done that are valuable.

I thought people here got excited about technology. Now it's just doomer spam. sigh

r00t- today at 7:59 PM

"People who use a calculator will forget how to think"

feverzsj today at 7:56 PM

It was always obvious that LLMs are bullshit. It's blockchain, but far, far worse. The US has invested too much in it, and the collapse has already begun: half of the planned data center builds across the country have been delayed or canceled.

freejazz today at 7:57 PM

Doing fine so far, thanks!

beepbooptheory today at 7:59 PM

I just don't get even the presumed risk here. How can something be so revolutionary in its capacity to increase productivity but still so esoteric or specialized that there is a risk of being "left behind"? Like all these things people talk about are, at the end of the day, products that want you to use them; they aren't gonna make it hard for someone to onboard in the future. Sure if all coding became ecommerce overnight and I'd never "learned" Salesforce, there might be brief friction there, but I could still just, like, learn Salesforce. It's gonna be a lot easier than learning good software engineering in general.

Why spend your life "learning" something whose whole deal is not needing to learn? Even if you gamble incorrectly, it's not going to be hard to get into!

Like, what, if I don't start practicing now I am not going to be able to... express concepts with natural language as well?

jdw64 today at 8:35 PM

In the 1950s, COBOL was introduced with the idea that programming could be written almost as if one were speaking English. But eventually people realized that *writing COBOL well*, in a style that resembles English conversation, was itself difficult.

Today, we are hearing a similar claim: “If you can describe the program in natural language, programming is basically finished.” But the industry is now discovering that *describing the program well* is the hard part.

This is also why ideas like harness engineering are appearing: methods for controlling the range of outputs, from poor to excellent, that can emerge from minimal input.

And honestly, I do not think the “vibe coding” phenomenon is entirely bad. The essence of programming is automation. Many people were previously limited because they did not know programming languages. Now, through AI, they can express themselves and turn that expression into working apps. Seeing this, I understand how deeply people have wanted to create.

I write industrial software that runs in large factory environments, and because of the nature of my work, it is difficult for me to use AI directly. These environments are usually closed networks, so AI does not really benefit my own production work. Even so, I still defend AI, because it functions as a new kind of voice that allows more people to express themselves.

Of course, capitalism distorts this. Many people use AI to chase money and capital, and as a result, a lot of low-quality apps are being produced. But on the other hand, what is wrong with the motivation of wanting to make something one wants to make?

I have been studying the history of programming, and I like Dijkstra’s famous line:

> Computer science is no more about computers than astronomy is about telescopes.

To me, this means that computing is fundamentally about *automation*.

AI has existed as a research topic almost since the birth of computers. We tend to think of it as recent, but it is a field with a history of more than sixty years. Starting from early work such as the Perceptron, there have always been people claiming that AI was a fraud or an illusion.

But now a new seed has germinated. The amount of complexity that a single human can handle has increased. Historically, the techniques for managing that complexity were things like programming patterns and software architecture. And even people who strongly argued for software architecture also warned that if architecture becomes detached from code, then something has gone wrong.

Memes always damage the essence of ideas. As information circulates, it degrades, and eventually the original meaning disappears.

The Dunning-Kruger effect is a good example. The original paper was not simply saying, “ignorant people show off, while knowledgeable people do not.” It was more about how both less competent and more competent people can have difficulty accurately assessing their own metacognition. But the idea became distorted.

The same thing happens to many famous ideas in programming. Knuth’s statement about premature optimization is also constantly distorted as it circulates.

In that situation, can we really say it is always bad to step away from online communities and learn through AI while cross-checking against books?

When I see people making extreme claims about this, I sometimes find it absurd. Of course, many people may flag or downvote my comment. But this is how I see it.

black_13 today at 7:59 PM

[dead]

chapz today at 7:49 PM

"People who drive cars will forget how to walk".

baddash today at 7:59 PM

this discussion is so stupid. no one who isn't a moron is offloading all work and thought to LLMs. no one who isn't a moron is seriously afraid of their thinking and learning skill "atrophying", whatever tf that means.

it's clear that LLMs are unique in that you actually do have the capability to turn your brain off and blindly trust whatever it does for you. but it should be equally clear that that's a stupid approach. people will still use their minds, and this use gets empowered with proper use of LLMs. it's that simple. ffs, we take the fact that they pass the Turing Test routinely for granted now. let's not forget that this technology is legitimately incredible. it stands to reason that you are seriously handicapping yourself by not trying to use it.

bluegatty today at 8:10 PM

People not using AI will 100% get left behind, as surely as those who refused 'cars' or 'computers'.

There is absolutely no doubt, and it will be as impossible to avoid as using 'plastic' or 'electricity'.

The narrow challenges of 'AI-aided development' or 'AI-aided creative work' are legitimate - that part is real and fair - but it'd be an overstatement to contemplate 'not using it'.

The cyclists who keep their muscles strong the 'hard way' ... will win the delivery war vs. cars!?

The carpenter who hammers every nail and saws every plank by hand 'the hard way' ... will win over the guys using power saws and nail guns!?

No - AI is changing the landscape.

What is 'hard and easy' are changing.

We won't need some skills, we will need others.

It may be harder to maintain some critical skills, but the upside is obvious.

What is fundamentally missing from this treatise is that 'there is always a hard way'.

Personally, I have never been more 'cognitively overloaded' than I am now. AI 'amplifies' the depth of complexity one can reach; it's just at half a layer of abstraction above the code.

Driving a 'race car' at the highest speeds - is as challenging - and perhaps more so - than riding a horse.

The 'instinct to push back' is fair and there are innumerable legit criticisms ...

... but AI is just a new part of the stack and it will be as horizontally applied as 'software or the transistor' - it's not reasonable to think one could or should avoid it entirely.

jesse_dot_id today at 8:08 PM

This is an abacus-to-calculator situation. Some people still use an abacus. The vast majority do not. It's wild living through one of these technological transitions. People just eschew all common sense and critical thinking as it relates to the adoption of new technologies.

If it's good, lots of people will use it commercially. If it's generationally good, everybody will use it commercially because commercial use is about competition. It either gets banned outright, like steroids, or — if it doesn't get banned — those who use it will have a clear advantage and that will lead to a very small number of people who don't use it (in business).

This is not really something that opinions are required for because if you think LLMs are going away, your opinion is historically incorrect. Things that reduce toil and increase output do not go away.