The irony is that if LLMs live up to their potential, the value of software development as a skill is going to plummet, at least as a service you provide to others. I say it's ironic because the people most interested in using LLMs for software development are, obviously, software developers, and most are not working independently. It's as if we were all proactively training our own replacements.
I was highly skeptical of this happening not that long ago, but I have to say that it seems increasingly likely. LLMs are still quite mediocre at esoteric stuff, but most software development work isn't esoteric. There's a viable argument that software development largely isn't about writing code, but the ability to write code is what justifies software developer salaries, because there's a large barrier to entry there that most people just can't overcome. The 80/20 law seems to apply to everything, and certainly here - 80% of your salary is justified by 20% of what you spend your time doing.
It's impossible to predict what this will do to the overall market, because while this sounds highly negative for software developers, we're also talking about a future where going independent will be easier than ever before. One of the main barriers to fully independent development is the gaps in your skillset. Those gaps may not be especially difficult, but they're just outside your domain, and LLMs do a terrific job of passably filling them in.
It'd be interesting if the entire domain of internet and software tech plummets in overall value due to excessive and trivialized competition. That'd probably be a highly disruptive but ultimately positive direction for society.
I started working in AI/ML about ten years ago. Reasonably early. Today, professionally and financially I'm doing about as well as a typical programmer. I find the field interesting so I have no regrets but I tend to agree with OP.
> weaponisation of FOMO
This is an excellent characterization of the kind of marketing tactic I see all over social media right now, and one that I find absolutely disgusting.
The keyword here is fear. Despite the faux-positive veneer, the messaging around certain technologies (especially GenAI) is clearly designed to induce anxiety and fear rather than inspire genuine optimism or pique curiosity. This is significant, because fear is one of the most powerful tools for shutting down rational thinking.
The subliminal (although not very subtle) message there is something very primitive. "If you don't join our group, you will soon starve to death." This is radically different from how most transformative technologies were promoted in the past.
Adoption of a new technology has always sorted itself into buckets: early adopters, mainstream adopters, and late adopters. I think this post is just demonstrating the mindset of the last group.
I prefer to move slower. I've accepted that I'm not going to create some unicorn startup (that was never an aspiration). As an employee at a company, my goal is to focus my time learning the things that are relevant to my job and that will be useful for 10 years, not 10 weeks.
Chasing every new tech will lead to burnout and disillusionment at some point.
AI probably isn't going away the way NFTs largely did, and I use it to some degree. However, I don't see a lot of value in being on the bleeding edge of AI, because the shape it takes, and the skills that will matter for the next 10 years, are still forming. Trying to keep up now means constantly adapting how I work, where more time is spent keeping up with the changes in AI than actually doing something useful with it.
After the bubble pops, I think we'll start to see a much clearer picture of what the landscape of AI will look like long-term: who the winners and losers are, and which tools rise to the top after the hype is gone. I'll go deeper at that time.
Right now, the only thing I'm allowed to use at work is Copilot, so I just use that and don't bother messing around with much more in my free time.
I had this with Rust. I always saw the huge hype, especially some years ago, and it was hugely off-putting. Ridiculous projects like rewriting SQLite (famous for its full branch-test coverage) in Rust, or rewriting the GNU coreutils, all while spamming "blazing fast" and "written in Rust (crab emoji)", were very, very hostile to a C++ developer.
When I eventually got around to using Rust, I was hooked, and now I don't use C++ anymore if I can choose Rust instead. The hype was not completely unjustified, but it was also misplaced, and to this day I disagree with most of those hype projects.
It was no issue to silently pick up Rust, write some code that solves problems, and enjoy it as a very very good language. I don't feel a need to personally contact C or C++ project maintainers and curse at them for not using Rust.
I do the same with AI. I'm not going around screaming at people who dare to write code by hand, going "Claude will replace you", or "I could vibe code this for 10 bucks". I silently write my code, I use AI where I find it brings value, and that's it.
Recognize these tools for what they are: just tools. They have use-cases, tradeoffs, and a massive community of incompetent idiots who like them ONLY because they don't know better, not because they understand the actual value. And then there are the normal, everyday engineers, who use tools because, and ONLY because, they solve a problem.
My advice: Don't be an idiot. It's not the solution for all problems. It can be good without being the solution to a problem. It can be useful without replacing skill. It can add value without replacing you. You don't have to pick a side.
WordStar for DOS was great! A lot better than my handwriting. But still, I get the point. :-)
I agree with the conclusion but not with the premise. The conclusion is, "I don't have to be an early adopter," but the premise seems to be "there is zero utility in getting in on anything early."
It’s a personal choice, and both early and late(er) can be valid rational choices if it’s you who is making the choice and not just following a crowd (or even a single person).
As long as it's not coupled with calls to tax and regulate those who do get in early and reap benefits from doing so, this is good and healthy.
(I'm not the earliest adopter of crypto and AI by any means. I only rode up crypto a couple of times for 2X and 3X kinda gains on my investment, and I only started using Claude last year.)
FOMO is making me feel like I should mess around with openclaw but I can’t see any use cases that I can’t accomplish with other tools. What should I do based on this article?
Sure, you can pick up any tool whenever you want, but from your employer's perspective AI is the best force multiplier since slavery. Every force multiplier in between still required humans, who had leverage. The question is whether your boss will need you at all by then.
Why launch Voyager-1 if, in X years, no matter how far it flies, we’ll catch up to it and overtake it using a new version?
The main value is being created by selling courses and convincing people they're late and need to catch up.
To me the main question is the long term pricing.
It is said that major providers more than break even on what they're charging.
But at the same time that's not the point of capitalism, is it? The point is to charge close to the value you're providing.
My lunch money is approximately $10, and I often blow through as much in Claude tokens generously provided by the company that hired me. But I'm not getting just $10 of value from those tokens; I'm getting much more.
The cost of entry to this market is extremely high. Should Anthropic win and become a near-monopoly, it is bound to keep increasing prices to the point where the cost matches the value it's providing.
That's the endgame of every AI company out there. It's worth using these tools now, while there's still competition and before moats are established.
Say what you want, but the layoffs of people who don't use these tools, replaced by those who do, have happened and continue to happen. I miss manual coding just as much as the next man, but this seems like a hot take.
How else can you be in the right place and right time to discover a problem to solve that can’t be seen from afar?
I’m running a solo saas. In the last six months, I’ve added about $300k in ARR. There’s zero chance I could have done this without AI. My velocity just keeps going up, month after month.
> I'm OK being left behind, thanks!
> It is 100% OK to wait and see if something is actually useful.
> I took part in a vaccine trial
> Getting Jabbed With EXPERIMENTAL SCIENCE!
This is such a weird article. The author presents so many anecdotes that contradict his own conclusion.
As someone further down the road in my career, I would argue that waiting is your prerogative but you do so at your own peril.
I made these kinds of mistakes early in my career: I stuck with PHP for far too long, ignoring all the changes in frontend design trends, React, etc. I was using jQuery far too late in my career, and it really hurt me during interviews. What I was doing was seen as dated, and it made ageism far worse for me.
I was even showing a portfolio website that used tables instead of divs.
I had to rapidly skill up, and it takes longer than you think when you stick too long with what works for you.
If AI truly is a nothing-burger, then guess what? Nothing lost, and perhaps you learned some adjacent tech that will help you later. My advice is to NEVER stop learning in this field.
Learning is your true superpower. Without that skill, you are a cog that will be easily replaced. AI has revealed to me who among my colleagues is curious and a continuous learner. Those virtues have proven, over the course of my 25+ year career in technology, to be what keeps you relevant and marketable.
>what's the point of "getting in early"?
You're trying to make the point using Bitcoin, but in the early 2010s I had just over 14,000 of them, so I can quite clearly see a point in getting in early.
I understand and empathize with his sentiment, but I think he is missing the point. Using AI effectively as an engineer requires a paradigm shift in terms of how you work. You cannot approach your work as you did in the past, and use AI and expect it to be a big improvement. In fact, if you do that you will likely be disappointed, and worse off. Shifting your paradigm is one of the hardest things you can do, even more so if you have been in the field for a while, but it is also the most rewarding, and opens up many new possibilities. It's not about being left behind, as much as it is about limiting yourself unnecessarily, by staying in your comfort zone.
Comparing these tools to the crypto or NFTs hype is so out of touch with reality.
This is more on the scale of the invention of the press, the telegraph, or the internet itself.
"I'm ok being left behind, I will join this Internet thing when it really becomes useful"...
Ok... you do you. Hope you don't get there too late.
I'm surprised nobody has mentioned that Claude is the realization of all this blockchain work: an internet computer you rent time on, where computation is measured in tokens :)
So? The economy is entertainment. When crypto was the hype, billions were made and burned building whatever entertaining thing around it. Now it's AI's turn. Billions will be made and burned. The economy is just a fun game. Let's have fun. The idea that everything needs to be "useful" is highly subjective. What is truly useful? Is it food? Shelter? Medicine?
> What is there to be left behind from?
Employment?
It is a far better investment to attend to the eternities rather than the times.
His ignorance is my first-mover opportunity!
Anyone obsessively insisting that others adopt $tech, with threats that you'll be obsolete, left behind, whatever, is just selling you something. If anything, they should be trying to keep it a secret, so that they stay among the elite few who get outsized benefits from $tech while everyone else plays in the mud.
I mean, whatever, man.
This line, as one example:
> For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.
Like a lot of tech, Flash had its moment in the sun and then faded away, but that "moment" lasted a decade, and plenty of people got their start because of it or built successful businesses around it. Did they have to pivot as Flash waned? Sure, but change is part of life.
I’m sorry but I find the take expressed in this piece to be absolutely miserable and uninspiring.
But, hey, congratulations on the 20:20 hindsight, I suppose.
Okay, this text was pretty good. It's refreshing to read something that doesn't seem written by AI, too (which would be ironic given the contents).
The only scenario where I think it pays off to be on top of the hype is if you are chasing the money sloshing around the latest fad. You know, the hustle culture thing. If that's not your thing, waiting until things are established (if they ever get there) is harmless.
And yeah, AI as it is now is at best moderately useful. I use it on a daily basis, but could do without it with little harm.
I'm upvoting because it's useful to see and debate this viewpoint, which is shared by many engineers I know.
I do think it's a bad take though. Not all new trends are the same: the metaverse was an obvious flop and crypto hasn't found practical applications. AI isn't like those because it's already practically changed the way I get my job done.
It takes time to learn skills, and getting started earlier means more time to use them in your working life.
i’ve said this before, but the “left behind” narrative is FUD nonsense. as an llm avoider i’ve never felt further _ahead_ than now. all of my peers who never bothered to learn their tools (which gave tangible benefits) have opted into deskilling themselves further.
it’s readily apparent who has bought into the llm hype and who hasn’t
It's a problem of motivation, all right? Now if I work my ass off and Initech ships a few extra units, I don't see another dime, so where's the motivation? And here's something else, Bob: I have eight different bosses right now.
It amazes me that companies are developing proprietary IP with somebody else's cloud-based AI that ingests and learns from everything that they type and it generates.
These companies are paying for the privilege of having their IP stolen.
When did ignorance become a virtue? Or is it just the contrarian mantra?
This reasoning is solid and applies equally to AI. I do not need it crammed into every service and forced on me, thank you very much. "If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing, not yours."
Some people are not OK. Some people lose their jobs and suffer because they are too complacent and it's too uncomfortable to adapt.
This is the lazy path, not the wise one.
Getting into any technology early only makes sense if you are building your business on top of it, or you are making money from it in some way. Other than that, it makes sense for the rest of us to wait.
Of course those that believe that AI will convert into AGI and destroy society as we know it won't be convinced.
The general idea is true, except for this particular technology.
When AI is easy to pick up and guide, guess what: there will be no need for a programmer to pick it up. AI will be using itself, a Claude manager driving Claude programmers.
So leverage AI while you still can provide value doing so.
It's literally a "use it or lose it situation".
Whenever I hear "It's never too late to do X", I can't help but think "Well in this case, there is no harm in waiting a bit longer, is there?".
I wouldn't play that game with LLMs.
I'm glad I missed: GraphQL, Kubernetes, Microservices, the Metaverse.
I'm glad I jumped early on: Linux, Python, virtualization, cloud, nodejs, Solana.
I wish I'd gotten into Rust and LLMs earlier.
> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
I mean... yeah? It's obviously true. However people use LLM coding today not because they're "afraid of being left behind" or "investing into a new tech" or whatever abstract reasoning. It's because they're already reaping the benefit right away. It takes just a few hours to go through like 80% of the learning curve.
The thing is, Bitcoin, at least before cryptocurrencies were picked up by "tech bros", was originally a way to disconnect from the corrupt, centralized banking system.
LLMs, at the moment, are all about giving up your own brain and becoming fully dependent on a subscription-based online service.
It's interesting to see this author's historical takes about AI.
IMO they read a little desperate, and very much like the hype bros but from the opposite side. Take a look at the articles if you don't believe me.
https://shkspr.mobi/blog/tag/ai/
- I'm OK being left behind, thanks!
- Unstructured Data and the Joy of having Something Else think for you
- This time is different
- How close are we to a vision for 2010?
- AI is a NAND Maximiser
- Reputation Scores for GitHub Accounts
- Agentic AI is brilliant because I loath my family
- Stop crawling my HTML you dickheads - use the API!
- Removing "/Subtype /Watermark" images from a PDF using Linux
- LLMs are still surprisingly bad at some simple tasks
- Books will soon be obsolete in school
- Winners don't use ChatGPT
- Grinding down open source maintainers with AI
- Why do people have such dramatically different experiences using AI?
- Large Language Models and Pareidolia
- How to Dismantle Knowledge of an Atomic Bomb
- GitHub's Copilot lies about its own documentation. So why would I trust it with my code?
- LLMs are good for coding because your documentation is shit
I find the hivemind terribly oppressive at times. AI tools are great, but in the end it seems to me that the results matter most. However, we seem to go from hype to hype, again and again. It's all so tiresome. Why can't we just respect individual choices and focus less on the tools and more on the results?
I've been using AI/LLMs for 3 years non-stop and feel like I've barely scratched the surface of learning how to wield them to their full potential. I can't imagine the mindset of thinking these tools don't take extreme dedication and skill to master.