Feels like a false equivalency. It's just my experience, but I've completely ignored crypto and the metaverse, and I don't get the sense I'm missing out on much. In contrast, LLMs in their current state have (for me) dramatically reduced the distance between an idea and a working implementation, which has been legitimately transformative in my software dev life. Transformative for the better? Time will tell I suppose, but I'm really enjoying it so far.
There's value in being early - in the right thing.
- If you'd invested in Bitcoin in 2016, you'd have made a 200x return
- If you'd specialized in neural networks before the transformer paper, you'd be one of the most sought-after specialists right now
- If you'd started making mobile games when the iPhone was released, you could have built the first Candy Crush
Of course, you could just as well have
- become an ActionScript specialist as it was clearly the future of interactive web design
- specialized in Blackberry app development as one of the first mobile computing platforms
- made major investments in NFTs (any time, really...)
Bottom line - if you want to have a chance at outsized returns, but are also willing to accept the risks of dead ends, be early. If you want a smooth, mid-level return, wait it out...
It's a horrifying feeling facing the possibility that the career I spent so much time and money to get into is fading away. Sure, LLMs are not there yet, and they might not ever quite get there. But will companies start hiring again? If productivity has gone up, and it seems like it has, then no.
So, a decade of hanging by a thread, getting by and doubling down on CS, hoping that the job market sees an uptick? Or trying to switch careers?
I went to get a flat tire fixed yesterday and the whole time I was envious of the cheerful guy working on my car. A flat tire is a flat tire, no matter whether a recession is going on or whether LLMs are causing chaos in white collar work. If I had no debt and a little bit saved up I might just content myself with a humble moat like that.
I actually think the opposite approach might be the optimal one, at least from a monetary perspective. That is, be on the cutting edge of something, but be willing to bail out the moment its future starts seeming questionable. Or, more specifically, maximize your foothold in it while minimizing your downside.
Bitcoin is a good example: if you bought it 15 years ago and held it, you're probably quite wealthy by now. Even if you sold it 5 years ago, you would have made a ton of money. But if you quit your job and started a cryptocurrency company circa 2020, because you thought crypto would eat the entire economic system, you probably wasted a lot of time and opportunities. Too much invested, too much risked.
AI is another one. If you were using AI to create content in the months/years before it really blew up, you had a competitive advantage, and it might have really grown your business/website/etc. But if you're now starting an AI company that helps people generate content about something, you're a bit late. The cat is out of the bag, and people know what AI-speak is. The early-adopter advantage isn't there anymore.
> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
I saw a meme on X the other day which roughly says that one does not have to learn AI if one learns slowly enough in the age of AI. I guess the undertone is that AI evolves faster than one can learn the tricks of using it.
Agree with the message; coding since 1986, I have learned not to suffer from FOMO and to wait for the dust to settle.
Ironically, one might even get projects to fix the mess left behind, as the magpies focus their attention on something else.
In the case of AI, the fallacy is thinking that everyone riding the wave gets to stay around, now that a team can deliver more with fewer people.
Maybe rushing out to the AI frontline won't bring the returns one is hoping for.
EDIT: To make the point even clearer: with SaaS and iPaaS products, serverless, and managed clouds, many projects now require a rather small team, versus having to develop everything from scratch on-prem. AI-based development reduces team size even further.
On the other hand, when cloud computing started to come in, I knew a bunch of sysadmins. Some were in the "it'll never take off" camp, and no doubt they came around eventually, kicking and screaming.
But the curious early adopters were the ones best positioned to be leading the charge on "cloud migration" when the business finally pulled the trigger.
Similarly with mobile dev. As a Java dev at the time that Android came along, I didn't keep abreast of it - I can always get into it later. Suddenly the job ads were "Android Dev. Must have 3 years experience".
Sometimes, even just from self-interest, it's easier to get in on the ground floor when the surface area of things to learn is smaller than it is to wait too long before checking something out.
For me, it's beyond doubt these tools are an essential skill in any SWE's toolkit. By which I mean, knowing their capabilities, how they're valuable and when to use them (and when not to).
As with any other skill, if you can't do something, it can be frustrating to peers. I don't want colleagues wasting time doing things that are automatable.
I'm not suggesting anyone should be cranking out 10k LOC in a week with these tools, but if you haven't yet done things like sent one in an agentic loop to produce a minimal reprex of a bug, or pin down a performance regression by testing code on different branches, then you could potentially be hampering the productivity of the team. These are examples of things where I now have a higher expectation of precision because it's so much easier to do more thorough analysis automatically.
There are always caveats, but I think the point stands: people generally like working with other people who are working as productively as possible.
A lot of people feel this way.
But IMO the most fruitful thing for an engineering org to do RIGHT NOW is learn the tools well enough to see where they can be best applied.
Claude Code and its ilk can turn "maybe one day" internal projects into live features after a single hour of work. You really, honestly, and truly are missing out if you're not looking for valuable things like that!
I've noticed two very obvious trends. The high performers who previously wrote the most PRs are now using AI the most aggressively, and the number of PRs they produce has almost doubled. The other trend is that large teams are being split into smaller teams that work on new products. I hope the trend is more projects and less grunt coding. (I know the AI coders are not yet L6, but I suspect they will achieve L3-L5 this year.)
>I didn't use Git when it first came out.
This really hinges on what you mean by "didn't use git".
If you were using bzr or svn, that's one thing.
If you were saving multiple copies of files ("foo.old.didntwork" and the like), then I'd submit that you're making the point for the AI supporters. I consulted with a couple developers at the local university as recently as a couple years ago who were still doing the copy files method and were struggling, when git was right there ready to help.
That's a reasonable strategy. I don't think spreading FOMO is good. But pragmatically, I enjoy working with the latest crop of AI models on all sorts of computer tasks: coding, but also plenty of sysadmin work and knowledge organization.
I didn't pick them up until last November and I don't think I missed out on much. Earlier models needed tricks and scaffolding that are no longer needed. All those prompting techniques are pretty obsolete. In these 3-4 months I got up to speed very well; I don't think 2 years of additional experience with dumber AI would have given me much.
For now, I see value in figuring out how to work with the current AI. But next year even this experience may be useless. It's like, by the time you figure out the workarounds, the new model doesn't need those workarounds.
Just as in image generation maybe a year ago you needed five loras and controlnet and negative prompts etc to not have weird hands, today you just no longer get weird hands with the best models.
Long term the only skill we will need is to communicate our wants and requirements succinctly and to provide enough informational context. But over time we have to ask why this role will remain robust. Where do these requirements come from, do they simply form in our heads? Or are they deduced from other information, such that the AI can also deduce it from there?
I don't understand the rush to be "the first". Facebook isn't the first social media, Google isn't the first search engine, iPhone is not the first smart phone, Microsoft is not the first OS, the list goes on.
Clearly there's an advantage for being an early adopter, but the advantage is often overblown, and the cost to get it is often underestimated.
“There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Are you genuinely saying that they'll all be left behind because they didn't learn your technology in utero?”
This is a great framing.
In general, a good strategy is just staying a little bit behind. Let the new fads play themselves out. Some have staying power. Bitcoin never did turn into a usable currency, just another speculator's toy. Luckily I am - so far - in a position where I can watch the AI thing from the sidelines to see how it plays out.
My experience so far tells me that the default path with AI tooling is that it lets us create without learning. So the author is right in that they can pay for a seat in this revolution whenever they want.
A practitioner with more experience may be a few percentage points more productive, but the median workflow (grab a subscription, get the tool, prompt) will be mostly good enough.
The Programmer's Prayer
Nothing is happening. And if it is, it's just hype.
And if it isn't, it only works on toy problems. And if it doesn't, I'll learn it when it stabilizes.
And if I can't, the gains all go to owners anyway. And if they don't, it's just managers chasing metrics.
And if it isn't, well I'm a real programmer. And if I'm not, then neither are you.
I'm healthily skeptical of new technology. Meaning I'm not the early adopter. But I've also found over the years I don't get left behind. I become curious at the time things are stabilising. Maybe on the cusp where there's still a lot of pushback but there's also clear value. Crypto in 2014-2017. AI in 2023-2024. You don't have to feel FOMO but if you're a technologist, if you have a healthy desire to evolve, change and learn then you'll naturally pick things up. I went from total crypto skepticism in 2014 to investing most of what I had. I went from total AI skepticism to doing RAG for the Quran and agentic tech for the small web. I think there's value in staying true to who you are but also naturally discovering and learning on your own timeline.
If this is about vibe-coding.
I remember when React was the new hotness and I was still using jQuery. I didn't learn it immediately; maybe a couple of years later is when I finally started to use React. I believe this delayed my chances of getting a job, especially around the time when hiring was good, e.g. 2016 or so.
With vibe-coding, it just sucks the joy out of it. I can't feel happy if I can just say "make this" and it comes out. I enjoy the process... which, yeah, you can say is dumb/a waste of time, to bother with typing out code by hand. For me it isn't just about "here's the running code"; I like architecting it, deciding how it goes together, which, yeah, you can also do with prompts.
Idk I'm fortunate right now using tools like Cursor/Windsurf/Copilot is not mandatory. I think in the long run though I will get out of working in software professionally for a company.
I do use AI though, every time I search something and read Google's AI summary, though you could argue it would be faster to just use a built-in tool that types for you vs. copy-paste.
Which again... what is there to be proud of if you can just ask this magic box to produce something and claim it as your own? "I made this".
Even design can be done with AI too (mechanical/3D design) then you put it into a 3D printer, where is the passion/personality...
Anyway yeah, my own thoughts, I'm a luddite or whatever
I don't think the "craftsman" self-identification is going to work for software engineers anymore. The tool capabilities are too dynamic, you have to be some sort of opportunistic pirate/entrepreneur. Sure you can jump in and get up to speed on some aspect of the toolchain later on, but the identity shift is the hard and slow part that I think it's wise to get started on ASAP.
I am increasingly feeling okay with the idea of being left out. The worst parts of working professionally in a software development team have been amplified by LLMs. Ridiculously large PRs, strong opinions doubled down due to being LLM-"confirmed", bigger expectations coming from above, exceptionally unwarranted confidence in the change or approach the LLM has come up with.
I am dying inside when I make a comment and receive a response that has clearly been prompted toward my comment and possibly filtered in the voice of the responder if not copied and pasted directly. Particularly when it's wrong. And it often is wrong because the human using them doesn't know how to ask the right questions.
Fortunately, most of the fundamental technological infrastructure is well in place at this point (networking, operating systems, ...). Low skilled engineers vibe coding features for some fundamentally pointless SaaS is OK with me.
Crypto was interesting to think through, and it was clear very early on how many flaws it has. It basically just moved the goalposts a level deeper, and it was quite an eye-opener how few people even understood the major flaw of crypto: you can only do things safely with crypto when everything is on the blockchain, and it has not solved any real issue off the blockchain (which means you can literally just send crypto to each other, and that's it).
But AI is a beast.
It's A LOT to learn: RAG, LLMs, architecture, tooling, ecosystem, frameworks, approaches, terms, etc., and this will not go away.
It's already clear today, and it was clear with GPT-3, that this is the next thing, and in comparison to other 'next things' it's arriving in the perfect environment: the internet allows for fast communication, and manufacturing has never been as fast, flexible, and globally scaled as it is today.
Which means whatever the internet killed and changed, will happen / is happening a lot faster with ai.
And tbh, if someone gets fired in the AI future, it will always be the person who knows less about AI and less about how to leverage it than the other person.
For me personally, I just enjoy the whole new frontier of approaches, technologies, and progress.
But I would recommend EVERYONE to regularly spend time with this technology. Play around regularly. You don't need to use it, but otherwise you will not gain any gut knowledge of one model vs. another, and it will be A LOT to learn when it crosses the line for whatever you do.
I make my money cleaning up all the stupid fads. The tail end of the curve is profitable.
I agree with the sentiment of this article.
Sadly, I'm still disagreeing while crypto kiddies are driving past me in Lambos. If it's the future of money, yes, we'll get there eventually, but like every technology shift, there's a lot of money to be made in the transition, not after. *
* I sold all crypto a few years ago and I'm a happier person :D
> I wrote my MSc on The Metaverse. Learning to build VR stuff was fun, but a complete waste of time. There was precisely zero utility in having gotten in early.
Wonderful life lesson on hype cycles. I am curious if hype literacy will join media literacy in academia.
Many companies want to compare themselves to Apple and at the same time say they are disruptors and innovators, but Apple is probably the company best at being okay with being left behind. Many think of them as experts in products, but to me they have always been best at copying what others are doing and refining it, maybe not necessarily better technically, but always seeing the market fit better than others. Like poker: the later you have to make your decision, the more information you have.
I heard from a senior leader at Amazon that "Today, I am choosing how I fail". This has echoed in my head for many years.
At any moment, you are failing at thousands of things that you may not even know about, and that is the gist of what I took away from it. The thing is that you have to be OK when you intentionally choose to not invest in something as regret is ultimately a poison.
The other thing is this: you are not obligated to bring people with you and you have a choice of free association.
> For every HTML 2.0 you might have tried, you were just as likely to have got stuck in the dead-end of Flash.
I'll just say, and I understand this is not the point of the article at all, but for all its faults: if you got in on Flash as early as HTML 2.0 and you were staring at an upcoming dead end of Flash in, say, 2009, you also knew, or had been exposed to, plenty of JavaScript, E4X, and what were essentially entirely client-side SPAs, giving you a sort of bizarro preview of React a couple of years later. Honestly, not a bad offramp even if Flash itself didn't make it.
I'm reminded of the parable of the Chinese farmer (a quick Google search if you aren't familiar) when I see this sentiment. Is going all in on crypto good or bad? Maybe so, maybe not. We'll see. Is going all in on AI-assisted development good or bad? Maybe so, maybe not. We'll see.
All I know is, I've always enjoyed building things. And I enjoy building things with AI-assisted tools too, so I'll continue doing it.
I remember working on a few early tools for VRML towards the end of the '90s... It was cool, but far from great... the mention of VR in the article reminded me of it.
That said, my only regret with Bitcoin was deleting my early wallets when I realized the coins were only worth $.25 ... if I'd had any inkling what they'd be worth someday, I'd probably have just bought $1000 worth back then and zipped it up until closer to today. I'm truly curious how many bitcoins were similarly deleted from existence.
The biggest issue is - you will be left behind, in the end. This is the race you cannot win. You can try as much as you like, spend free time trying to catch up, and you, most likely, will lose. If you play this game, you've already lost.
I am actually surprised by people willingly trying to be more productive, like... machines. And then crying when machines are proven to be better at being machines than meatbags.
> There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate.
No, they are not.
Solid piece, very wise. AI is fun but where I find it useful with code is first writing more thorough comments and then writing the base of your tests.
Writing the actual code that's efficient is iffy at times and you better know the language well or you'll get yourself in trouble. I've watched AI make my code more complex and harder to read. I've seen it put an import in a loop. It's removed the walrus operator because it doesn't seem to understand it. It's used older libraries or built-ins that are no longer supported. It's still fun and does save me some time with certain things but I don't want to vibe code much because it removes the joy out of what you're doing.
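For what it's worth, the walrus-operator pattern that tools tend to rewrite away looks something like this (a generic illustration of my own, not the actual code in question):

```python
import io


def checksum(stream, size=4096):
    """Sum all bytes from a stream, reading in chunks.
    The walrus operator (Python 3.8+) assigns and tests in one expression."""
    total = 0
    while (chunk := stream.read(size)):
        total += sum(chunk)
    return total


def checksum_no_walrus(stream, size=4096):
    """The same logic without :=, which needs an extra break or a
    duplicated read; an AI 'simplification' often lands here."""
    total = 0
    while True:
        chunk = stream.read(size)
        if not chunk:
            break
        total += sum(chunk)
    return total


# Both versions agree, e.g. on a three-byte buffer:
assert checksum(io.BytesIO(b"\x01\x02\x03")) == 6
assert checksum_no_walrus(io.BytesIO(b"\x01\x02\x03")) == 6
```

The two are behaviorally identical; the point is that rewriting the first into the second isn't a fix, just a loss of concision.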
One hidden premise of this is "AI tools are not useful now, even if they might be in the future." For example:
> Few are useful to me as they are now.
Except current AI tools are extremely useful, and I think you're missing something if you don't see that. This is one of the main differences between LLMs and cryptocurrency; cryptocurrencies were the "next big thing", always promising more utility down the road. Whereas LLMs are already extremely useful; I'm using them to prototype software faster, Terence Tao is using them to formalize proofs faster, my mom's using them to do administrative work faster.
> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
In contrast to the current top comment [1], I don't think this is a wise assessment. I'm already seeing companies in my network stall hiring, and in fact start firing. I think if you're not trying to take advantage of this technology today then there may not be a place for you tomorrow.
I find it hard to empathise with people who can't get value out of AI. It feels like they must be in a completely different bubble to me. I trust their experience, but in my own experience, it has made things possible in a matter of hours that I would never have even bothered to try.
Besides the individual contributor angle, where AI can make you code at Nx the rate of before (where N is say... between 0.5 and 10), I think the ownership class are really starting to see it differently from ICs. I initially thought: "wow, this tool makes me twice as productive, that's great". But that extra value doesn't accrue to individuals, it accrues to business owners. And the business owners I'm observing are thinking: "wow, this tool is a new paradigm making many people twice as productive. How far can we push this?"
The business owners I know who have been successful historically are seeing a 2x improvement and are completely unsatisfied. It's shattered their perspective on what is possible, and they're rebuilding their understanding of business from first principles with the new information. I think this is what the people who emerge as winners tomorrow are doing today. The game has changed.
Speaking as an IC who is both more productive than last year, but simultaneously more worried.
I would not have started my article with "I could get into Bitcoin anytime, why the rush". That is not the killer first example you think it is. It's been ~17 years of proof so far that you would've made a ton of money by simply mindlessly buying $200 of bitcoin every month (after your lower-risk contributions are made) and just holding onto it.
I mean, if you did that you'd have contributed ~$38K USD by 2026 and have ~$1.5B USD now if you started in 2010. BTC being so cheap back then dominates the whole process, so to demonstrate the point further: if you had heard about it all those years, were nervous about trying it, and decided to wait until 2016, you'd still only need to put in $24K overall to come out with ~$450K by 2026.
That's not biting your fingernails over the price changes, the hype cycles, the price-drop scares. You just set and forget a $200 recurring monthly buy, put your energy elsewhere, and pocket half a million for basically no effort.
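The contribution arithmetic checks out; a quick back-of-the-envelope sketch (the function is my own illustration, and it deliberately models only what you put in, not BTC prices or returns):

```python
def total_contributed(start_year, end_year, monthly=200):
    """Total dollars contributed with a fixed monthly buy,
    from January of start_year up to January of end_year."""
    months = (end_year - start_year) * 12
    return monthly * months


# Starting in 2010: 16 years of $200/month, the ~$38K figure
assert total_contributed(2010, 2026) == 38_400

# Waiting until 2016: 10 years of $200/month, the ~$24K figure
assert total_contributed(2016, 2026) == 24_000
```

The outcome figures ($1.5B vs. $450K) come entirely from historical BTC prices, which is exactly why the early years dominate.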
And if anything is possible in hindsight, then why in hindsight would you write an article acting like bitcoin was a bad decision to be an early adopter for
It's more about job seeking than anything. If you jump on a fad early, and it turns out to be the winner, when you're looking for work you can say you have X years of experience with it, which will be a few more than most of the other candidates.
It also shows a passion for learning and improvement, something hiring managers are often looking for signals of.
But of course it's a trade off. This rewards people who don't have family or other obligations, who have time to learn all the new fads so they can be early on the winners.
> There are 16,000 new lives being born every hour. They're all starting with a fairly blank slate. Are you genuinely saying that they'll all be left behind because they didn't learn your technology in utero?

> No. That's obviously nonsense.
That does not obviously follow. I do worry about the ever-increasing proportion of humanity who are no longer 'economically viable', and this includes people who are not yet born.
This is one of those posts I would like to look back on in a year or two. I am usually a late adopter with everything. This time I think it's different. I am seeing what AI can do with my own eyes. I am creating new things at light speed and figuring out how this all works. I don't think you want to be late to the party on this one.
At some point you commit the time to learn what you need to. I like to think of the analogy to SEO. The veterans in the industry are not who they are because they were at the front of the line. It’s because they have the 15 years of experience under their belt.
Crypto isn't bad because it failed to make early adopters rich — it did make them rich. It's bad because it has horrible externalities in scams, war crimes / sanctions evasion, organised crime — which most of those early adopters were well aware of.
This is fine so long as you don't confuse stubbornness for caution. As technologies lose favor, and others suggested you expand your toolset, don't post about your frustration while you're standing in the unemployment line.
Devs who never mentored or never had to delegate/explain the work to be done to someone else might be in for a rough first few weeks/months.
It is a skill, but not a special AI specific skill.
I've been using AI/LLMs for 3 years non-stop and feel like I've barely scratched the surface of learning how to wield them to their full potential. I can't imagine the mindset of thinking these tools don't take extreme dedication and skill to master.
There’s a lot of truth to this post. I’m very pro AI, and I believe everyone should get comfortable with it because it’s not just the future, it’s already the present. If you want to stay competitive in today’s workforce, AI is going to be part of your toolkit.
But on the other hand... I also only learned git when I needed it at a new job... So we can pump the brakes a bit.
The last comment is ridiculous. Newborns have a literal lifetime to catch up; rather, they will learn what they need when they need it.
Say what you want, but the layoffs of people who don't use these tools, replaced by those who can, have happened and continue to happen. I miss manual coding just as much as the next man, but this seems like a hot take.
I started working in AI/ML about ten years ago. Reasonably early. Today, professionally and financially I'm doing about as well as a typical programmer. I find the field interesting so I have no regrets but I tend to agree with OP.
Guy who's ok with being left behind (crypto, AI) did a MSc on the metaverse. Sounds like he tried to go with the hype once, got burned.
> If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
Broadly speaking, I think this is a wise assessment. There are opportunities for productivity gains right now, but I don't think it's a knockout for anyone using the tech, and I think that onboarding might be challenging for some people in the tech's current state.
It is safe to assume that the tech will continue to improve in both ways: productivity gains will increase, onboarding will get easier. I think it will also become easier to choose a particular suite of products to use too. Waiting is not a bad idea.