For anyone who has not read the cockpit recording of Air France 447, I would encourage them to[1]. It is simply a jaw-dropping study in how quickly things go wrong: a risk with AI we have barely begun to acknowledge, let alone regulate, as a community.
[1](https://tailstrike.com/database/01-june-2009-air-france-447/)
The answer has always been the same: self-regulated professions and trade unions. Instead, the ever-efficient software engineers have efficiently dug their own grave. The regulated professions aren't going to be affected by AI, because their members understand that preserving job security[0], pay, and quality of life is more important than automating themselves out of existence.
[0] https://www.bma.org.uk/news-and-opinion/medical-degree-appre...
Every day I sit down to build a product for my clients. I am a one-man shop _now_. Before, I had people helping me. My mental state is not good. A very odd thing happens when Claude or Codex completes code fast: I begin to think of all the other things needed to make the AI agent work better. I begin to worry about problems that other people used to help me with and think, "Can I do those too?" Problems like product design, devops work, etc. In trying, I get nerd-sniped by the velocity people seem to have, and these are respected devs, not just Twitter claims. And because I am so bad at "doing it all", the long hours I have to put in are causing my mental health to suffer. I miss the friends and colleagues I worked with.
I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should have been happy that I no longer had to grind on code, and some days I truly am, but that it would yield such poor quality of life at such a high cost was not what I expected...
I really appreciate this series of posts, as it serves as a good summary of key points of the discourse around AIs, and links to the relevant articles etc. I find following all those discussions myself exhausting, so if I can find this all in one place and read it nicely grouped, that is very helpful.
Thank you for this aphyr.
My one complaint is that people seem to put "CEOs" on a pedestal any time this comes up, as if they're an alien life form and oh no, they're going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.
Another interesting one from 'aphyr -- I think the points around the Ironies of Automation deserve deeper focus, possibly even a separate follow-up post.
I would encourage folks to look at the following industries: nuclear safety, commercial aviation, remote surgery. These industries have dealt with the issues of automation for much longer than we have as programmers.
In the research I've done, these industries went through a similar journey in the 20th century as we are now: once something becomes automated enough, the old way simply won't work. You have to evolve new frameworks and procedures to deal with it.
So in the case of aviation, they developed CRM and SRM (crew resource management and single-pilot resource management): how to manage the airplane as a crew, and how to manage it as a solo operator. Remember that modern airplanes are highly automated! The human pilot is typically not hands-on-wheel for most of the flight.
In the case of surgeons, they found that de-skilling without regular practice can occur in as little as four weeks! To combat that, some surgeons are now required to practice in simulated environments to keep their skills sharp.
My feeling is that 'aphyr is right in the short-to-medium term. Current market forces and the US regulatory posture (or lack thereof) mean fewer rules and less enforcement. IMHO the results are depressingly predictable, but the train has left the station with enough momentum that there's no stopping it. If we survive long enough to make it past the medium term, things will change.
Great article. Near the end it talks about where the money goes and whether there will be universal basic income. I think those paragraphs assume that if models get very smart, all the money will go to big tech.
But, thanks to all the companies working on open-weight models, I'm starting to think this might not happen. Currently, open-weight models are said to be just months behind the top players (and I think we should really do what we can to keep it that way).
I'm wondering what the predictions would be in the case where AI becomes very powerful, but models are also generally available.
Two possibilities come to mind. In the first, all the money no longer spent on employment would go toward hardware. New hardware manufacturers or innovators could jump in and create a bit more employment, but eventually it would all converge on the only finite resource in the chain: the materials and minerals needed for the hardware. Those might become the new "petrol". It's possible that eventually we would have built enough chips to power all the AI we need without further extraction, but I wouldn't underestimate our ability to waste resources when they feel abundant.
In the second possibility, alongside a very powerful open-weight LLM, there could be big performance advancements that make hardware no longer the bottleneck. But I'm struggling to imagine this scenario. Maybe we would all be better off? Maybe we would all just be depressed because most people won't feel "useful" to society or their peers anymore?
> I can imagine a future in which some or even most software is developed by witches, who construct elaborate summoning environments, repeat special incantations (“ALWAYS run the tests!”), and invoke LLM daemons who write software on their behalf.
This sort of prompting is only necessary now because LLMs are janky and new. I might have written this in 2025, but now LLMs are capable of saying "wait, that approach clearly isn't working, let's try something else," running the code again, and revising their results.
There's still a little jankiness but I have confidence LLMs will just get better and better at metacognitive tasks.
UPDATE: At this very moment, I'm using a coding agent at work and reading its output. It's saying things like:
> Ah! The command in README.md has specific flags! I ran: <internal command>. Without these flags! I missed that. I should have checked README.md again or remembered it better. The user just viewed it, maybe to remind me or themselves. But let's first see what the background task reported. Maybe it failed because I missed the flags, or passed because the user got access and defaults worked.
AI is already developing better metacognition.
> One of her key lessons is that automation tends to de-skill operators
I recently discovered an example of this phenomenon in a completely unrelated area: navigation. About a week ago, I realized that I couldn't remember the exact turns to reach a place I'd started driving to recently, even after driving there 3-4 times over a month. Each time I had used Google Maps. When I drove pre-Google-Maps, I would typically develop a good spatial model of a route by my third drive. That skill seems to have atrophied: even when I explicitly decide to drive without Google Maps and make mental notes of the turns, my retention of new routes is much weaker than it used to be. Thankfully, the routes I retained before becoming Google-Maps-dependent are still there.
I love the analogy of AI coding as witchcraft! It's very accurate to how working with these tools feels. At one point I was forced to invoke a "litany against stubbing" in a loop to make Claude Code actually implement a Renode setup for some firmware. That worked really well.
It feels like hexing the technical interview come to real life ;)
> Machine learning seems likely to further consolidate wealth and power in the hands of large tech companies
Only if you let it. You can own the means of production. I self host my daily driver LLMs in hardware in my garage.
Never given money to an LLM provider and never will. I only do work with tools I own.
Programming is indeed becoming witchcraft; with LLMs it is of the utmost importance that you choose the right database administrator.
For example, I'm now relying on Soteria, the Greek goddess of safety, salvation, and preservation from harm, to act as my database administrator.
Every time I hear of an AI hallucination event, I am reminded of what often happens with synthesizers (the musical variant): an instrument set up for musicality, creativity, and exploration which, at the mere glance of a fingertip upon a delicately balanced knob, can turn immediately into ear-splitting terror and calamity, if one is... you know... not too careful.
We have to remember that the result of our prompting is a synthesis, formed from the mass psychosis of a humanity that is simultaneously capable of being completely and utterly heinous to each other, and gloriously noble and kind as well, with nought but a stray new word, and a thousand old ones forgotten, to keep us all together or not.
In any case, all culture is a lie, which only persists in the re-telling. The past is a lie, too, somehow, someday, forgotten the day nobody remembers it. Hope you make some tunes into the winds and they echo on forever. And by you, I mean, not an AI/ML-based entity, but rather, the source of all lies, the human soul itself.
In the case of UBI, how would we differentiate between a previously highly paid professional (SWE, lawyer, author) and a pauper (janitor, car washer, unemployed)?
It’s only fair that they would receive the same amount. But then how can the former category continue to fulfill their obligations?
previously: https://news.ycombinator.com/item?id=47754379
i respect the author of this post wayyyy too much to ever imply that i know more than them, or that i even have proprietary knowledge that they, themselves do not possess. i admire aphyr, and i aesthetically agree with many of the arguments offered forth. but this whole thing feels a bit cherry-picked— i’m not gonna go chapter-and-verse (cf. belt-and-suspenders) about it, but on some levels this comes across as a bit superficial. i think the general thrust— that ai is a sort of Narcissus’s pond— is completely a reasonable and well-considered take. but i would be shocked if someone with the intellectual powers of someone like Aphyr has never had an interaction with an ai in which they felt like they were interacting with the deep recesses of their mind in a way both profound and, more importantly, productive. and yeah, there’s plenty of pyrite in them thar hills. but, it does have this almost Lord of the Rings The One Ring -esque pull when you get into a certain “embedding space” (/ thought space) in a certain thread conversing with ai. it genuinely is a profound transformation of cognition, and working superlinearly productively with it is a matter of “when”, not “if”. i share all the same aesthetic concerns, and all the deeper ones. but there have been sessions that i have had with ai that made me blankly stare up at the heavens as well, and i don’t think i’m anywhere near the only one.
> This feels hopelessly naïve. We have profitable megacorps at home, and their names are things like Google, Amazon, Meta, and Microsoft. These companies have fought tooth and nail to avoid paying taxes (or, for that matter, their workers). OpenAI made it less than a decade before deciding it didn’t want to be a nonprofit any more. There is no reason to believe that “AI” companies will, having extracted immense wealth from interposing their services across every sector of the economy, turn around and fund UBI out of the goodness of their hearts.
> If enough people lose their jobs we may be able to mobilize sufficient public enthusiasm for however many trillions of dollars of new tax revenue are required. On the other hand, US income inequality has been generally increasing for 40 years, the top earner pre-tax income shares are nearing their highs from the early 20th century, and Republican opposition to progressive tax policy remains strong.
I think we are, in general, a highly naive, gullible class of people: we were conditioned, programmed, and placed into environments where being so was the norm and was rewarded. The leaders and resource extractors, whom we gullibly allow to trample our dignity and our rights, take advantage of this and reinforce it through lobbying and influence over the mainstream culture and media campaigns around us. Further, when social media becomes a threat to their status, they have been shown to employ their influence there too, through censorship and more. We would therefore be best served by learning not to be gullible and growing some balls.
Is anyone else just getting this?
<h1>Unavailable Due to the UK Online Safety Act</h1>

I wonder if vibe coding is partly what happens when software engineering fails to converge on reusable abstractions. Instead, we got fragmented tools and endless reinvention of the same components, and LLMs arrived as an ad hoc abstraction layer on top.
Omnissiah-bothering, I call it.
> I continue to write all of my words and software by hand, for the reasons I’ve discussed in this piece—but I am not confident I will hold out forever.
There it is, an actual em-dash in the wild, written by hand.
I really wish we'd stop arguing about AI with a "some automation failed, so all automation is bad" approach.
Yes, AF447 crashed due to lack of training for a specific situation. And yet, air travel is safer than ever.
Yes, that Tesla drove into a wall, and yet robotaxis exist, work well, and are significantly safer than human drivers.
Yes, there are a lot of "witchcraft" approaches to working with AI, but there are also significant accelerations coming out of the field that have nothing to do with witchcraft.
Yes, AI occasionally makes very stupid mistakes - but ones any competent engineer would have guardrails in place against.
And so a lot of the piece spends time arguing against strawmen propped up by anecdotes. That detracts from the deeply necessary discussion kicked off in the second part: labor shock, capital concentration, and fever dreams of AI.
The problem with AI isn't that it's useless and will disrupt the world. It's that it's already extremely useful, and that's the thing that'll lead to disrupting the world.
> Another critical lesson is that humans are distinctly bad at monitoring automated processes
Humans are also distinctly bad at noticing certain kinds of bugs in software: think off-by-one errors, deadlocks, or any bug you've stared at for days without noticing the one missing or extra semicolon. But LLMs can generate a tsunami of subtly wrong code in the time it takes a reviewer to notice one typo and miss all the rest.
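To illustrate (a hypothetical example, not from the article or this thread), here's the kind of off-by-one that sails past review because both versions look equally plausible at a glance:

```python
def last_n(items, n):
    # Subtly wrong: the slice items[-n:-1] silently drops the final element.
    return items[-n:-1]

def last_n_fixed(items, n):
    # Correct: an open-ended slice keeps the final element.
    return items[-n:]

print(last_n([1, 2, 3, 4, 5], 3))        # [3, 4]  (wrong)
print(last_n_fixed([1, 2, 3, 4, 5], 3))  # [3, 4, 5]
```

A reviewer skimming hundreds of generated lines will catch one of these and miss the next ten; the failure mode scales with volume, not with difficulty.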
The comparison with sociopaths is a good one. On the surface it's all human behavior, but if you lift the veil even a little, it becomes clear there's no substance, no conscience, etc.
Read up on Cluster B personality disorders (borderline, narcissism, sociopaths/psychopaths) and you see the similarities. Love bombing, gaslighting, a shared fantasy, etc. It's very interesting and scary at the same time.
Wow, the typography is obnoxious on mobile; some lines have only three words due to the text justification.
>> Imagine a co-worker who generated reams of code with security hazards, forcing you to review every line with a fine-toothed comb. One who enthusiastically agreed with your suggestions, then did the exact opposite. A colleague who sabotaged your work, deleted your home directory, and then issued a detailed, polite apology for it. One who promised over and over again that they had delivered key objectives when they had, in fact, done nothing useful. An intern who cheerfully agreed to run the tests before committing, then kept committing failing garbage anyway. A senior engineer who quietly deleted the test suite, then happily reported that all tests passed.
>> You would fire these people, right?
Okay, now imagine a different colleague. One who writes a solid first draft of any boilerplate task in seconds, freeing you to focus on architecture instead of plumbing. A dev who never gets defensive when you rewrite their code, never pushes back out of ego, and never says "that's not my job." A pair programmer who's available at 3 AM on a Sunday when prod is down and you need to think out loud. One who remembers every API you've forgotten, every flag in every CLI tool, every syntax quirk in a language you use twice a year, or even every day.
You'd want that person on your team, right? In fact, you would probably give them a promotion.
Here's the thing: the original argument describes real failure modes, but then commits a subtle sleight of hand. It personifies the tool as a colleague with agency, then condemns it for lacking the judgment that agency implies. But you don't fire a table saw because it doesn't know when to stop cutting, right? You learn where to put your hands.
Every flaw in that list is, at the end of the day, a flaw in the workflow, not the tool. Code with security hazards? That's what reviews are for. And AI-generated code gets reviewed at far higher rates than the human code people have been quietly rubber-stamping for decades. Commits failing tests? Then your CI pipeline should be the gate, not a promise. Deleted your home directory? Then it shouldn't have had the permissions to do that in the first place. In fact, the whole "deleted my home directory" shit is the same thing as "our intern deleted the prod database". We all know that the response to the latter is "why did they have permission to prod in the first place??" AI is the same way, but for some god damn reason people apply totally different standards to it.
Does Aphyr give himself a limit of six semicolons? If their editor returns, will this count drop to zero?
(And before anyone brings pitchforks out, this is what they wrote in a previous article:
> “Cool it already with the semicolons, Kyle.” No. I cut my teeth on Samuel Johnson and you can pry the chandelierious intricacy of nested lists from my phthisic, mouldering hands. I have a professional editor, and she is not here right now, and I am taking this opportunity to revel in unhinged grammatical squalor.
My life was made poorer for knowing that semicolons are apparently a sin, but richer for the rebellion.)
This has been on the front page for over a week in different forms; what gives?
No, you don't have to review every single line of AI-produced code out of security fears. This is quite exaggerated, and I think the author is biased by his own field.
> more like witchcraft than engineering
Welcome to web development buddy
> how ML might change the labor market
Human labor is expensive. If LLMs really do make things cheaper and faster to produce, you don't need as many humans anymore. Again, assuming the improvement is real, there absolutely will be headcount shrinkage at existing businesses. What remains to be seen is how much cheaper machines make the work. 1.5x? 2x? 10x? 100x?
> unlike sewing machines or combine harvesters, ML systems seem primed to displace labor across a broad swath of industries [...] The question is what happens when [..] all lose their jobs in the span of a decade
It's more like hand tools -> power tools; a concept applied to many things. Everyone will adopt them, and you'll need fewer workers who'll work faster with less skill. You get a gradual labor force shrinkage, but also an increase in efficiency, so it's not like a hole is opening up in your economy. A strong economy can create new jobs, from either private or public sources.
> ML allows companies to shift spending away from people and into service contracts with companies like Microsoft
The price of hardware, as it always has, trends downward, while the efficiency of open-weight models is going up (it will plateau eventually, but it's still going up). We already spend $20,000 on servers, whether buying them outright on-prem or renting them from AWS. ML is just another piece of software running on another piece of hardware.
> if companies are successful in replacing large numbers of people with ML systems, the effect will be to consolidate both money and power in the hands of capital
That ship left port like 30 years ago dude. Laborers have no power in the 21st century.
The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.
If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue but we'll also grow a clearer understanding of what current AI can't do.
If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which I would remind people in its original, and generally better, formulation is simply an observation that there comes a point where you can't predict past it at all. ("Rapture of the Nerds" is a very particular possible instance of the unpredictable future, it is not the concept of the "singularity" itself.) Who knows what will happen.