I am not sure why the OP is painting this as an "us-vs-them" thing - pro or anti-AI? AI is a tool. Use it if it helps.
I would draw an analogy here between building software and building a home.
When building a home we have a user providing the requirements, the architect/structural engineer providing the blueprint to satisfy the reqs, the civil engineer overseeing the construction, and the mason laying the bricks. Some projects may have a project manager coordinating these activities.
Building software is similar in many respects to building a structure. If developers think of themselves as masons, they are limiting their perspective. If AI can help lay the bricks, use it! If it can help with the blueprint or the design, use it. It is a fantastic tool in the tool belt of the profession. I think of it as a power tool and want to keep its batteries charged so I can use it at any time.
AI is going to put the development of new programming languages on hold for sure, since they won't be in the training set.
Great news if you know the current generation of languages, you won't need to learn a new one for quite some time.
UBI will never happen because the people in power don't want it.
Who is going to control AI? The people in power, obviously. They will buy all of the computers, so running models locally will no longer be feasible. In case it hasn't been obvious: this is already happening, and it will only get worse.
They will not let themselves be taxed.
But who will buy the things the people in power produce if nobody has a job?
This is how civilization collapses.
I think the best hope against AI is copyright. That is, AI-generated software has none: everyone is free to steal and resell it, and those who generated it have zero right to complain or take legal action.
> But I'm worried for the folks that will get fired. It is not clear what the dynamic at play will be: will companies try to have more people, and to build more?
This is the crux. AI suddenly became good and society hasn't caught on yet. Programmers are a bit ahead of the curve here, being closer to the action. But in a couple of years, if not already, all the other technical and office jobs will be equally affected: translators, admin, marketing, scientists, writers of all sorts, and on and on. Will we just produce more and retain a similar level of employment, or will AI be such a force multiplier that a significant number, or even most, of these jobs will be gone? Nobody knows yet.
And yet, what I'm even more worried about, for their society-upending abilities, is robots. These are coming soon, and they'll arrive with just as much suddenness and inertia as AI did.
The robots will be as smart as the AI running them, so what happens when they're cheap and smart enough to replace humans in nearly all physical jobs?
Nobody knows the answer to this. But in 5 years, or 10, we will find out.
Ah yes, AI is so good that they had to break search engines to force people into using it.
I'm sure it will go the worst way possible: demand for code will not expand at nearly the same rate at which coding productivity increases, so the vast majority of coders will become permanently jobless, and the rest will become disposable cheap labor simply due to the overabundance of them.
This is already happening.
AI had an impact on the simplest coding first; this is self-evident. So any impact it had would show first in the quantity of software created, and only later in its quality and/or complexity. And mobile apps are/were tedious work, with a lot of scaffolding and a lot of "blanks to fill" to make them work and get accepted by the stores. So the first thing that should have skyrocketed in numbers with the arrival of AI is mobile apps.
But the number of apps on Apple's App Store is essentially flat, and the rate of increase is barely distinguishable from past years: +7% instead of +5%. Not even visible.
Apparently the world doesn't need, or can't make monetisable use of, much more software than it already has. Demand wasn't quite satisfied, say, 5 years ago, but the gap wasn't huge. It is now covered many times over.
Which means most of us will probably never get another job/gig after the current one. If it's over, it's over, and not worth trying anymore; the scraps that are left of the market are not worth the effort.
This is making me sad. The people who are going to lose their jobs will be literally weaponized against minorities by the crooked politicians doing their thing right now; it's going to be a disaster, I can tell. I just wish I could go back in time. I don't want to live in this timeline anymore. I lost my passion for this job before any of it even happened, at least on paper.
The reason I am "anti-AI" is not because I think LLMs being bad at what they do, nor because I'm afraid they'll take my job. I use CC to accelerate my own work (it's improved by leaps and bounds though I still find I have to keep it on a short leash because it doesn't always think things through enough). It's also a great research tool (search on steroids). It's excellent at summarizing long documents, editing and proofreading, etc. I use it for all those things. It's useful.
The reason I am anti-AI is because I believe it is a net negative for society overall. Not because it is inherently bad, but because of the way it is being infused into society by large corps (and eventually governments). Yes, it makes me, and other developers, more productive. And it can more quickly solve certain problems that were time-consuming or laborious to solve. And it might lead to new and greater scientific and technological advances.
But those gains do not outweigh all of the negatives: the concentration of power and capital in an increasingly small group; the eventual loss of untold millions of jobs (with, as of yet, not even a shred of indication of what might replace them); the loss of skills in the next generations, who are delegating much of their critical thinking (or thinking, period) to ChatGPT; the loss of trust in society now that any believable video can be easily generated; the concentration of power in the control of information if everyone is getting their info from LLMs instead of the open internet (and ultimately, potentially, the death of the open internet); the explosion in energy consumption by data centers, which exacerbates rather than mitigates global warming; and plenty more.
AI might allow us to find better technological solutions to world hunger, poverty, mental health, water shortages, climate change, and war. But none of those are technological problems; technology only plays a small part. And the really important part is being exacerbated by the "AI arms race". That's why I, who was a technological optimist my whole life, am no longer hopeful for the future. I wish I were.
> facts are facts, and AI is going to change programming forever
Show me these "facts"
I see the AI effect as the exact opposite: a turbo version of the "Lisp curse".
I feel like the use of the term "anti-AI hype" is not really fully explored here. Even limiting myself to tech-related applications - I'm frankly sick of companies trying to shove half-baked "AI features" down my throat, and the enshittification of services that ensues. That has little to do with using LLMs as coding assistants, and yet I think it is still an essential part of the "anti-AI hype".
> the more isolated, and the more textually representable, the better: system programming is particularly apt
I've written complete 3D GUIs on the front end. This GUI was non-traditional: it lets you play back, pause, speed up, slow down, and rewind a GPS track like a movie, with real-time color changes and drawing of the track as the playback occurs.
Using Mapbox to do this the straightforward way would be too slow, so I told the AI to optimize it by going straight to shader extensions for Mapbox and writing the GPU code directly.
Make no mistake: LLMs are incredible for non-systems work that requires interacting with 3D and GUIs.
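For readers curious what that kind of approach looks like, here is a minimal sketch of GPU-side track playback, assuming Mapbox GL JS's CustomLayerInterface (the commenter doesn't say which extension mechanism they used; the track data, shaders, and names below are all illustrative):

```typescript
import mapboxgl from "mapbox-gl";

// Hypothetical GPS track: [lng, lat, t] with t a normalized timestamp in [0, 1].
const track: [number, number, number][] = [
  [-122.42, 37.77, 0.0],
  [-122.41, 37.78, 0.5],
  [-122.40, 37.79, 1.0],
];

// Vertex shader: marks vertices the playback head hasn't reached yet.
const vertexSource = `
uniform mat4 u_matrix;
uniform float u_progress;
attribute vec2 a_pos;
attribute float a_t;
varying float v_played;
void main() {
  v_played = step(a_t, u_progress);
  gl_Position = u_matrix * vec4(a_pos, 0.0, 1.0);
}`;

// Fragment shader: discards unplayed segments; fixed color here
// (a fuller version could tint by recency or speed).
const fragmentSource = `
precision mediump float;
varying float v_played;
void main() {
  if (v_played < 0.5) discard;
  gl_FragColor = vec4(0.1, 0.6, 1.0, 1.0);
}`;

let program!: WebGLProgram;
let buffer!: WebGLBuffer;
let progress = 0.0; // advanced/rewound by play/pause/speed controls

const trackLayer: mapboxgl.CustomLayerInterface = {
  id: "gps-track-playback",
  type: "custom",

  onAdd(_map, gl) {
    const compile = (type: number, src: string) => {
      const s = gl.createShader(type)!;
      gl.shaderSource(s, src);
      gl.compileShader(s);
      return s;
    };
    program = gl.createProgram()!;
    gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
    gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
    gl.linkProgram(program);

    // Upload the whole track once: x, y in Web Mercator plus the timestamp.
    const data = new Float32Array(
      track.flatMap(([lng, lat, t]) => {
        const m = mapboxgl.MercatorCoordinate.fromLngLat({ lng, lat });
        return [m.x, m.y, t];
      })
    );
    buffer = gl.createBuffer()!;
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  },

  render(gl, matrix) {
    gl.useProgram(program);
    gl.uniformMatrix4fv(gl.getUniformLocation(program, "u_matrix"), false, matrix);
    gl.uniform1f(gl.getUniformLocation(program, "u_progress"), progress);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    const aPos = gl.getAttribLocation(program, "a_pos");
    const aT = gl.getAttribLocation(program, "a_t");
    gl.enableVertexAttribArray(aPos);
    gl.vertexAttribPointer(aPos, 2, gl.FLOAT, false, 12, 0); // stride = 3 floats
    gl.enableVertexAttribArray(aT);
    gl.vertexAttribPointer(aT, 1, gl.FLOAT, false, 12, 8);
    gl.drawArrays(gl.LINE_STRIP, 0, track.length);
  },
};
```

After `map.addLayer(trackLayer)`, driving playback is just a matter of updating `progress` (from a requestAnimationFrame loop or a scrubber control) and calling `map.triggerRepaint()`; the CPU never re-touches the track data.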
> As a programmer, I want to write more open source than ever, now.
I want to write less. Just knowing that LLMs are going to be trained on my code makes me feel more strongly than ever that my open source contributions will simply be stolen.
Am I wrong to feel this? Is anyone else concerned about this? We've already seen some pretty strong evidence of this with Tailwind.
AI has a significant risk of directly leading to the extinction of our species, according to leading AI researchers. We should be worried about a lot more than job losses.
What happens if the bubble bursts - can we still use all the powerful models to create all this code? Aren't all the agents effectively running on venture capital today? Is this sustainable?
If I can run an agent on my machine, with no remote backend required, the problem is solved. But right now, aren't all developers throwing themselves into agentic software development betting that these services will always be available to them at a relatively low cost?
SOTA LLMs are now quite good at typing out code that passes tests. If you are able to instruct the creation of sufficient tests and understand the generated code structurally, there is a significant productivity multiplier. I have found LLMs hugely useful for understanding codebases more quickly. Granted, it may be necessary to get second opinions and fact-check what is stated, but a big door is now open for anyone to educate themselves.
I think there are some negative consequences to this; perhaps a new form of burnout. With the force multiplier and the assisted-learning utility comes a substantial increase in opportunity cost.
People are afraid because, while AI seemingly gobbles up programmer jobs, there are no economic guardrails visible or planned whatsoever.
"Nah uh I'm not falling for hype _you're_ falling for hype."
There is too much money invested in AI. You can't trust anyone talking about it.
> What is the social solution, then? Innovation can't be taken back after all.
It definitely can.
The innovation that was the open, social web of 20 years ago? Still an option, but drowned out between closed, ad-fueled toxic walled gardens and drained by AI bots making illegal copies.
The innovation that was democracy? Purposely under attack in every single place it still exists today.
Insulin at almost no cost (because it costs next to nothing to produce)? Out of the question for people who live under the regime of pharmaceutical corporations that are not reined in by government, by collective rules.
So, a technology with dubious ROI relative to the energy, water, and land consumed, that incites illegal activities and suicides, and that is in the process of killing the consumer IT market for the next 5 years if not more - because one unprofitable company, without solid verifiable prospects, managed to place dubious orders with unproven money that lock up memory components for unproven data centers... yes, it definitely can be taken back.
This is the first time I've heard sentiment against "AI" hype referred to as hype itself. Yes, there are people ignoring this technology altogether, possibly to their own detriment, but at the stage we are at now, it is perfectly reasonable to want to avoid the actual hype.
What I would really urge people to avoid is listening to what any tech influencer has to say, including antirez. I really don't care what famous developers think about this technology, and it doesn't influence my own experience of it. People should try out whatever they're comfortable with and make up their own minds, instead of listening to what anyone else has to say about it. This applies to anything, of course, but it's particularly important in the technology bubble we're currently in.
It's unfortunate that some voices are louder than others in this parasocial web we've built. Those with larger loudspeakers should be conscious of this fact, and moderate their output responsibly. It starts by not telling people what to do.
I've found awesome use cases for quick prototyping. It saves me days when I can just describe the final step and iterate on it backwards to perfection and showcase an idea.
> Writing code is no longer needed for the most part.
Said by someone who spent his career writing code, it lacks a bit of detail... a more correct way to phrase it is: "if you're already an expert in good coding, now you can use these tools to skip most of the code writing".
LLMs today are mostly a kind of "fill-in-the-blanks automation". As a coder, you try to create constraints (define types for typechecking constraints, define tests for testing constraints, define the general ideas you want the LLM to code, because you already know the domain and how coding works), then you let the model "fill in the blanks", and you regularly check that all tests pass, etc.
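Concretely, that workflow might look like this (a minimal sketch in TypeScript with vitest; `slugify` and its spec are invented for illustration): you pin down the signature and the executable spec, let the model propose the body, and keep it only if the typechecker and the tests agree.

```typescript
import { describe, expect, it } from "vitest";

// 1. Type constraint: the signature the model's output must typecheck against.
export function slugify(title: string): string {
  // 2. The "blank": the body you let the LLM fill in, then review.
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// 3. Test constraints: an executable spec any proposed fill must pass.
describe("slugify", () => {
  it("lowercases and hyphenates", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });
  it("strips punctuation and stray separators", () => {
    expect(slugify("  --AI, hype?  ")).toBe("ai-hype");
  });
});
```

The constraints do reviewing work that prose instructions can't: a wrong fill either fails to compile or fails the suite.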
"Die a hero or live long enough to see yourself become the villain"
AI is both a near-perfect propaganda machine and, on the programming front, a self-fulfilling prophecy: yes, AI will be better at coding than humans, mostly because humans are made worse by using AI.
Another one of these sickening pieces, framing opposition to an expensive tech that doesn't work as "anti". I tried letting the absolute newest models write C++ again today: GPT-5.1 and Opus 4.5. A single function with two or fewer input parameters, a nice return value, doing simple geometry with the glm library. Yes, the code worked. But I took as long fixing the weird parts as it would have taken me to write it myself. And I still don't trust the result, because reviewing is so much harder than writing.
There's still no point. ReSharper and clang-tidy still have more value than all LLMs. It's not just hype, it's a bloody cult, right beside the NFT and church-of-COVID people.
> LLMs are going to help us to write better software
No, I really don't think they will. Software has only been getting worse, and LLMs are accelerating the rate at which incompetent developers can pump out low-quality code they don't understand and can't possibly improve.
We are 5 years in... it's fine to be sceptical. The model advancements are in the single digits now. It's not on us that they promised the world 3 years ago. It's fine, and will be fine, for the next few years. A real breakthrough is at least another 5 years away, and if it comes, everything you do now will be obsolete. Nobody will need or care about the dude that Sloperatored Claude Code on release, and that's the reality everyone who goes full AI evangelist needs to understand. You are just a stopgap. The knowledge you are accumulating now is just worthless transitional knowledge. There is no need for FOMO: there is nothing hard about operating LLMs for coding, and it will only get easier by the day.
The anti-AI hype, in the context of software development, seems to focus on a few things:
> AI code is slop, therefore you shouldn't use it
You should learn how to responsibly use it as a tool, not a replacement for you. This can be done, people are doing it, people like Salvatore (antirez), Mitchell (of Terraform/Ghostty fame), Simon (swillison) and many others are publicly talking about it.
> AI can't code XYZ
It's not all-or-nothing. Use it where it works for you, don't use it where it doesn't. And btw, do check that you actually described the problem well. Slop-in, slop-out. Not sayin' this is always the case, but turns out it's the case surprisingly often. Just sayin'
> AI will atrophy your skills, or prevent you from learning new ones, therefore you shouldn't use it
Again, you should know where and how to use it. Don't tune out while doing coding. Don't just skim the generated code. Be curious, take your time. This is entirely up to you.
> AI takes away the fun part (coding) and intensifies the boring (management)
I love programming but TBH, for non-toy projects that need to go into production, at least three quarters of the work is boring boilerplate. And making that part interesting is one of the worst things you can do in software development! Down that path lie resume-driven development, architecture astronautics, abuse of the design patterns du jour, and other sins that will make maintenance of that thing a nightmare! You want boring, stable, simple. AI excels at that. Then you can focus on the tiny bit that's fun and hand-craft that!
Also, you can always code for fun. Many people with boring coding jobs code for fun in the evenings. AI changes nothing here (except possibly improving the day job drudgery).
> AI is financially unsustainable, companies are losing money
Perhaps, and we're probably in a bubble. That doesn't detract from the fact that these things exist, are here now, and work. OpenAI and Anthropic can go out of business tomorrow; the few TB of weights will be easily reused by someone else. The tech will stay.
> AI steals your open source code, therefore you shouldn't write open-source
Well, use AI to write your closed-source code. You don't need to open source anything if you're worried someone (AI or human) will steal it. If you don't want to use something on moral grounds, that's a perfectly fine thing to do; others may have a different opinion on this.
> AI will kill your open source business, therefore you shouldn't write open-source
Open source is not a business model (I've been saying this for longer than median user of this site has been alive). AI doesn't change that.
As @antirez points out, you can use AI or not, but don't go hide under a rock and then be surprised in a few years when you come out and find the software development profession completely unrecognizable.
The end run around copyright is TOS forced on users through distribution channels (platforms), service providers, and actual "patented" hardware, so money will continue to flow up, not sideways. Given that there are a very limited number of things that can actually be done with computers/phones, and that "AI" can evidently arrange those in any possible configuration, the rest is deciding whether it will jibe with users, and noticing when it doesn't - which I believe AI will be unable to discern from other AI slop imitating actual users.
If a Middle Eastern human affects my way of life, they're a terrorist, but if a corporation does it, I should learn to live with it. Licenses and laws are for peasants.
I love Antirez.
> However, this technology is far too important to be in the hands of a few companies.
This is the most important assessment and we should all heed this warning with great care.
If we think hyperscalers are bad, imagine what happens if they control and dictate the entire future.
Our cellphones are prisons. The entire internet and all of technology could soon become the same.
We need to bust this open now or face a future where we are truly serfs.
I'm excited by AI and I love what it can do, but we are in a mortally precarious position.
The shift isn't about replacing programmers; it's about changing what programming means: from writing every line to designing, guiding, and validating. Excited to see how open source and small teams can leverage this without being drowned by centralization.
I wonder whether, being a literal AI sci-fi author, antirez acknowledges that there's possible bias and a willingness to extrapolate here? That said, I respect his work immensely and I do put a lot of weight on his recommendations. But I'd really prefer the hype fog that's clouding the signal [for me] to dissipate a bit - maybe economic realities will sort this out soon.
There's also a short-termism aspect of AI-generated code that's seemingly not addressed as much. Don't pee your pants in the winter to keep warm.
Not even antirez can sway the skeptics here. People that have garnered too many upvotes in the countless comments about how worthless AI is compared to real programmers will need much more to leave their fortresses.
But maybe we should cherish these people. Maybe it's among them that we find the embryo of the resistance: people who held out when most of us were seduced - seduced into giving the machine all our knowledge, all our skills, all the secrets about ourselves we were not even aware of - and setting it up to be orders of magnitude more intelligent than all of us combined. And finally, just as mean, vindictive, and selfish as most of the people in the training data on which it was trained.
Maybe it's good to stay skeptical a bit longer.
The paragraph that starts with this sentence:
> However, this technology is far too important to be in the hands of a few companies.
I wholeheartedly agree 1000%. Something needs to change this landscape in the US.
Furthermore, open source models being dominated entirely by China is also problematic.