Related: Rob Pike got spammed with an AI slop "act of kindness" - https://news.ycombinator.com/item?id=46394867
> And by the way, training your monster on data produced in part by my own hands, without attribution or compensation.
> To the others: I apologize to the world at large for my inadvertent, naive if minor role in enabling this assault.
this is my position too, I regret every single piece of open source software I ever produced
and I will produce no more
A tad uncalled for, don't you think?
strong emotions, weak epistemics .. for someone with Pike’s engineering pedigree, this reads more like moral venting .. with little acknowledgment of the very real benefits AI is already delivering ..
I’m in tears. This is so refreshing. I look forward to more chimpouts from Googlers LMAO
Hear hear
I'm not claiming he is mainly motivated by this, but it's a fact that his life's work will become moot over the next few years as all programming languages become redundant - at least as the healthy multiplicity of approaches we have at present. It's quite possibly at least a subconscious factor in his resentment.
I expect this to be an unpopular opinion and take no pleasure in noting it - I've coded since I was a kid, but that era is nearly over.
When I read Rob's work and learn from it, and make it part of my cognitive core, nobody is particularly threatened by it. When a machine does the same it feels very threatening to many people, a kind of theft by an alien creature busily consuming us all and shitting out slop.
I really don't know if in twenty years the zeitgeist will see us as primitives that didn't understand that the camera is stealing our souls with each picture, or as primitives who had a bizarre superstition about cameras stealing our souls.
Yes, this reads as a massive backhanded compliment. But as u/KronisLV said, it's trendy to hate on AI now. In the face of something many in the industry don't understand, that is mechanizing away a lot of labor, and that clearly isn't going away, there is a reaction that is not positive or even productive but somehow destructive: this thing is trash, it stole from us, it's a waste of money, it destroys the environment, etc. ... therefore it must be "resisted."

Even with all the underhanded work, the means-ends logic of OpenAI and the other major companies developing the technology, there is still no point in trying to stop it. There was a group of people who tried to stop the mechanical loom because it took work away from weavers, took away their craft - we call them Luddites. But now it doesn't take weeks and weeks to produce a single piece of clothing. Everyone can easily afford to dress themselves. Society became wealthier.

These LLMs, at the very least, let anyone learn anything and start any project on a whim. They let people create things in minutes that used to take hours. They are "creating value," even if it's "slop," even if it's not carefully crafted. Them's the breaks - we'd all like our clothing hand-woven if it made any sense. But even in a world where one had the time to sit down and weave one's own clothing, to carefully write out each and every line of code, it would only be harmful to take these new machines away, to disable them just because we are afraid of what they can do. The same technology that created the atom bomb also created the nuclear reactor.
“But where the danger is, also grows the saving power.”
He worked well-paying jobs, probably travels, has a car and a house, and complains about toxic products, etc.
Yes, there has to be a discussion about this, and yeah, he might generally have the right mindset, but let's be honest here: none of them would have developed any of it for free.
We are all slaves to capitalism
and this is where my point comes in: extremely fast and massive automation around the globe might be the only thing pushing us close enough to the edge that we all accept capitalism's end.
And yes, I think it is still massively beneficial that my open source code helped create something which allows researchers to write better code faster and more easily, pushing humanity forward. Or that enables more people overall to gain access to writing code, or to the results of what writing code produces: tools, etc.
@Rob: it's spam, that's it. Get over it. You are rich, and your riches did not come out of thin air.
Can't really fault him for having this feeling. The value proposition of software engineering is completely different past the latter half of 2025, so I guess it is fair for pioneers of the past to feel a little left behind.
468 comments... guys, guys, this is a Bluesky post! Have we not learned that anyone who self-exiled to Bluesky is wearing a "don't take me seriously" badge for our convenience?
He gets very angry about things. I remember arguing over how Go is a meme language because the syntax is really stupid and wrong.
e.g., replacing logical syntax like "int x" with "var x int", which is much harder for both machines and humans to process and offers no benefits whatsoever.
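For readers who haven't written Go, a minimal sketch of the declaration order being criticized here - name before type, the reverse of C (whether it is actually harder to parse is contested; the Go designers have argued the opposite):

```go
package main

import "fmt"

func main() {
	// Go puts the name before the type: "var x int"
	// where C would write "int x".
	var x int = 42

	// The short form drops the keyword and the type entirely,
	// inferring the type from the initializer.
	y := x * 2

	fmt.Println(x, y) // prints: 42 84
}
```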
It sucks and I hate it but this is an incredible steam engine engineer, who invented complex gasket designs and belt based power delivery mechanisms lamenting the loss of steam as the dominant technology. We are entering a new era and method for humans to tell computers what to do. We can marvel at the ingenuity that went into technology of the past, but the world will move onto the combustion engine and electricity and there’s just not much we can do about it other than very strong regulation, and fighting for the technology to benefit the people rather than just the share price.
There's a lot of irony in this rant. Rob was instrumental in developing distributed computing and cloud technologies that directly contributed to the advent of AI.
I wish he had written something with more substance. I would have been able to understand his points better than a series of "F bombs". I've looked up to Rob for decades. I think he has a lot of wisdom he could impart, but this wasn't it.
I don’t understand why anyone thinks we have a choice on AI. If America doesn’t win, other countries will. We don’t live in a utopia, and getting the entire world to behave a certain way is impossible (see COVID). Yes, AI videos and spam are annoying, but the cat is out of the bag. Use AI where it’s useful and get with the programme.
The bigger issue everyone should be focusing on is the growing hypocrisy and the overly puritan viewpoints of people who think they are holier and more righteous than everyone else. That’s the real plague.
From a quick read it seems pretty obvious that the author doesn’t speak English as a native language. You can tell because some of the sentences are full of grammatical errors (i.e. probably written by the author) and some are not (probably AI-assisted).
My guess is they wrote a thank you note and asked Claude to clean up the grammar, etc. This reads to me as a fairly benign gesture, no worse than putting a thank you note through Google Translate. That the discourse is polarized to a point that such a gesture causes Rob Pike to “go nuclear” is unfortunate.
This reads like a mid-life crisis. A few rebuttals:
1. Yes, humans cause enormous harm. That’s not new, and it’s not something a single technology wave created. No amount of recycling or moral posturing changes the underlying reality that life on Earth operates under competitive, extractive pressures. Instead of fighting it, maybe try to accept it and make progress in other ways?
2. LLMs will almost certainly deliver broad, tangible benefits to ordinary people over time, just as previous waves of computing did. The Industrial Revolution was dirty, unfair, and often brutal, yet it still lifted billions out of extreme poverty in the long run. Modern computing followed the same pattern. LLMs are a mere continuation of this trend.
Concerns about attribution, compensation, and energy use are reasonable to discuss, but framing them as proof that the entire trajectory is immoral or doomed misses the larger picture. If history is any guide, the net human benefit will vastly outweigh the costs, even if the transition is messy and imperfect.
From my point of view, many programmers hate Gen AI because they feel like they've lost a lot of power. With LLMs advancing, they go from kings of the company to normal employees. This is not unlike many industries where some technology or machine automates much of what they do and they resist.
For programmers, they lose the power to command a huge salary writing software and to "bully" non-technical people around in the company.
Traditional programmers are no longer some of the highest paid tech people around. It's AI engineers/researchers. Obviously many software devs can transition into AI devs but it involves learning, starting from the bottom, etc. For older entrenched programmers, it's not always easy to transition from something they're familiar with.
Losing the ability to "bully" business people inside tech companies is a hard pill to swallow for many software devs. I remember the CEO of my tech company having to bend the knee to keep the software team happy, both so they wouldn't leave and because he had no insight into how the software was written. Meanwhile, he had no problem overwhelming business folks in meetings. Software devs always talked to the CEO with confidence because they knew something he didn't: the code.
When a product manager can generate a highly detailed and working demo of what he wants in 5 minutes using gen AI, the traditional software developer loses a ton of power in tech companies.
/signed as someone who writes software