It is nice to hear someone who is so influential just come out and say it. At my workplace, the expectation is that everyone will use AI in their daily software dev work. It's a difficult position for those of us who feel that using AI is immoral due to the large-scale theft of the labor of many of our fellow developers, not to mention the many huge data centers being built and their need for electricity, which pushes up prices for people who need to, ya know, heat their homes and eat.
Now y'all finally know what it's like to be vegetarian (I'm not one). So many parallels. And they're expected to keep relatively quiet about it and not scream about things like
> Raping the planet, spending trillions on toxic, unrecyclable equipment while blowing up society
Because screaming anything like that immediately gets them treated as social pariahs. Even though it applies even more strongly to modern industrialized meat consumption than to AI usage.
Overton window and all that.
I truly don’t understand this tendency among tech workers.
We were contributing to natural resource destruction in exchange for salary and GDP growth before GenAI, and we’re doing the same after. The idea that this has somehow 10x’d resource consumption or emissions or anything is incorrect. Every single work trip that requires you to get on a plane is many orders of magnitude more harmful.
We’ve been compromising on those morals for our whole career. The needle moved just a little bit, and suddenly everyone’s harm thresholds have been crossed?
They expect you to use GenAI just like they expected accountants to learn Excel when it came out. This is the job; it has always been the job.
I’m not an AI apologist. I avoid it for many things. I just find this sudden moral outrage by tech workers to be quite intellectually lazy and revisionist about what it is we were all doing just a few years ago.
I don't feel it's immoral, I just don't want to use it.
I find it easier to write the code and not have to convince some AI to spit out a bunch of code that I'll then have to review anyway.
Plus, I'm in a position where programmers will use AI and then ask me to help them sort out why it didn't work. So I've decided I won't use it, and I will not waste my time figuring out why other people's AI slop doesn't work.
Do you apply the same standards when you, say, buy a phone?! Never gonna buy an iPhone because we know how and by whom they are made? Never going to use any social media apps because… well, you see where this is going? You seem to be randomly putting your foot down on the “issue du jour”…
Copying isn’t theft, and it’s DEFINITELY not theft of labor.
Then again, you already knew this because we’ve been pointing it out to the RIAA and MPAA and the copyright cartels for decades now.
It is my personal opinion that attempts to reframe AI training as criminal are in bad faith, and come from the fact that AI haters have no legitimate claim of damages that would give them any say in the matter of AI training, which harms no one.
Now that it’s a convenient cudgel in the anti-AI ragefest, people have reverted to parroting the MPAA’s ideology from the 2000s. You wouldn’t download a training set!
... not to mention that most of the time, what AI produces is unmitigated slop and factual mistakes, deliberately coated in dopamine-infusing brown-nosing. I refuse to let my position, even my profession, be debased into that of an AI slop reviewer.
I use AI sparingly, extremely distrustfully, and only as a (sometimes) more effective web search engine (it turns out that associating human-written documents with human-asked questions is an area where modeling human language well can make a difference).
(In no small part, Google has brought this tendency on themselves, by eviscerating Google Search.)