I have a ton of respect for Mitchell - I didn't really know who he was until Ghostty, but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.
Many people on this forum are suffering under this same psychosis.
I am really looking for more reasoned approaches to AI.
I am very close to using it as a pair programmer, but with me actually coding. I am just so tired of fixing its mistakes.
I work for a small telecom services provider whose current VP immediately set an AI course when stepping on board 6 months ago. Involving AI in everything and every task is now our first priority - across all employee segments, not just us system developers - and leadership is embarking on a program to measure employees' AI usage levels as a means to gauge everyone's individual efficiency. It's like the era of the evangelical crypto bros all over again.
The only way many people learn that the stove is hot is by burning their hands on it.
Let them.
Make the most of it. Their delusion is your opportunity.
'AI psychosis' is a slop concept.
Hype & greed are a hell of a drug
Psychosis means inability to distinguish the real from the not real -- delusion. I don't think the article describes that, at least not in a literal or clinical sense. The author lifted a term usually applied to people who fall in love with chatbots and applied it to the context of software developers not understanding AI coding tools, and the limitations of those tools.
AI coding swept over the software industry faster than most previous trends. OOP and its predecessor "structured programming" took a lot longer. Agile and XP got traction fairly quickly but still took longer than AI -- and met with much of the same kind of resistance and dire predictions of slop and incompetence.
AI tools have led to two parallel delusions: the one Mitchell Hashimoto describes, and the notion that we (programmers) knew how to produce solid, reliable, useful, maintainable code before AI slop came along. As always with tools that give newbs, juniors, and managers some leverage (real or imagined), we -- programmers -- get upset and react to the threat with dire warnings. We talk about "technical debt" and "maintainability" and "scalability."
In fact the large majority of non-trivial software projects fail to even meet requirements, much less deliver maintainable code with no tech debt. Most programmers don't know how to write good code for any measure of "good." Our entire industry looks more like a decades-long study of the Dunning-Kruger effect than a rigorous engineering discipline. If we knew how to write reliable code with no tech debt we could teach that to LLMs, but instead we reliably get back the same kind of mediocre code the LLMs trained on (ours), only the LLMs piece it together faster than we can.
With 50 years in the business behind me, and several years of mocking and dismissing AI coding whenever someone brought it up, I got dragged into it by my employer. And then I saw that with guidance, a critical eye, reasonably good specs, and guardrails, it performed just as well as, and sometimes more thoroughly than, me and almost all of the people I have worked with during my career. It writes better code and notices mistakes, regressions, and edge cases better than I can (at least in any reasonable amount of time).
AI coding tools only have to perform better -- for whatever that means to an organization -- than the median programmer. If we set the bar at "perfect," they of course fail, but so do we. We always have. Right now almost all of the buggy, insecure, ugly, confusing software I use came from teams of human programmers who didn't use AI. That will quickly change, and I can blame the bugs and crashes and data losses and downtime on AI -- we all can -- but let's not pretend we're really losing ground with these tools, or that we could all, as an industry, do better than the LLMs, because all experience shows that we can't.
This post calls out how you can't argue with these people, because they say it's fine to ship bugs since the agents will fix them so quickly and at a scale humans can't match.
The top reply is from someone doing exactly that, arguing "but the agents are so fast!"
This is... not what psychosis means. Being wrong is not psychosis.
Pointing out the obvious.
A lot of companies have been under AI psychosis for years and will be forever.
When war psychosis is not enough....
First DEI, then COVID, then Ukraine, then AI. The US always needs its three to five years mass psychosis and then moves to the next shiny object. Many people and corporations get rich in each cycle.
AI exacerbates the problem since vulnerable tech people develop individual AI psychosis and participate in the mass psychosis.
Companies have figured out that no other population group is as gullible as tech people (they were instrumental in pushing all of the above four issues), so they exploit it again and again.
Assuming he’s right, I don’t see how that constitutes “psychosis”, as opposed to this being yet another of a billion examples of companies jumping on a bandwagon / cargo cult, and then learning they took it too far.
And also, he might not be right. But the good news is, we’ll all get to find out together!
Mitchell aches because his career has been solving broadly scoped problems by building a collection of thoughtful primitives for others to extend. LLMs seem to do the opposite but at great speed, and it hurts to watch.
This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)
It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.
Welcome to the club, Mitchell! Pizza's to the right.
In all seriousness... well, yeah. AI is a monkey's paw, and that's how monkey's paws work. So many movies and books warned us!