Hacker News

I believe there are entire companies right now under AI psychosis

892 points by reasonableklout yesterday at 8:26 PM | 384 comments

https://xcancel.com/mitchellh/status/2055380239711457578

https://hachyderm.io/@mitchellh/116580433508108130


Comments

nunez yesterday at 9:54 PM

Welcome to the club, Mitchell! Pizza's to the right.

In all seriousness...well, yeah. AI is a monkey's paw, and that's how monkey paws work. So many movies and books warned us!

slopinthebag yesterday at 9:17 PM

I have a ton of respect for Mitchell - I didn't really know who he was until Ghostty, but his writings and viewpoints on AI seem really grounded and make the most sense to me. Including this one.

Many people on this forum are suffering under this same psychosis.

LAC-Tech yesterday at 10:23 PM

I am really looking for more reasoned approaches to AI.

I am very close to using it as a pair programmer, but with me actually coding. I am just so tired of fixing its mistakes.

daneel_w yesterday at 10:43 PM

I work for a small telecom services provider whose current VP immediately set an AI course when stepping on board 6 months ago. Involving AI in everything and every task is now our first priority - across all employee segments, not just us system developers - and leadership is embarking on a program to measure employees' AI usage levels as a means to gauge everyone's individual efficiency. It's like the era of the evangelical crypto bros all over again.

mattgreenrocks yesterday at 9:25 PM

The only way many people learn that the stove is hot is by burning their hands on it.

Let them.

Apocryphon yesterday at 11:00 PM

Make the most of it. Their delusion is your opportunity.

gverrilla yesterday at 10:44 PM

'AI psychosis' is a slop concept.

topherPedersen yesterday at 10:49 PM

Hype & greed are a hell of a drug

gregjor today at 2:05 AM

Psychosis means inability to distinguish the real from the not real -- delusion. I don't think the article describes that, at least not in a literal or clinical sense. The author lifted a term usually applied to people who fall in love with chatbots and applied it to the context of software developers not understanding AI coding tools, and the limitations of those tools.

AI coding swept over the software industry faster than most previous trends. OOP and its predecessor "structured programming" took a lot longer. Agile and XP got traction fairly quickly but still took longer than AI -- and met with much of the same kind of resistance and dire predictions of slop and incompetence.

AI tools have led to two parallel delusions: The one Mitchell Hashimoto describes, and the notion that we (programmers) knew how to produce solid, reliable, useful, maintainable code before AI slop came along. As always with tools that give newbs, juniors, and managers some leverage (real or imagined), we -- programmers -- get upset and react to the threat with dire warnings. We talk about "technical debt" and "maintainability" and "scalability."

In fact the large majority of non-trivial software projects fail to even meet requirements, much less deliver maintainable code with no tech debt. Most programmers don't know how to write good code for any measure of "good." Our entire industry looks more like a decades-long study of the Dunning-Kruger effect than a rigorous engineering discipline. If we knew how to write reliable code with no tech debt we could teach that to LLMs, but instead we reliably get back the same kind of mediocre code the LLMs trained on (ours), only the LLMs piece it together faster than we can.

With 50 years in the business behind me, and several years of mocking and dismissing AI coding whenever someone brought it up, I got dragged into it by my employer. And then I saw that with guidance, a critical eye, reasonably good specs, and guardrails, it performed just as well as, and sometimes more thoroughly than, me and almost all of the people I have worked with during my career. It writes better code and notices mistakes, regressions, and edge cases better than I can (at least in any reasonable amount of time).

AI coding tools only have to perform better -- for whatever that means to an organization -- than the median programmer. If we set the bar at "perfect" they of course fail, but so do we. We always have. Right now almost all of the buggy, insecure, ugly, confusing software I use came from teams of human programmers who didn't use AI. That will quickly change, and then I can blame the bugs and crashes and data losses and downtime on AI -- we all can -- but let's not pretend we're really losing ground with these tools, or that we could all, as an industry, do better than the LLMs, because all experience shows that we can't.

phoebe_builds today at 1:30 AM

[flagged]

openclawclub today at 1:08 AM

[flagged]

taffydavid yesterday at 9:16 PM

This post calls out how you can't argue with these people, because they say it's fine to ship bugs since the agents will fix them so quickly and at a scale humans can't match.

The top reply is from someone doing exactly that, arguing "but the agents are so fast!"

singpolyma3 yesterday at 10:02 PM

This is... not what psychosis means? Being wrong is not psychosis.

hoppp yesterday at 11:26 PM

Pointing out the obvious.

A lot of companies have been under AI psychosis for years and will be forever.

squirrelon today at 12:27 AM

[flagged]

bolangi yesterday at 9:23 PM

When war psychosis is not enough....

panavm yesterday at 9:37 PM

[flagged]

vivianzhe today at 1:17 AM

[flagged]

zombiwoof yesterday at 9:42 PM

[dead]

jgbuddy yesterday at 9:21 PM

[flagged]

klashn yesterday at 9:38 PM

First DEI, then COVID, then Ukraine, then AI. The US always needs its three-to-five-year mass psychosis, and then it moves on to the next shiny object. Many people and corporations get rich in each cycle.

AI exacerbates the problem since vulnerable tech people develop individual AI psychosis and participate in the mass psychosis.

Companies have figured out that no other population group is as gullible as tech people (they were instrumental in pushing all of the above four issues), so they exploit it again and again.

senordevnyc yesterday at 9:06 PM

Assuming he’s right, I don’t see how that constitutes “psychosis”, as opposed to this being yet another of a billion examples of companies jumping on a bandwagon / cargo cult and then learning they took it too far.

And also, he might not be right. But the good news is, we’ll all get to find out together!

selectively yesterday at 9:19 PM

I do not believe 'AI psychosis' is an actual thing.

elevation yesterday at 9:16 PM

Mitchell aches because his career has been solving broadly scoped problems by building a collection of thoughtful primitives for others to extend. LLMs seem to do the opposite but at great speed, and it hurts to watch.

woeirua yesterday at 9:13 PM

This doesn’t constitute AI psychosis. His argument is that we need to retain understanding of the systems we use, but there’s no compelling argument as to why that is the case. (I get that people are going to be offended by that statement, but agents are already better than the average software engineer. I don’t see why we need to fight this, except for economic insecurity caused by mass layoffs.)

It all just feels like horse drawn carriage operators trying to convince automobile drivers to stop driving.
