I'm pretty sure he's talking about companies and people outsourcing their decision making and thinking to AI and not really about using AI itself.
I don't think using AI to write code is AI psychosis or bad at all, but if you just prompt the AI and believe what it tells you, then you have AI psychosis. You see this a lot with financial people and VCs on Twitter. They literally post screenshots of ChatGPT as their thinking and reasoning about a topic instead of doing even a little bit of thinking themselves.
These things are dog shit when it comes to ideas, thinking, or providing advice, because they are pattern matchers: they are just going to give you the pattern they see. Most people see this if they just try to talk to one about an idea. They often just spit out the most generic dog shit.
This is, however, pretty useful for certain tasks where pattern matching is actually beneficial, like writing code, but again, you just can't let it do the thinking and decision making.
The way I put this to myself is that AI gives “correct correct answers and incorrect correct answers”.
They almost always generate logically correct text, but sometimes that text rests on a set of implicit assumptions and decisions that aren't valid for the use case.
Generating a correct correct solution requires proper definition of the problem, which is arguably more challenging than creating the solution.
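A minimal sketch of an "incorrect correct answer" (a toy example of mine, not the parent's): the code is internally consistent and runs fine, but it quietly makes decisions nobody asked for.

    from datetime import datetime

    def parse_date(s: str) -> datetime:
        # Logically correct, and exactly what an LLM might hand you unprompted.
        # Implicit decision 1: inputs are US-style month/day/year.
        # Implicit decision 2: "04/05/2024" means April 5th, never May 4th.
        # Implicit decision 3: bad input should raise, not return a default.
        # None of these were stated in the problem; any may be wrong for your use case.
        return datetime.strptime(s, "%m/%d/%Y")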
when you outsource thinking to AI, you get that magical speedup. the agent is making decisions for you, so things move at agent speed. it often makes decisions without telling you, and vetting the final "here's the plan" output requires understanding the problem at great depth, which means returning to human speed, so you skim and just approve.
the trick is to be mindful, aware, and deliberate about which decisions are being outsourced. this requires slowing down, losing that absurd 10x vibe coding gain. in exchange, you're more "in-the-loop" and accumulate less cognitive debt.
find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.
make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.
tell the agent to halt on ambiguity.
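a hedged sketch of what "encode them into specs" can look like (the names and domain here are hypothetical, not a standard): pin down the boundary, the key data structures, and the error handling before the agent writes a line.

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass(frozen=True)
    class ExportRequest:
        user_id: str   # opaque id, never an email -- PII constraint decided up front
        format: str    # "csv" or "json" -- a hard boundary the agent may not widen

    class Exporter(Protocol):
        def export(self, req: ExportRequest) -> bytes:
            """Raises ValueError on an unknown format, PermissionError on auth
            failure. Error handling is enumerated here, not invented by the agent.
            If anything in this contract is ambiguous: halt and ask, don't guess."""
            ...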
a good engineer will get a 2x or 3x speedup without the downsides.
I wonder how different this is from having companies let Fortune or Inc magazine do their thinking for them.
Or random consultants.
Is "AI said it was a good idea" and worse than "we were following industry trends"?
Several people I know have already gone through phases like this. When you're doing it alone, there's a moderating factor: friends and family start calling you out on your behavior or the weird things you say.
I can't imagine how bad it would be if your employer's leadership started doing this. You'd be pressured to get on board for fear of getting fired. Nobody would be trying to moderate your thinking except the coworkers who disagree with it, and those people are going to leave or be fired. If you want to keep your job, you have to play along.
> if you just prompt the AI and believe what it tells you then you have AI psychosis
This is the right definition. LLM outputs have undefined truth value. They're mechanized Frankfurtian bullshitters. Which can be valuable! If you have the tools or taste to filter the things that happen to be true from the rest of the dross.
However! We need a nicer word for it. Suggesting someone has “AI psychosis” feels a bit too impolitic.
Maybe we reclaim “toked out” from our misspent youths?
e.g. “This piece feels a little toked out. Let’s verify a few of Claude’s claims”
He uses AI himself, so I agree he doesn't see AI use as black/white.
Hard agree about ideas, thinking, advice. AI's sycophancy is a huge, subtle problem. I've tried my best to create a system prompt to guard against it w/ Opus 4.7. It doesn't adhere to it 100% of the time, and the longer the conversation goes, the worse the sycophancy gets (because the system instructions effectively carry less and less weight). I have to actively look for and guard against sycophancy whenever I chat w/ Opus 4.7.
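Not the exact prompt, but the kind of standing instruction I mean looks something like this:

    # A sketch, not the verbatim prompt -- the gist of an instruction
    # that pushes back on sycophancy.
    ANTI_SYCOPHANCY = """
    Do not open with praise or agreement. Evaluate ideas on their merits.
    If my claim is wrong or unsupported, say so directly and explain why.
    When you do agree, state the strongest counterargument anyway.
    Never mirror my stated conclusion back to me as validation.
    """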
> if you just prompt the AI and believe what it tells you then you have AI psychosis. You see this a lot with financial people and VCs on Twitter
I'm seeing it with lawyers, too. Like, about law. (Just not in their subject matter.) To the point that I had a lawyer using Perplexity to disagree with actual legal advice I got from a subject-matter expert.
I digress; this article has actually helped me identify useful knowledge gaps around topics I have researched. https://drensin.medium.com/elephants-goldfish-and-the-new-go...
You have to think about things objectively no matter what, but when I start researching topics like physics, using AI as suggested in that article has proven very useful.
I didn’t think just offloading your thinking to AI was AI psychosis.
To me, AI psychosis is the handful of friends I’ve had who have done things like hold a full-on mourning session when a model updates because they lost a friend/lover; the one guy who won’t speak to his family directly but has them talk to ChatGPT first and then has ChatGPT generate his response; or the two who are confident that, through their conversations with the models, they have discovered that physics and mathematics are incorrect and uncovered the truth of reality.
But language is a shared technology so maybe the term is being used for less egregious behavior than I was using it for.
I agree with you, except it isn't even good at writing code. Almost every time that you get an LLM to write a bunch of code for you, it has mistakes in it. The logic isn't right, the API calls aren't right, the syntax isn't right (!). That problem hasn't yet been fixed and it looks as though it never will be. That means that every line of code it generates, you have to review, because even if 95% of the code is correct, you need to find the 5% which isn't. But if you have to do that, it becomes slower than just writing the code yourself. As people have pointed out over and over again: typing in the code was never the part that took time. So I don't agree that LLMs are really useful for writing code.
> companies and people outsourcing their decision making and thinking to AI
It's so interesting how easy it is to steer LLMs, via context, toward whatever conclusion you want to engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".
So part of the psychosis is when these people unknowingly steer their LLM toward their own conclusions and biases, which then get magnified and solidified. It's gonna end in disaster.
Correct. I use AI a ton and I'm having more fun every day than I ever did before thanks to it (on average, highs are higher, lows are lower). Your characterization is all very accurate. Thank you.
Here are some other things I've written about it:
- https://mitchellh.com/writing/my-ai-adoption-journey
- https://mitchellh.com/writing/building-block-economy
- https://mitchellh.com/writing/simdutf-no-libcxx (complex change thanks to AI, shows how I approach it rationally)