I feel your pain.
Today I'm forcing myself to learn SwiftUI and type each character with my own hands, and there's a part of me asking, "Why are you wasting your time instead of prompting it and getting the UI you want in minutes?" Well, even if I use AI, I must know the domain I'm operating in to create good products instead of useless slop. Even though I've been coding for 20 years now, I still need to be humble to grow in anything new. I can vibecode full apps, but I'm not gonna pretend that my experience isn't playing a massive role in guiding the models.
Don't let AI take away your joy of building stuff; it's totally fine not being "productive" and taking your time. Just force yourself to take, at least, two AI-free days every week.
Your brain literally changes when you stop using it the way you used to. So, yeah. It is.
> With coding, I've been using AI entirely for a year or two. I've been entirely prompting and I haven't written a single line of code. I have mostly forgotten how to code
I've been using AI coding tools a lot lately, though I'm always in the loop. I write most of the important code by hand, but I like to send Claude Code or Codex off to try to come up with a solution in parallel to compare.
Having reviewed so much of my hand-written code side by side with AI-written alternatives, I am still amazed that anyone admits to letting AI write all of their code. Either you're working on much simpler problems than I am, or you don't really care about anything other than making the tests go green and waiting for bug reports to come back so you can feed them back into the LLM again.
Sometimes the coding tools come back with better ideas than I came up with. Sometimes my idea is much better. Most often with medium- to high-complexity problems, when the AI comes up with a working solution, at best it has enough problems that an attentive human reviewer would have rejected it. At worst, it creates a mess of spaghetti code with maintenance time bombs ticking away. And that's for one change. I can't imagine what a codebase would look like if you completely deferred to AI tools for everything.
This quote is even weirder because they claim to have been doing this for two years! Two years ago, coding tools were much worse than they are today. Using AI to write all of your code 2 years ago would have been a weird choice.
When I read posts like this I don't know what to think. Is this real? Or is it exaggerated for effect?
I also roll my eyes a little bit at the idea that not writing code for 1-2 years means you forget how to code. I've been back and forth between 100% management and 100% IC in my career. While there is a warm-up time to get back into coding, you should not completely forget how to code after such a short time. The only reason this person feels like they've forgotten how to code is that they've made a choice not to code for 2 years and, apparently, they don't feel like making any effort to change this. For someone who claims to love writing code, I don't get it. Something doesn't make sense about this writing.
I would argue the last 20 years of app development is what made people dumb.
During the "don't make me think" era of software design, if you wanted to make software you got really good at identifying the use case and using design thinking to optimize the paths to goal. You could make a business around a very narrow set of flows. The only thinking a user had to do was pick The App for That. They never had to think about how they want to approach their task, which is a skill in itself.
AI isn't like that. There's a million ways to use it. That's a big part of what makes it cool, but it requires the user to thoughtfully approach their workflows. Not everyone is used to doing that.
This has such an "American who reads The Atlantic without scrutiny and gets manipulated" vibe.
It is making me feel less dumb when I use it to get Linux admin things done, because 1) it gets things wrong and I have to help it, and 2) even though I would have gotten frustrated and given up without AI, it shows me that Linux has gotten way out of hand for administration. Wheels have been reinvented and conventions have been changed for no good reason, or because of https://xkcd.com/927/
using AI to red-team your thoughts and assumptions is the fastest way to get smart since the dawn of time
> AI is making me dumb
We'll have AGI not because AI is getting smarter, but because we are getting dumber.
Can a programmer who has no personal attachment to the slop see it all the way to production and keep it maintained?
Reflecting on the comments, I did this to myself. I should have titled this "I used AI to make me dumb".
Before I ask AI to write anything, I prepare a plan. I was very positively surprised when I noticed Plan mode in Codex recently. It makes me feel that maybe others are doing the same, and that's why they added it. Anyway, I start with the plan, then ask AI to do just one step.
If coding a new feature, I do one step and check the code: running git diff, reading the changes, or just asking Codex to show me the changes.
If writing an article, I ask for only one paragraph. I read the paragraph, and if it is OK, I accept it; if it doesn't reflect my thoughts, I keep working on that one paragraph.
If doing data analysis with AI, I do one step of the analysis and ask AI to display intermediate results so I can see whether everything is going in a good direction and there are no hallucinations; additionally, I have follow-up prompts for the AI to verify its results. If all looks good, then I continue to the next step.
I don't like the situation where I ask AI to do all the code changes, or a whole article, or a whole data analysis in one pass with one prompt. It is simply impossible to check whether the AI is correct, and the results are not satisfactory. You can easily see this when asking AI to write a deep article with one prompt: you clearly see that it doesn't reflect your thoughts.
Maybe step-by-step is the way to use AI without feeling dumber.
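The one-step review loop for code changes described above could be sketched as a shell session; the throwaway repo and the file name `app.py` here are purely illustrative:

```shell
# Set up a disposable repo so the example is self-contained.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "print('v1')" > app.py
git add app.py
git -c user.email=you@example.com -c user.name=you commit -qm "step 0"

# ...the AI performs one small step of the plan...
echo "print('v2')" > app.py

# Review that single step before accepting it and moving on.
git diff --stat   # quick summary: which files changed, by how much
git diff          # the full patch, read line by line
```

Only after the diff looks right would the step be committed and the next one requested.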
When it comes to writing I only use AI for writing technical documentation or basic page copy, never for anything that is purely writing or for communication like a blog post, or a paper, or an email, or a letter, or a book, or even marketing copy.
Basically, I only use it for things I wouldn't otherwise have the time to write but that aren't important enough to be written by me.
I actually can’t fathom using it for writing as a matter of principle. To me it’s just a keyboard extension for code generation, never a replacement for the written word, which should be in my voice, fully a stream coming from my mind, something I have full editorial awareness and memory of.
Now that I think about it I’m a snob in this regard, I turn my nose up at people that use AI to write things that are purely written, in my mind using it for writing is defeating the purpose of writing!
We need to separate our emotions from these things. I understand why people don't like AI, or are fearful of it, but we need to have good faith arguments about it. Not this. These articles are just cope.
What a bizarre article. He laments the use of AI and then hopes that it might cause a flood of programmers.
Perhaps my ego is preventing me from becoming too addicted to LLMs. It's not that I think the tools are incapable. Rather, LLMs are probably far more capable than me in nearly every programming metric that matters.
However, if I were to release a solution that I 'vibe-coded' into the wild, then I would feel quite a bit of shame if someone figured out that I used an LLM to write the entire thing. I know it may come off as a bit silly, but it is a feeling I cannot seem to shake. A feeling that prevents me from wanting to adopt the technology in full force because... well, I did not truly create the software if AI did all the work. Sure, the software might have been my idea, but that does not bring me much fulfillment.
I know programming is just a means to an end, but I feel like I have put in a lot of hard work over the past decade and a half just to barely scratch the surface of mediocrity. I was attracted to this field because I saw a sense of beauty in computer science (and programming). It felt like one of the few remaining options for a creative job that was spared from the cutthroat nature of a career in the arts.
Like the Samurai class during the early industrialization of Japan, maybe it's time for me to lay down my sword too.
I try to compensate for skills atrophy with LeetCode problems and katas, but the plain reality is that real software engineering, the kind that requires you to absorb a problem and make it intimate, is just not there.
The work rhythm has ballooned, and as every coworker is now pushing work (generally mediocre, but acceptable thanks to strong codebase fundamentals and to them being good engineers), it is increasingly becoming a rat race of who delivers more. Companies don't even need to promote AI productivity, because engineers being engineers will engineer the minimum effort required to deliver as much output as makes stakeholders happy.
I am less and less fond of this work.
I'm sure there will be people with different experiences, but I've never worked as much as I have in the last two years, and I'm burned out. I genuinely feel I've regressed as an engineer, and I see the same in my coworkers, some of them contributors to the highest-impact OSS projects you can think of.
Every day, I lean more and more toward changing industries.
I love code and programming and solving product problems. But the job has changed dramatically.
If the pay+comfort ratio weren't that good, I would've done it already.
It's hard to give up 6-7k+ net per month in the southern Mediterranean. I'm way better off financially than most US devs making even more; there's no comparison.
AI use and low confidence are correlated with lower ownership and with deferral of critical thinking.
Based on the MIT and MSFT studies.
My company has 3 AIs on every pull request now. They behave as follows:
1. a general coding AI: Completely broken. Should auto-comment, but never does anymore. Stopped a while back, nobody seems to know why.
2. another general AI: You have to at-chat it. It reacts to the message with <eyes emoji>, but never actually posts a comment?
3. a security bot: It comments, when it thinks there's a problem, in the most obtuse way possible: "SAST findings". But the findings are behind a link, and none of us devs are given access.
I could lean on and press the various people shoving AI down my gullet to like … look at this, and the actual lived experience of devs trying to derive productivity from this mess? But IDK what's in it for me, really.
Even Claude, when it worked, would comment in the most sociopathic manner possible: an English prose description of the problem, attached to an utterly unrelated line of code. Part of that is probably GitHub, which doesn't let you attach comments to arbitrary lines of code in a review; only the blesséd lines can have comments. Literally none of our AIs can format their complaint as a freaking suggested change (i.e., the GitHub feature); no, instead I get English prose.
Honestly for all I know we failed to pay the bill or something inane, but it would be nice if the AI could format an error message, or something.
Missed opportunity to name the article “AI make me dumb dumb”.
I feel lucky cause I started dumb. Unintentional level-up!
also, chatting with AI makes me impatient and delusional.
AI is not making anyone dumb. It lacks the volition and agency to do any such thing. People are making themselves dumb by choosing to, and how to, use AI.
AI has been the best learning tool I have ever used, and it's not even close. I've learned more in the past year than I did in the previous 5.
I was talking to some friends about this over drinks the other day. I feel it has the same effects as any drug (or behavior) that triggers dopamine. If I can get a dopamine hit from lower-effort AI in 10 minutes, versus maybe a tiny bit better a hit from doing it myself after a day, why would my brain go for anything but AI? Especially when my DIY muscles are a bit atrophied.
And of course the hedonic treadmill (if that's even valid any more, IDK) has reset the baseline so that anything less than the quick gratification feels like nothing. It makes the stuff I used to absolutely love feel like more of a chore compared to just cranking out features with code only an AI can love.
AI is keeping me on my toes. Many people in my org are experiencing the Dunning-Kruger effect after being armed with AI and are making such new and spectacular messes that I've had no other choice but to ratchet up governance controls. Improving documentation didn't help. The few people who read it complain to me when it is contradicted by AI.
Firstly, I salute the author for saying these things. We know the feeling of criticizing AI, and certainly I criticize it a lot too, but when it comes to personal matters or how I am using AI, there are some things I shy away from saying online, and I wonder whether other people feel the same way.
So, for example, AI once deleted my project. I was able to recover it, but I lost version control through a series of mistakes, and IMO I lost a good version. (I think after abandoning that project and coming back, I was able to accomplish it.)
Another example, the one that is biting me the most, is that I wanted to create a copy.sh/v86-based thing where you are able to edit the .img files of distros and save them all within the browser. I was able to run v86 in a custom way, but I wasn't able to mount the images or find a proper way to make it work.
And although this is just an optional project, and I just thought hey, it would be fun to edit .img files in the browser, it now leaves me disappointed.
I think that disappointment is both frustration at the thing not working and the realization that I might be dropping the idea altogether. Now, I must admit this is a field I have absolutely no expertise in, but still, it feels disappointing, and I have kept thinking about it for some time now.
I wonder how many people feel that way when AI is unable to make their project: frustrated, disappointed, even a sort of panic. I think it's just wrong how damn much we are relying on LLMs at this point. It feels like the whole economy is doing what I am doing, but with billions of dollars.
Another thing I feel is that young and elderly people are much the same when vibe-coding. (Yes, specs can help, but LLMs are still autocorrect on steroids.) I feel we are both forsaking the junior developers and forsaking the expertise built up by senior developers as we replace them with these LLMs.
It's just exposing the dumb
Aggressively red-teaming your own work with LLMs is a good habit to get into. Prompts like “I’ve been told to find the flaws in this argument/presentation/code file/etc.” It doesn’t save any time, but it is pretty educational, as long as you go back and forth a lot. It can fall into a style-disagreement loop between two equivalent code blocks, since it will try to find something wrong if instructed to do so, which is interesting.
If you don’t do this constantly, LLMs can certainly lead you right down the Dunning-Kruger path (though that’s a big oversimplification of a whole collection of psychological features, from idée fixe to narcissism to fear of failure/criticism). If you really work at getting the LLM into the proper state, it will happily rip your work apart in a rather cruel and indifferent manner, like an unsympathetic corporate gatekeeper who relishes exposing your flaws in a public setting. Debate club is another tactic that’s a bit less harsh: you have the LLM flip back and forth between defense and prosecution of your work.
I think this should be the default setting, but it doesn’t encourage engagement, the average customer will think the LLM is a mean jerk if it starts off like that.
Skill issue.
God damn, this nail gun is making me lazy; it's like I don't have to swing the hammer any more...
Most people, given a nail gun, can't build a house. That's where the skill is...
I'm not someone whose validation comes from the lines of code, but from the resulting working system.
I enjoy using and orchestrating agents a lot to build software, but I have never really had the desire to replace my writing with LLMs. I don't write a whole lot, so maybe I just don't have enough writing to do to make it appealing, but my emails, blog posts, comments, whatever are the last thing I want to automate. Not only because it's less personal, but because I'm so tired of reading AI cruft myself. So much more text in tickets than there needs to be, for example.
And how are people forgetting to code by using LLMs? Do they just mean they forgot the syntax of a particular language? Or forgot how to architect features or how the development lifecycle works?
I've mostly used LLMs to build more complex things that would have been a lot to manage previously, or to build something completely new and learn how it works. I feel like I've only become a better engineer (and programmer too) because of LLMs.