The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be that you needed to plan 10 steps ahead because refactoring was expensive; now people just focus on the next problem ahead, but the compounding AI slop will blow up eventually.
Just work on more ambitious projects?
I miss hard thinking people.
Cognitive debt lies ahead for all of us.
I think it's just another abstraction layer, and moves the thinking process from "how do I solve this problem in code?" to "how do I solve this problem in orchestration?".
I recently used the analogy of when compilers were invented. Old-school coders wrote machine code, and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer, and we started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient programs. And for years they were right.
The hard problems now are "how can I get a set of non-deterministic, fault-prone LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There are a few generic solutions and a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.
Give the AI less responsibility but more work. Immediate inference is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, type signatures, etc., it can reduce my second-by-second work significantly while taking little of my cognitive agency.
These are also tasks the AI can succeed at rather trivially.
Better completions are not as sexy, but amid all the pretending that agents are great engineers, they're an amazing feature that often gets glossed over.
Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of the tests can be configured conservatively, relative to the AI of the day. Warnings can just be flags in the editor spotting obvious mistakes. Off-by-one errors, for example, which might otherwise go unnoticed for a while, would be achievable and valuable to flag.
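As a made-up illustration of the kind of mistake such a flag could catch (the function and tests here are hypothetical, not taken from any actual tool):

```python
def last_n(items: list[str], n: int) -> list[str]:
    """Return the last n items of a list."""
    # The naive `return items[-n:]` silently breaks for n == 0, since
    # items[-0:] is items[0:], i.e. the whole list: exactly the sort of
    # boundary mistake an always-on editor flag could point out.
    return items[-n:] if n > 0 else []

# Short, conservatively scoped tests an AI could suggest for one-click adding:
assert last_n(["a", "b", "c", "d"], 2) == ["c", "d"]
assert last_n(["a", "b"], 0) == []
```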
Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.
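In sketch form, that plumbing could look something like this; `ask_llm` is a purely hypothetical stand-in for whatever model client you'd wire up:

```python
import traceback

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for an actual LLM client call."""
    raise NotImplementedError("wire up your model API of choice here")

def run_and_explain(fn):
    """Run fn; on failure, hand the raw traceback to the model to parse."""
    try:
        return fn()
    except Exception:
        log = traceback.format_exc()
        print(ask_llm("Explain the likely root cause of this traceback:\n" + log))
        raise
```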
...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.
These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.
reading this made me realize i used to actually think hard about bugs and design tradeoffs because i had no choice
That’s funny, cause I feel the opposite: LLMs can automate, in a sloppy fashion, building the first trivial draft. But what remains is still thinking hard about the non-trivial parts.
Well, thinking hard is still there if you work on hard abstract problems. I keep thinking very hard, even though 4 CCs pump out code while I do it. Besides, being a Garry Kasparov playing on several boards takes thinking.
With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.
I'm more spent than before, when I would spend 2 hours wrestling with Tailwind classes, or testing API endpoints manually by typing JSON shapes myself.
The ziphead era of coding is over. I'll miss it too.
yes, but you solved problems already solved by someone else. how about something that hasn't been solved, or hasn't even been noticed yet? that gives the greatest satisfaction
Man, this resonates SO MUCH with me. I have always loved being confronted with a truly difficult problem. And I always had that (obviously misguided, but utterly motivating) feeling that, with enough effort, no problem could ever resist me. That it was just a matter of grinding a bit further, a bit longer.
This is why I am so deeply opposed to using AI for problem solving I suppose: it just doesn’t play nice with this process.
Would like to follow your blog, is there an rss feed?
If you feel this way, you aren't using AI right.
For me, Claude, Suno, Gemini and AI tools are pure bliss for creation, because they eliminate the boring grunt work. Who cares how to implement OAuth login flow, or anything that has been done 1000 times?
I do not miss doing grunt work!
Honestly I think the "thinking hard" part is still there, it just shifted. Instead of thinking hard about implementation details, I'm now spending more time thinking about what I actually want to build and why.
The debugging experience also changed - when code doesn't work, you can't just step through the logic you wrote, because you didn't write it. You have to understand someone else's (the AI's) logic. That's still thinking hard, just differently.
What I miss more is the satisfaction of solving a tricky problem myself. Sometimes I deliberately don't use AI for stuff just to get that feeling back.
Every day man... Thinking hard on something is a conscious choice.
Skill issue
If you don't have to think, then what you're building isn't really newsworthy.
So, we have an inflation of worthless stuff being done.
Rich Hickey and the Clojure folks coined the term Hammock Driven Development. It was tongue in cheek but IMO it is an ideal to strive towards.
Maybe this is just me, but I don't miss thinking so much. I personally quite like knowing how to do things and being able to work productively.
For me it's always been the effort that's fun, and I increasingly miss that. Today it feels like I'm playing the same video game I used to enjoy with all the cheats on, or going back to an early level after maxing out my character. In some ways the gameplay is the same, same enemies, same map, etc., but the action itself lacks the depth that comes from the effort of playing without cheats or with a weaker character and completing the stage.
What I miss personally is coming up with something in my head and having to build it with my own fingers with effort. There's something rewarding about that which you don't get from just typing "I want x".
I think this craving for effort is a very human thing, to be honest... It's why we bake bread at home instead of just buying it from a local bakery that realistically will be twice as good. The enjoyment comes from the effort. I personally like building furniture, and although my furniture sucks compared to what you might be able to buy at a store, it's so damn rewarding to spend days working on something and then have a real physical thing that you can use, that you built by hand.
I've never thought of myself as someone who likes the challenge of coding. I just like building things. And I think I like building things because building things is hard. Or at least it was.
You don't have to miss it: buy a differential equations book and do one per day. Play chess on hard mode. I mean, there are so many ways to make yourself think hard daily; this makes no sense.
It's like saying I miss running. Get out and run then.
“We now buy our bread… it comes sliced… and sure you can just go and make your sandwich and it won’t be a rustic, sourdough that you spent months cultivating. Your tomatoes will be store bought not grown heirlooms. In the end… you have lost the art of baking bread. And your sandwich making skills are lost to time… will humanity ever bake again with these mass factories of bread? What have we lost! Woe is me. Woe is me.”
> I have tried to get that feeling of mental growth outside of coding
A few years before this wave of AI hit, I got promoted into a tech lead/architect role. All of my mental growth since then has been learning to navigate office politics and getting the 10k ft view way more often.
I was already telling myself "I miss thinking hard" years before this promotion. When I build stuff now, I do it with a much clearer purpose. I have sincerely tried the new tools, but I'm back to just using google search if anything at all.
All I did was prove to myself the bottleneck was never writing code, but deciding why I'm doing anything at all. If you want to think so hard you stay awake at night, try existential dread. It's an important developmental milestone you'd have been forced to confront anyway even 1000 years ago.
My point is, you might want to reconsider how much you blame AI.
At the day job there was a problem with performance when loading data in an app.
7 months later, after waffling on it on and off, with and without AI, I finally cracked it.
The author is not wrong though: I don't hit this as often since AI. I do miss the feeling though.
I think AI didn't do this. Open source, libraries, cloud, frameworks and agile conspired to do this.
Why solve a problem when you can import a library / scale up / use managed Kubernetes / etc.?
The menu is great, and problems needing deep thought seem rare.
There might be deep thought problems on the requirements side of things but less often on the technical side.
The author obviously isn't wrong. It's easy to fall into this trap, and it does take willpower to get out of it. And the AI (Christ, I'm going to sound like they paid me) can actually be a tool to get there.
I was working for months on an entity resolution system at work. I inherited its basic algorithm: Locality Sensitive Hashing. Basically, breaking a word up into little chunks and comparing the chunk fingerprints to see which strings matched(ish). But it was slow, blew up memory constraints, and was full of false negatives (didn't find matches).
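In toy form, the chunking idea looks something like this (illustrative only; the real system used proper LSH with hashing and banding to avoid all-pairs comparison, which this sketch skips):

```python
def shingles(s: str, n: int = 3) -> set[str]:
    """The 'little chunks': the set of lowercase character n-grams of s."""
    s = s.lower()
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two chunk sets; strings 'match(ish)' above some threshold."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

print(jaccard(shingles("Acme Corporation"), shingles("ACME Corp")))   # fairly high
print(jaccard(shingles("Acme Corporation"), shingles("Globex Inc")))  # near zero
```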
Of course I had Claude dig through this to help me, and it would find things, and have solutions super fast to problems where I couldn't immediately comprehend from its diff how it got there.
But here are a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode, not lazy mode:
1. Everyone wants one-shot solutions. Instead, do the opposite: focus on fixing one small step at a time, so you have time to grok what the frig just happened.
2. Instead of asking Claude for code immediately, ask for more architectural thoughts. Not Claude "plans", but choices. "Claude, this SQL model is slow and grows out of our memory box. What options are on the table to fix this?" Then go back and forth getting the pros and cons of the fixes. Don't just ask "make this faster". Of course this is the slower way to work with Claude, but it will get you to a solution you understand more deeply, and it avoids the hallucinations where it decides "oh, just add WHERE 1!=1 to your SQL and it will be super fast".
3. Sign yourself up to explain what you just built. Not just to get through a code review, but to give a lunch-and-learn teaching others how the algorithms or code you just wrote actually work. You better believe you will force yourself to internalize the stuff Claude came up with so easily. I gave multiple presentations, all over our company and to our acquirers, on how this complicated thing worked. I HAD TO UNDERSTAND. There's no way I could show up and say "I have no idea why we wrote that algorithm that way".
4. Get Claude to teach it to you over and over and over again. If you spot a thing you don't really know yet, like what the hell this algorithm is doing, make it show you in agonizingly slow detail how the concept works. Didn't sink in? Do it again. And again. Ask it for the five-year-old explanation. Yes, we have a super smart, overconfident, naive engineer here, but we also have a teacher we can berate with questions, one who never tires of trying to teach us something, no matter how stupid we can be or sound.
Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I actually, personally invented things within it.
Dude, I know you touched on this, but seriously: just don't use AI then. It's not hard; it's your choice to use it or not. It's not even making you faster, so the pragmatism argument doesn't really work well! This is a totally self-inflicted problem that you can undo any time you want.
I specifically spend my evenings reading Hegel and other hard philosophy as well as writing essays just to force myself to think hard
I think I miss my thinking..
Why not think hard about what to build instead of how to build it?
There's an irony here -- the same tools that make it easy to skim and summarize can also be used to force deeper thinking. The problem isn't the tools, it's the defaults.
I've found that the best way to actually think hard about something is to write about it, or to test yourself on it. Not re-read it. Not highlight it. Generate questions from the material and try to answer them from memory.
The research on active recall vs passive review is pretty clear: retrieval practice produces dramatically better long-term retention than re-reading. Karpicke & Blunt (2011) showed that practice testing outperformed even elaborative concept mapping.
So the question isn't whether AI summarizers are good or bad -- it's whether you use them as a crutch to avoid thinking, or as a tool to compress the boring parts so you can spend more time on the genuinely hard thinking.
Great, so does that mean it is time to vibe code our own alternatives to everything, such as the Linux kernel, because the AI is surely 'smarter' than all of us?
Seen a lot of DIY vibe coded solutions on this site and they are just waiting for a security disaster. Moltbook being a notable example.
That was just the beginning.
To me, thinking hard involves the following steps:
1. Take a pen and paper.
2. Write down what we know.
3. Write down where we want to go.
4. Write down our methods of moving forward.
5. Make changes to 2, using 4, and see if we are getting closer to 3. And course correct based on that.
I still do it a lot. LLMs act as an assist. Not as a wholesale replacement.
"Before you read this post, ask yourself a question: When was the last time you truly thought hard? ... a) All the time. b) Never. c) Somewhere in between."
What?
Why not find a subfield that is more difficult and requires some specialization then?
I think hard all the time, AI can only solve problems for me that don't require thinking hard. Give it anything more complex and it's useless.
I use AI for the easy stuff.
Instant upvote for a Philipp Mainländer quote at the end. He's the OG "God is Dead" guy, and Nietzsche was reacting (very poorly) to Mainländer and other pessimists like Schopenhauer when he followed up with his own, shittier version of "God is dead".
Please read up on his life. Mainländer is the most extreme/radical philosophical pessimist of them all. He wrote a whole book about how you should rationally kill yourself, and then he killed himself shortly after.
https://en.wikipedia.org/wiki/Philipp_Mainl%C3%A4nder
https://dokumen.pub/the-philosophy-of-redemption-die-philoso...
Max Stirner and Mainländer would have been friends and are kindred spirits philosophically.
https://en.wikipedia.org/wiki/Bibliography_of_philosophical_...
> Yes, I blame AI for this.
Just don't use it. That's always an option. Perhaps your builder doesn't actually benefit from an unlimited runway detached from the cost of effort.
> I tried getting back in touch with physics, reading old textbooks. But that wasn’t successful either. It is hard to justify spending time and mental effort solving physics problems that aren’t relevant or state-of-the-art
I tried this with physics and philosophy. I think I want to do a mix of hard but meaningful. For academic fields like that, it's impossible for a regular person to do as a hobby. Might as well just do puzzles or something.
I mean, I spent most of my career being pressured to move from type 3 to either of the other 2, so I don't blame AI for this (it doesn't help, though, especially if you delegate too much to it).
man, setting up worktrees for parallelized agentic coding is hard, and setting up containerized worktrees so you can run with dangerous permissions on without nuking the host system is hard too
deciding whether to use that to work on multiple features on the same code base, or the same feature in multiple variations is hard
deciding whether to work on a separate project entirely while all of this is happening is hard and mentally taxing
planning all of this up for a few hours and watching it go all at once autonomously is satisfying!
Every time I try to use LLMs for coding, I completely lose touch with what they're doing; they do everything wrong and can't seem to correct themselves no matter how many times I explain. It's so frustrating just trying to get them to do the right thing.
I've resigned myself to mostly using them for "tip-of-my-tongue" style queries, i.e. "where do I look in the docs". Especially for Apple platforms, where almost nothing is documented except for random WWDC video tutorials that lack associated text articles.
I don't trust LLMs at all. Everything they make, I end up rewriting from scratch anyway, because it's always garbage. Even when they give me ideas, they can't apply them properly. They have no standards, no principle. It's all just slop.
I hate this. I hate it because LLMs give so many others the impression of greatness, of speed, and of huge productivity gains. I must look like some grumpy hermit, stuck in their ways. But I just can't get over how LLMs all give me the major ick. Everything that comes out of them feels awful.
My standards must be unreasonably high. Extremely, unsustainably high. That must also be the reason I hardly finish any projects I've ever started, and why I can never seem to hit any deadlines at work. LLMs just can't reach my exacting, uncompromising standards. I'm surely expecting far too much of them. Far too much.
I guess I'll just keep doing it all myself. Anything else really just doesn't sit right.
another AI blame/praise/adapt.. you definitely didn't think hard about this one, did you
The article is interesting. I don't know how I feel about it, though I'm both a user of AI (no choice anymore in the current job environment) and vaguely alarmed by it; I'm in the camp of those who fear for the future of our profession, and I know the counterarguments but I'm not convinced.
A couple of thoughts.
First, I think the hardness of the problems most of us solve is overrated. There is a lot of friction, tuning things, configuring things right, reading logs, etc. But are the problems most of us are solving really that hard? I don't think so, except for those few doing groundbreaking work or sending rockets to space.
Second, even thinking about easier problems is good training for the mind. There's that analogy that the brain is a "muscle", and I think it's accurate. If we always take the easy way out for the easier problems, we don't exercise our brains, and then when harder problems come up what will we do?
(And please, no replies of the kind "when portable calculators were invented...").
I get it, and somehow I also agree with the division (thinker/builder), but I feel this is only a reflection of a new society where fewer humans are necessary to think deeply. No offense here; it's just my own unsatisfied brain trying to adapt to a whole new era.
I refer to it as "Think for me SaaS", and it should be avoided like the plague. Literally, it will give your brain a disease we haven't even named yet.
It's as if I woke up in a world where half of the restaurants worldwide started changing their name to McDonald's and gaslighting all their customers into thinking McDonald's is better than their "from scratch" menu.
Just don't use these agentic tools; they legitimately are weapons whose target is your brain. You can ship just as fast with autocomplete and decent workflows, and you know it.
It's weird; I don't understand why any self-respecting dev would support these companies. They are openly hostile about their plans for the software industry (and many other verticals).
I see it as a weapon being used by a sect of the ruling class to diminish the value of labor. While I'm not confident they'll be successful, I'm very disappointed in my peers who are cheering them on in that mission. My peers are obviously being tricked by promises of being able to join that class, but that's not what's going to happen.
You're going to lose that thinking muscle, and therefore the value of your labor is going to be directly correlated with the quantity and quality of tokens you can afford (or be given, or loaned!?)
Be wary!!!
I am one of those junior software developers who always struggled with starting their own projects. Long story short, I realized that my struggle stems from my lack of training in open-ended problems, where there are many ways to go about solving something, and while some ways are better than others, there's no clear cut answer because the tradeoffs may not be relevant with the end goal.
I realized that when a friend of mine gave me Factorio as a gift last Christmas, and I found myself facing the exact same resistance I face while thinking about working on my personal projects. To be more specific, it's the fear of, and urge toward, closing the game and leaving it "for later" the moment I discover that I've either done something wrong or that new requirements have appeared that will force me to change the way my factories connect with each other (or even their placement). Example: Tutorial 4 introduces players to research and labs, and this feeling appears when I realize that green science requires me to introduce all sorts of spaghetti just to create the materials needed for it!
So I did what any AI user would do and opted to use ChatGPT to push through the parts where things are overwhelming, uncertain, too open-ended, or anything in between. The result works, because the LLM has been trained on Factorio guides, and it goes as far as suggesting layouts to save me some headache!
Awesome, no? Except all I've done is outsource the decision of how to go about "the thing" to someone else. And while it's true I could've done this even before LLMs by simply watching a YouTube video guide, the LLM's help doesn't stop there: it can alleviate my indecisiveness and frustration with open-ended problems in personal projects, recommend a project structure, and generate bullet-pointed lists so I can pretend I work for a company where someone else writes the spec and I just follow it step by step, like a good junior software engineer would.
And yet all I did was postpone the inevitable exercise of a very useful mental habit: navigating uncertainty, pausing and reflecting, planning, evaluating a trade-off or 2 here and there. And while there are other places and situations where I can exercise that behavior, the fact remains that my specific use of LLMs removed that weight from my shoulders. I objectively became someone who builds his project ideas and makes progress in his Factorio playthrough, but the trade-off is that I remain the same person who will duck and run the moment resistance appears, and succumb to the urge to either push "the thing" to tomorrow or ask ChatGPT for help.
I cannot imagine how someone would claim that removing an exercise from my daily gym visit will not result in weaker muscles. There are so many hidden assumptions in such statements, and an excessive focus on results in "the new era where you should start now or be left behind", where nobody is thinking about how this affects the person and how they ultimately function in their daily lives across multiple contexts. It's all about output, output, output.
How far are we from the day when people will say "well, you certainly don't need to plan a project, a factory layout, or even decide; just have ChatGPT summarize the trade-offs, read the bullet points, and choose"? We're off-loading a portion of the research AND a portion of the execution, thinking we'll surely still be activating the neurosynapses in our brains that retain habits, just like someone who lifts 50% lighter weights at the gym expects to maintain muscle mass or burn fat.
In reality, us thinkers will have to find other things to think about. Maybe not right now, but in the not-too-distant future we'll have to find other things that make us think, and that scratch the bit of our brains that itches for learning new stuff and thinking hard about it.
It might be difficult to figure out what that is, and some folks will fail at it. It might not be code though.