I don't get it.
I think just as hard, I type less. I specify precisely and I review.
If anything, all that's changed is that we're working at a higher level. The product is the same.
But these people just keep mixing things up, like "wow, I got a Ferrari now, watch it fly off the road!"
Yeah, so you got a tool upgrade; it's faster, it's more powerful. Keep it on the road or give up driving!
We went from auto completing keywords, to auto completing symbols, to auto completing statements, to auto completing paragraphs, to auto completing entire features.
Because it happened so fast, people feel the need to rename programming every week. We're either vibe coders now, or agentic coders, or ... or just programmers, hey. You know why? I write in C, I get machine code, and I didn't write the machine code! It was all an abstraction!
Oh but it's not the same you say, it changes every time you ask. Yes, for now, it's still wonky and janky in places. It's just a stepping stone.
Just chill, it's programming. The tools just got even better.
You can still jump on a camel and cross the desert in 3 days. Have at it, you risk dying, but enjoy. Or you can just rent a helicopter and fly over the damn thing in a few hours. Your choice. Don't let people tell you it isn't travelling.
We're all Linus Torvalds now. We review, we merge, we send back. And if you had no idea what you were doing before, you'll still have no idea what you're doing today. You just fat-finger fewer typos than ever before.
Thinking hard and fast with positive results is like a drug. Ah, those were good and rewarding days in my past; I'd jump back into that work framework any time (that was running geological operations in an unusually agile oil exploration programme).
You know, I was expecting what the post would say and was prepared to dunk on it and just tell them to stop using AI then, but the builder/thinker division they presented got me thinking. The observation that AI/vibe coding fulfills the builder, not the thinker, made me realize that I'm basically 100% thinker, 0% builder, and that's why I don't really care at all about AI for coding.
I'll spend years working on a from-scratch OS kernel or a Vulkan graphics engine or whatever other ridiculous project, which never sees the light of day, because I just enjoy the thinking / hard work. Solving hard problems is my entertainment and my hobby. It's cool to eventually see results in those projects, but that's not really the point. The point is to solve hard problems. I've spent decades on personal projects that nobody else will ever see.
So I guess that explains why I see all the AI coding stuff and pretty much just ignore it. I'll use AI now as an advanced form of Google, and also as a last-ditch effort to get some direction on bugs I truly can't figure out, but otherwise I just completely ignore it. But I guess there are other people, the builders, for whom AI is a miraculous thing, and they're going to crazy lengths to adopt it in every workflow and have it do as much as possible. Those 'builder' types of people are just completely different from me.
It’s possible to be both.
The last time I had to be a Thinker was because I was in Builder mode. I’ve been trying to build an IoT product, but I’ve been in wayyyy over my head because I knew maybe 5% of what I needed to be successful. So I would get stuck many, many times, for days or weeks at a time.
I will say though that AI has made the difference in the last few times I got stuck. But I do get more enjoyment out of Building than Thinking, so I embrace it.
As someone who's been coding for several decades now (i.e. I'm old), I find the current generation of AI tools very ... freeing.
As an industry, we've been preaching the benefits of running lots of small experiments to see what works vs what doesn't, trying out different approaches to implementing features, and so on. Pre-AI, lots of these ideas never got implemented because they'd take too much time for no definitive benefit.
You might spend hours thinking up cool/interesting ideas, but not have the time available to try them out.
Now, I can quickly kick off a coding agent to try out any hare-brained ideas I might come up with. The cost of doing so is very low (in terms of time and $$$), so I get to try out far more and weirder approaches than before when the costs were higher. If those ideas don't play out, fine, but I have a good enough success rate with left-field ideas to make it far more justifiable than before.
Also, it makes playing around with one-person projects a lot more practical. Like most people with a partner & kids, my down time is pretty precious, and tends to come in small chunks that are largely unplannable. For example, last night I spent 10 minutes waiting in a drive-through queue - that gave me about 8 minutes to kick off the next chunk of my one-person project development via my phone, review the results, then kick off the next chunk of development. Absolutely useful to me personally, whereas last year I would've simply sat there annoyed, waiting to be served.
I know some people have an "outsourcing Lego" type mentality when it comes to AI coding - it's like buying a cool Lego kit, then watching someone else assemble it for you, removing 99% of the enjoyment in the process. I get that, but I prefer to think of it in terms of being able to achieve orders of magnitude more in the time I have available, at close to zero extra cost.
I think the article has a point. There seem to be two reactions among senior engineers around me these days.
On one side, there are people who have become a bit more productive. They are certainly not "10x," but they definitely deliver more code. However, I do not observe a substantial difference in the end-to-end delivery of production-ready software. This might be on me and my lack of capacity to exploit the tools to their full extent. But, iterating over customer requirements, CI/CD, peer reviews, and business validation takes time (and time from the most experienced people, not from the AI).
On the other hand, sometimes I observe a genuine degradation of thinking among some senior engineers (there aren’t many juniors around, by the way). Meetings, requirements, documents, or technology choices seem to be directly copy/pasted from an LLM, without a grain of original thinking, many times without insight.
The AI tools are great, though. They give you an answer to the question. But many times, asking the correct question, and knowing when the answer is not correct, is the main issue.
I wonder if the productivity boost that senior engineers actually need comes from profiting from the accumulated knowledge found in books. I know it is an old technology and it is not fashionable, but I believe it is mostly unexploited if you consider the whole population of engineers :D
I haven't reduced my thinking! Today I asked AI to debug an issue. It came up with a solution that was clearly correct, but it didn't explain why the code was in that state. I kept steering the AI (which just wanted to fix) toward figuring out the why, and at some point it dug through git and GitHub issues in a very cool way. And finally it pulled out something that made sense. It was defensive programming introduced to fix an issue somewhere else, which was in turn also fixed, so now useless.
At that point an idea popped into my mind and I decided to look for similar patterns in the codebase related to the change. I found three: one was a non-bug, two were latent bugs.
Shipped a fix, plus two fixes for bugs yet to be discovered.
I'm using LLMs to code and I'm still thinking hard. I'm not doing it wrong: I think about design choices: risks, constraints, technical debt, alternatives, possibilities... I'm thinking as hard as I ever have.
One thing this discussion made me realize is that "thinking hard" might not be a single mode of thinking.
In grad school, I had what I'd call the classic version. I stayed up all night mentally working on a topology question about turning a 2-torus inside out. I already knew you can't flip a torus inside out in ordinary R^3 without self-intersection. So I kept moving and stretching the torus and the surrounding space in my head, trying to understand where the obstruction actually lived.
Sometime around sunrise, it clicked that if you allow the move to go through infinity (so effectively S^3), the inside/outside distinction I was relying on just collapses, and the obstruction I was visualizing dissolves. Birds were chirping, I hadn't slept, and nothing useful came out of it, but my internal model of space felt permanently upgraded. That's clearly "thinking hard" in the classic sense.
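Stated compactly, the fact I was circling is the standard genus-1 Heegaard splitting of the 3-sphere: the complement of an unknotted solid torus in S^3 is itself a solid torus, so the inside/outside asymmetry I was relying on in R^3 simply isn't there. In LaTeX-speak:

    % S^3 is two solid tori glued along their common boundary torus
    % (the genus-1 Heegaard splitting), so "inside" and "outside" are symmetric
    S^3 \;=\; V_1 \cup_{T^2} V_2,
    \qquad V_1 \;\cong\; V_2 \;\cong\; S^1 \times D^2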
But there's another mode I've experienced that feels related but different. With a tough Code Golf problem, I might carry it around for a week. I'm not actively grinding on it the whole time, but the problem stays loaded in the background. Then suddenly, in the shower or on a walk, a compression trick or a different representation just clicks.
That doesn't feel "hard" moment to moment. It's more like keeping a problem resident in memory long enough for the right structure to surface.
One is concentrated and exhausting, the other is diffuse and slow-burning. They're different phenomenologically, but both feel like forms of deep engagement that are easy to crowd out.
This resonates with me, but I quit programming about a decade ago, when we were moving from doing low-level coding to frameworks. It was no longer about figuring out the actual problem, but about figuring out how to get the framework to solve it, and that just didn't work for me.
I do miss hard thinking, I haven't really found a good alternative in the meantime. I notice I get joy out of helping my kids with their, rather basic, math homework, so the part of me that likes to think and solve problems creatively is still there. But it's hard to nourish in today's world I guess, at least when you're also a 'builder' and care about efficiency and effectiveness.
I will never not be upset at my fellow engineers for selling out the ONE thing that made us valuable and respected in the marketplace and trying to destroy software engineering as a career because "Claude Code go brrrrrr" basically.
It's like we had the means of production and more or less collectively decided "You know what? Actually, the bourgeoisie can have it, sure."
I've found that it's often useful to spend the time thinking about the way I would architect the code (down to a fair level of minutia) before letting the agent have a go.
That way my 'thinker' is satiated and also challenged - Did the solution that my thinker came up with solve the problem better than the plan that the agent wrote?
Then either I acknowledge that the agent's solution was better, giving my thinker something to chew on for the next time; or my solution is better which gives the thinker a dopamine hit and gives me better code.
I wrote a blog post about this as well: "Hard Things in Computer Science, and AI Aren't Fixing Them".
When I wrote nimja's template inheritance, I thought about it for multiple days until, during a train commute, it clicked and I had to get out my notebook and write it down, right there on the train. Then some months later I found out I had the same bug that jinja2 had fixed years ago. So I felt kinda like brothers in hard thinking :)
I feel tired working with AI much faster than I did when I used to code. Dunno if it's just that I don't really need to think much at all, other than keeping the broad plan in mind and keeping an eye out for a red flag in the transcript that we're headed in the wrong direction. I don't even bother reading the code anymore; since Opus 4.5 I haven't felt the need to.
Manually coding engaged my brain much more and somehow was less exhausting. It kinda feels like getting out of bed and doing something vs lazing around and ending up feeling more tired despite having done less.
Seems like a lot of people use AI to code on their private commercial IP and products. How are people not concerned with the fact that all these AI companies have the source code to everything? You're just helping them destroy your job. Code is not worthless; you cannot easily duplicate any complex project with equal features, quality, and stability.
To be honest, I do not quite understand the author's point. If he believes that agentic coding or AI has a negative impact on being a thinker, or prevents him from thinking critically, he can simply stop using it.
Why blame these tools if you can stop using them, and they won't have any effect on you?
In my case, my problem was often overthinking before starting to build anything. Vibe coding rescued me from that cycle. Just a few days ago, I used openclaw to build and launch a complete product via a Telegram chat. Now, I can act immediately rather than just recording an idea and potentially getting to it "someday later".
To me, that's evolutionary. I am truly grateful for the advancement of AI technology and this new era. Ultimately, it is a tool you can choose to use or not, rather than something that prevents you from thinking more.
"I don't want to have to write this for the umpteenth time" -- Don't let it even reach a -teenth. Automate it on the 2nd iteration. Or even the 1st if you know you'll need it again. LLMs can help with this.
Software engineers are lazy. The good ones are, anyway.
LLMs are extremely dangerous for us because they can easily become a "be lazy button". Press it whenever you want and get that dopamine hit -- you don't even have to dive into the weeds and get dirty!
There's a fine line between "smart autocomplete" and "be lazy button". Use it to generate a boilerplate class, sure. But save some tokens and fill that class in yourself. Especially if you don't want to (at your own discretion; deadlines are a thing). But get back in those weeds, get dirty, remember the pain.
We need to constantly remind ourselves of what we are doing and why we are doing it. Failing that, we forget the how, and eventually even the why. We become the reverse centaur.
And I don't think LLMs are the next layer of abstraction -- if anything, they're preventing it. But I think LLMs can help build that next layer... it just won't look anything like the weekly "here's the greatest `.claude/.skills/AGENTS.md` setup".
If you have to write a ton of boilerplate code, then abstract away the boilerplate in code (nondeterminism is so 2025). And then reuse that abstraction. Make it robust and thoroughly tested. Put it on github. Let others join in on the fun. Iterate on it. Improve it. Maybe it'll become part of the layers of abstraction for the next generation.
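To make that concrete, here's a minimal sketch in Python of the "abstract the boilerplate deterministically" idea (the decorator and names are made up for illustration):

    # Retry/logging boilerplate captured once as a reusable, testable
    # decorator, instead of being regenerated nondeterministically each time.
    import functools
    import logging
    import time

    log = logging.getLogger(__name__)

    def with_retries(attempts=3, delay=0.5):
        """Wrap a function with retry + logging boilerplate, written once."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                for attempt in range(1, attempts + 1):
                    try:
                        return fn(*args, **kwargs)
                    except Exception:
                        log.exception("%s failed (attempt %d/%d)",
                                      fn.__name__, attempt, attempts)
                        if attempt == attempts:
                            raise
                        time.sleep(delay * attempt)  # simple linear backoff
            return wrapper
        return decorator

    @with_retries(attempts=3)
    def fetch_report(url):
        ...  # the interesting, non-boilerplate part lives here

Every caller now gets retries from one audited place, and the abstraction itself can be unit-tested, iterated on, and shared.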
> the number of times I truly ponder a problem for more than a couple of hours has decreased tremendously
Isn't that a good thing? If you're stuck on the same problem forever, you're never going to get past it and move on to the next thing... /shrug
I miss the thrill of running through the semi-parched grasslands and the heady mix of terror, triumph, and trepidation as we close in on our meal for the week.
I miss entering a flow state when coding. When vibe coding, you are constantly interrupted and only think very shallowly. I never see anyone enter a flow state when vibe coding.
"Oh no, I am using a thing that no one is forcing me to use, and now I am sad".
Just don't use AI. The idea that you have to ship ship ship 10X ship is an illusion and a fraud. We don't really need more software.
I think harder because of AI.
I have to think more rigorously. I have to find ways to tie up loose ends, to verify the result efficiently, to create efficient feedback loops and define categorical success criteria.
I've thought harder about problems this last year than I have in a long time.
I’d been feeling this until quite literally yesterday, when I sort of just forced myself not to touch an AI and grappled with the problem for hours. I got myself all mixed up with trig and angles until I got a headache and decided to back off a lot of the complexity. I doubt I got everything right, and I’m sure I could’ve had a solution with near-identical outputs using an AI in a fraction of the time.
But I feel better for not taking the efficient way. Having to be the one making a decision at every step of the way, choosing the constraints and where I cut my losses on accuracy, has, I think, taught me more about the subject than even reading the literature would’ve directly stated.
People here seem to be conflating thinking hard and thinking a lot.
Most examples of “thinking hard” mentioned in the comments sound like thinking about a lot of stuff superficially instead of thinking about one particular problem deeply, which is what the OP is referring to.
I've missed the same thing since even before AI, because I've done far too much work that's simple but time-intensive. It's frustrating, and I miss problems that keep me up all night.
Reverse engineering is imo the best way of getting the experience of pushing your thinking in a controlled way, at least if you have the kind of personality where you are stubborn in wanting to solve the problem.
Go crack an old game or something!
My solution has been to lean into harder problems - even as side projects, if they aren't available at work.
I too am an ex-physicist used to spending days thinking about things, but programming is a gold mine, as it is adjacent to computer science. You can design a programming language (or improve an existing one), try to build a better database (or improve an existing one), or many other things that are quite hard.
The LLM is a good rubber duck for exploring the boundaries of human knowledge (or at least knowledge common enough to be in its training set). It can't really "research" on its own, and whenever you suggest something novel and plausible it gets sycophantic, but it can help you prototype ideas and implementation strategies quite fast, and it can help you explore how existing software works and tackles similar problems (or help you start working on an existing project).
I see the current generation of AI very much as a thing in between. Opus 4.5 can think and code quite well, but it cannot do these "jumps of insight" yet. It also struggles with straightforward, but technically intricate things, where you have to max out your understanding of the problem.
Just a few days ago, I let it do something that I thought was straightforward, but it kept inserting bugs, and after a few hours of interaction it said itself it was running in circles. It took me a day to figure out what the problem was: an invariant I had given it was actually too strong, and needed to be weakened for a special case. If I had done all of it myself, I would have been faster, and discovered this quicker.
For a different task in the same project I used it to achieve a working version of something in a few days that would have taken me at least a week or two to achieve on my own. The result is not efficient enough for the long term, but for now it is good enough to proceed with other things. On the other hand, with just one (painful) week more, I would have coded a proper solution myself.
What I am looking forward to is being able to converse with the AI in terms of a hard logic. That will take care of the straightforward but technically intricate stuff that it cannot do yet properly, and it will also allow the AI to surface much quicker where a "jump of insight" is needed.
I am not sure what all of this means for us needing to think hard. Certainly thinking hard will be necessary for quite a while. I guess it comes down to when the AIs will be able to do these "jumps of insight" themselves, and for how long we can jump higher than they can.
I believe the article is wrong in so many ways.
If you think too much you get into dead ends and you start having circular thoughts, like when you are lost in the desert and realise after two hours that you are in the same place again, because you have walked in a great circle (since one of your legs is dominant over the other).
The thinker needs feedback from the real world: constant testing of hypotheses against reality, or else you are dealing with ideology, not critical thinking. The thinker needs other people and the confrontation of ideas, so the ideas stay fresh and strong and do not stagnate in isolation and personal biases.
That was the most frustrating thing before AI: a thinker could think very fast, but was limited in testing by the ability to build. Usually she had to delegate it to people who were better builders, or else she had to be a builder herself, doing what she hates all the time.
You can't change the world, you can change yourself. Many people don't like change. So, people get frustrated when the world inevitably changes and they fail to adapt. It's called getting older. Happens to us all.
I'm not immune to that and I catch myself sometimes being more reluctant to adapt. I'm well aware and I actively try to force myself to adapt. Because the alternative is becoming stuck in my ways and increasingly less relevant. There are a lot of much younger people around me that still have most of their careers ahead of them. They can try to whine about AI all they want for the next four decades or so but I don't think it will help them. Or they can try to deal with the fact that these tools are here now and that they need to learn to adapt to them whether they like it or not. And we are probably going to see quite some progress on the tool front. It's only been 3 years since ChatGPT had its public launch.
To address the core issue here. You can use AI or let AI use you. The difference here is about who is in control and who is setting the goals. The traditional software development team is essentially managers prompting programmers to do stuff. And now we have programmers prompting AIs to do that stuff. If you are just a middle man relaying prompts from managers to the AI, you are not adding a lot of value. That's frustrating. It should be because it means apparently you are very replaceable.
But you can turn that around. What makes that manager the best person to be prompting you? What's stopping them from skipping that entirely? Because that's your added value. Whatever you are good at and they are not is what you should be doing most of your time. The AI tools are just a means to an end to free up more time for whatever that is. Adapting means figuring that out for yourself and figuring out things that you enjoy doing that are still valuable to do.
There's plenty of work to be done. And AI tools won't lift a finger to do it until somebody starts telling them what needs doing. I see a lot of work around me that isn't getting done. A lot of people are blind to those opportunities. Hint: most of that stuff still looks like hard work. If some jerk can one-shot prompt it, it isn't all that valuable and not worth your time.
Hard work usually involves thinking hard, skilling up, and figuring things out. The type of stuff the author is complaining he misses doing.
I generally feel the same. But in addition, I also enjoy the pure act of coding. At least for me, that’s another big part of why I feel left behind by all this agent stuff.
I feel like AI has given me the opportunity to think MORE, not less. I’m doing so much less mindless work, spending most of my efforts critically analyzing the code and making larger scale architectural decisions.
The author says: “Even though the AI almost certainly won't come up with a 100% satisfying solution, the 70% solution it achieves usually hits the ‘good enough’ mark.”
The key is to keep pushing until it gets to the 100% mark. That last 30% takes multiples longer than the first 70%, but that is where the satisfaction lies for me.
I had similar thoughts recently. I wouldn't consider myself "the thinker", but I simply missed learning by failure. You almost don't fail anymore using AI. If something fails, it feels like it's not your fault, but that the AI messed up. Sometimes I even get angry at the AI for failing, not at myself. I don't have a solution either, but I came up with a guideline on when and how to use AI that has helped me to still enjoy learning. I'm not trying to advertise my blog and you don't need to read it; the important part is the diagram at the end of "Learning & Failure": https://sattlerjoshua.com/writing/2026-02-01-thoughts-on-ai-.... In summary, when something is important and long-term, I heavily invest in understanding and use an approach that maximizes understanding over speed. Not sure if you can translate it 100% to your situation, but maybe it helps to have some kind of guideline for when to spend more time thinking instead of directly using an AI to get to the solution.
I miss this too. I have had those moments of reward where something works and I want to celebrate. That's missing for me now.
With AI, the pros outweigh the cons, at least at the moment, with what we have collectively figured out so far. But with that, every day I wonder if it's now possible to be more ambitious than ever and take on much bigger problems with the pretend-smart assistant.
It seems like what you miss is actually a stable cognitive regime built around long uninterrupted internal simulation of a single problem. This is why people play strategy video games.
I've had the completely opposite experience as somebody that also likes to think more than to build: LLMs take much of the legwork of actually implementing a design, fixing trivial errors etc. away from me and let me validate theories much more quickly than I could do by myself.
More importantly, thinking and building are two very different modes of operating, and it can be hard to switch at a moment's notice. I've definitely noticed myself getting stuck in "non-thinking building/fixing mode" at times, only realizing an hour or two in that I've been making steady progress in the wrong direction.
This happens way less with LLMs, as they provide natural time to think while they churn away at doing.
Even when thinking, they can help: They're infinitely patient rubber ducks, and they often press all the right buttons of "somebody being wrong on the Internet" too, which can help engineers that thrive in these kinds of verbal pro/contra discussions.
Great article. The moment I finished reading it, I thought of my time solving a UI menu problem with a lot of items in it, and the algorithm I came up with to handle different screen sizes. It took a solid 2 hrs of walking and thinking. I still remember how excited I was when I had the feeling of cracking the problem. Deep thinking is something everyone has within them, and it varies how fast you can think, but with the right environment and time, we've all got it in us. But that's a long time ago. Now I always offload some thinking to AI. It comes up with options and you just have to steer it; over time it is getting better. Just ask it, you know. But I feel like the good old days were thinking deeply by yourself. Now I have a partner in AI to think along with me.
Good highlight of the struggle between Builder and Thinker; I enjoyed the writing. So why not work on post-quantum cryptography (PQC)? Surely you've thought about other avenues here as well.
If you're looking for a domain where the 70% AI solution is a total failure, that's the field. You can't rely on vibe coding because the underlying math, like Learning With Errors (LWE) or supersingular isogeny graphs, is conceptually dense and hasn't been commoditized into AI training data yet. It requires that same 'several-day-soak' thinking you loved in physics, specifically because we're trying to build systems that remain secure even against an adversary with a quantum computer. It’s one of the few areas left where the Thinker isn't just a luxury, but a hard requirement for the Builder to even begin.
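To make "conceptually dense" concrete, here is a toy, deliberately insecure Regev-style LWE sketch in Python (tiny made-up parameters, purely illustrative, not any real scheme). Even this baby version only decrypts correctly because the accumulated noise stays below q/4; picking real parameters so that this holds while remaining hard to attack is exactly the several-day-soak kind of problem:

    # Toy Regev-style LWE encryption -- illustrative only, NOT secure.
    import random

    n, q, m = 8, 97, 20                 # dimension, modulus, sample count

    def keygen():
        s = [random.randrange(q) for _ in range(n)]          # secret vector
        A, b = [], []
        for _ in range(m):
            a = [random.randrange(q) for _ in range(n)]
            e = random.randint(-1, 1)                        # small noise
            A.append(a)
            b.append((sum(x * y for x, y in zip(a, s)) + e) % q)
        return s, (A, b)                                     # (sk, pk)

    def encrypt(pk, bit):
        A, b = pk
        S = [i for i in range(m) if random.random() < 0.5]   # random subset
        a = [sum(A[i][j] for i in S) % q for j in range(n)]
        c = (sum(b[i] for i in S) + bit * (q // 2)) % q      # embed the bit
        return a, c

    def decrypt(s, ct):
        a, c = ct
        d = (c - sum(x * y for x, y in zip(a, s))) % q       # noise + bit*q/2
        return 1 if q // 4 < d < 3 * q // 4 else 0           # round to 0 or q/2

    s, pk = keygen()
    assert all(decrypt(s, encrypt(pk, bit)) == bit for bit in (0, 1, 1, 0))

Recovering s from (A, b) alone means solving a noisy linear system, which is the Learning With Errors problem.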
If you are thinking hard, I think you are doing software engineering wrong, even before AI. As an industry, all the different ways of doing things have already played out. Even big refactors or performance optimizations often cannot be 100% predicted in their effectiveness; you will want to just go ahead and implement these things rather than spend more time thinking. And as AI gets stronger, the "just try a bunch of approaches" strategy will beat the "think hard" approach by an even bigger margin.
The sampling rate we use to take in information is fixed. And we always find a way to work with the sampled information, no matter whether the input information density is high or low.
We can play a peaceful game and an intense one.
Now, when we think, we can always find the right level of abstraction to think at. Decades ago a programmer thought in machine code; now we think in high-level concepts, maybe even toward philosophy.
A good outcome always requires hard thinking. We can, and we WILL, think hard at an appropriate level.
The answer to this is to shift left into product/design.
Sure, I'm doing less technical thinking these days. But all the hard thinking is happening on feature design.
Good feature design is hard for AI. There's a lot of hidden context: customer conversations, unwritten roadmaps, understanding your users and their behaviour, and even an understanding of your existing feature set and how this new one fits in.
It's a different style of thinking, but it is hard, and a new challenge we gotta embrace imo.
Eventually I always get to a problem I can't solve by just throwing an LLM at it and have to go in and properly debug things. At that point knowing the code base helps a hell of a lot, and I would've been better off writing the entire thing by hand.
Many people here might be in a similar situation to me, but I took an online masters program that allowed for continuing education following completion of the degree. This has become one of my hobbies; I can take classes at my own expense, not worry about my grades, and just enjoy learning. I can push myself as much as I want and since the classes are hard, just completing 1 assignment is enough to force me to "think". Just sharing my experience for people who might be looking for ways to challenge themselves intellectually.
I think what plagues a lot of pure STEM types in this tumultuous period of AI (or "AI") is that they've spent a majority of their lives mulling over some problem until they've worked out every possible imperfection, and once they've achieved something they consider close to that level of perfection, that's when they say they're done.
This may be an unfair generalization, and apologies to those who don't feel this way, but I believe STEM types like the OP are used to problem solving that's linear in the sense that the problem only exists in its field as something to be solved, and once they figure it out, they're done. The OP even described his mentality as that of a “Thinker” who received a problem during his schooling, mulled over it for a long time, and eventually came to the answer. That's it, next problem to crack. Their whole lives revolve around this process and most have never considered anything outside it.
Even now, despite my own healthy skepticism of and distaste for AI, I am forced to respect that AI can do some things very fast. People like the OP, used to chiseling away at a problem for days, weeks, months, etc., now have that throughput time slashed. They're used to the notion of thinking long and hard about a very specific problem and finally having some output; now, code modules that are "good enough" can be cooked up in a few minutes, and if the module works the problem is solved and they need to find the next problem.
I think this is more common than most people want to admit, going back to grumblings of "gluing libraries together" being unsatisfying. The only suggestion I have for the OP is to expand what you think about. There are other comments in this thread supporting it but I think a sea change that AI is starting to bring for software folks is that we get to put more time towards enhancing module design, user experience, resolving tech debt, and so on. People being the ones writing code is still very important.
I think there's more to talk about where I do share the OP's yearning and fears (i.e., people who weren't voracious readers or English/literary majors being oneshot by the devil that is AI summaries, AI-assisted reading, etc.) but that's another story for another time.
I've been writing C/C++/Java for 25 years and am now trying to learn disciplined, risk-managed forex trading. It's a whole new level of hard work/thinking.
Exactly what I've been thinking. Outsourcing tasks and the thinking through of problems to AI just seems easier these days, and you still get to feel in charge because you're the one giving the instructions.
I knew this sort of thing would happen before it was popular. Accordingly:
Never have I ever used an LLM.
You were walking to your destination which was three miles away
You now have a bicycle which gets you there in a third of the time
You need to find destinations that are 3x as far away as before
If it's this easy to convince you to stop being creative, to stop putting in effort to think critically, then you don't deserve the fulfilment that creativity and critical thinking can give you. These vibe coding self pity articles are so bizarre.
This March 2025 post from Aral Balkan stuck with me:
https://mastodon.ar.al/@aral/114160190826192080
"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.
When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."