Hacker News

I miss thinking hard

916 points by jernestomg · today at 3:54 AM · 507 comments

Comments

frgturpwd · today at 10:20 AM

It seems like what you miss is actually a stable cognitive regime built around long uninterrupted internal simulation of a single problem. This is why people play strategy video games.

rozumem · today at 7:53 AM

I can relate to this. Coding satisfies my urge to build and ship and have an impact on the world. But it doesn't make me think hard. Two things I've recently gravitated to outside of coding that make me think: blogging and playing chess.

Maybe I subconsciously picked these up because my Thinker side was starved for attention. Nice post.

ertucetin · today at 8:44 AM

It’s the journey, not the destination, but with AI it’s only the destination, and it takes all the joy.

Dr_Birdbrain · today at 4:39 AM

I think this problem existed before AI. At least in my current job, there is constant, unrelenting demand for fast results. "Multi-day deep thinking" sounds like an outrageous luxury.

ccppurcell · today at 9:01 AM

In my experience, the so-called 1% are mostly just thinkers and researchers who have dedicated a lot more time from an earlier age to thinking and/or researching. There are a few geniuses out there, but it's 1 in millions, not 1 in hundreds.

felipelalli · today at 11:29 AM

Me: I put your text into an AI and asked it to summarize. We really do have a critical problem of mental laziness.

tevli · today at 9:34 AM

Exactly what I've been thinking. Outsourcing tasks and problem-solving to AI just seems easier these days, and you still get to feel in charge because you're the one giving the instructions.

raincole · today at 4:33 AM

I really don't believe AI allows you to think less hard. If it did, that would be amazing, but current AI hasn't reached that capability. At best, it forces you to think about different things.

martin1975 · today at 9:46 AM

I've been writing C/C++/Java for 25 years and am now trying to learn disciplined, risk-managed forex trading. It's a whole new level of hard work and thinking.

koakuma-chan · today at 5:06 AM

What a bizarre claim. If you can solve anything by thinking, why don't you become a scientist? Think of a theory that unites quantum physics and general relativity.

saturatedfat · today at 7:00 AM

I think for days at a time still.

I don’t think you can get the same satisfaction out of these tools if what you want to do is not novel.

If you are exploring the space of possibilities for which there are no clear solutions, then you have to think hard. Take on wildly more ambitious projects. Try to do something you don’t think you can do. And work with them to get there.

zepesm · today at 9:36 AM

That's why I'm still pushing bytes on the C64 demoscene (and recommend such a niche hobby to anyone). It's great for sanity in the modern AI-driven dev world ;)

jbrooks84 · today at 11:57 AM

You are doing something wrong. AI has not taken away thinking hard.

muyuu · today at 9:45 AM

this also used to happen to me: earlier on I was in a position that involved a lot of research, and then, after the product was a reality and it worked, the job tapered off into small improvements and maintenance

I can imagine many positions work out this way in startups

it's important to think hard sometimes, even if it means taking time off to do the thinking - you can do it without the socioeconomic pressure of a work environment

moorebob · today at 11:53 AM

My mindset last year: I am now a mentor to a junior developer

My mindset this year: I am an engineering manager to a team of developers

If the pace of AI improvement continues, my mindset next year will need to be: I am CEO and CTO.

I never enjoyed the IC -> EM transition in the workplace because of all the tedious political issues, people management issues and repetitive admin. I actually went back to being an IC because of this.

However, with a team of AI agents, there's less BS, and less holding me back. So I'm seeing the positives - I can achieve vastly more, and I can set the engineering standards, improve quality (by training and tuning the AI) and get plenty of satisfaction from "The Builder" role, as defined in the article.

Likewise I'm sure I would hate the CEO/CTO role in real life. However, I am adapting my mindset to the 2030s reality, and imagining being a CEO/CTO to an infinitely scalable team of Agentic EMs who can deliver the work of hundreds of real people, in any direction I choose.

How much space is there in the marketplace if all HN readers become CEOs and try to launch their own products and services? Who knows... but I do know that this is the option available to me, and it's probably wise to get ahead of it.

sbinnee · today at 6:41 AM

What OP wants to say is that they miss the process of thinking hard for days and weeks until, one day, a brilliant idea pops up in bed before sleep. I lost my "thinking hard" process again today too, at work, to my pragmatism, or more precisely to my job.

tbmtbmtbmtbmtbm · today at 4:43 AM

Make sure you start every day with the type of confidence that would allow you to refer to yourself as an intellectual one-percenter

scionni · today at 7:57 AM

I have a very similar background and a very similar feeling when I think of programming nowadays.

Personally, I am going deeper in Quantum Computing, hoping that this field will require thinkers for a long time.

fatfox · today at 8:10 AM

Just sit down and think hard. If it doesn’t work, think harder.

voidUpdate · today at 9:21 AM

If you miss the experience of not using LLMs, then just... don't? Is someone forcing you to code with LLM help?

dhananjayadr · today at 7:07 AM

The author's point is: if you use AI to solve the problem, then after the chat gives you the solution you say "oh yes, ok, I understand it, I can do it" (and no, you can't).

Animats · today at 4:59 AM

"Sometimes you have to keep thinking past the point where it starts to hurt." - Fermi

z3t4 · today at 6:53 AM

I always search the web, ask others, or read books in order to find a solution. When I do not find an answer from someone else, that's where I have to think hard.

zatkin · today at 4:31 AM

I feel that AI doesn't necessarily replace my thinking, but actually helps to explore deeper - on my behalf - alternative considerations in the approach to solving a problem, which in turn better informs my thinking.

hpone91 · today at 9:03 AM

Just give Umineko a play/readthrough to get your deep thinking gray cells working again.

keithnz · today at 4:32 AM

I feel like I'm doing much nicer thinking now: more systems thinking. Not only that, I'm iterating on system design a lot more, because it is a lot easier to change with AI.

sfink · today at 4:59 AM

I definitely relate to this. Except that, while I was in the 1% in university who thought hard, I don't think my success rate was that high. My confidence at the time was quite high, though, and I still remember the notable successes.

And also, I haven't started using AI for writing code yet. I'm shuffling toward that, with much trepidation. I ask it lots of coding questions. I make it teach me stuff. Which brings me to the point of my post:

The other day, I was looking at some Rust code and trying to work out the ownership rules. In theory, I more or less understand them. In practice, not so much. So I had Claude start quizzing me. Claude was a pretty brutal teacher -- he'd ask 4 or 5 questions, most of them solvable from what I knew already, and then 1 or 2 that introduced a new concept that I hadn't seen. I would get that one wrong and ask for another quiz. Same thing: 4 or 5 questions, using what I knew plus the thing just introduced, plus 1 or 2 with a new wrinkle.

I don't think I got 100% on any of the quizzes. Maybe the last one; I should dig up that chat and see. But I learned a ton, and had to think really hard.

Somehow, I doubt this technique will be popular. But my experience with it was very good. I recommend it. (It does make me a little nervous that whenever I work with Claude on things that I'm more familiar with, he's always a little off base on some part of it. Since this was stuff I didn't know, he could have been feeding me slop. But I don't think so; the explanations made sense and the compiler agreed, so it'd be tough to get anything completely wrong. And I was thinking through all of it; usually the bullshit slips in stealthily in the parts that don't seem to matter, but I had to work through everything.)

yehoshuapw · today at 7:58 AM

have a look at https://projecteuler.net/

for "Thinker" brain food. (it still has the issue of not being a pragmatic use of time, but there are plenty interesting enough questions which it at least helps)

emsign · today at 11:55 AM

I miss hard thinking people.

stale-labs · today at 1:48 PM

Honestly I think the "thinking hard" part is still there, it just shifted. Instead of thinking hard about implementation details, I'm now spending more time thinking about what I actually want to build and why.

The debugging experience also changed: when code doesn't work, you can't just step through the logic you wrote, because you didn't write it. You have to understand someone else's (the AI's) logic. That's still thinking hard, just differently.

What I miss more is the satisfaction of solving a tricky problem myself. Sometimes I deliberately don't use AI for stuff just to get that feeling back.

thorum · today at 8:48 AM

> but the number of problems requiring deep creative solutions feels like it is diminishing rapidly.

If anything, we have more intractable problems needing deep creative solutions than ever before. People are dying as I write this. We’ve got mass displacement, poverty, polarization in politics. The education and healthcare systems are broken. Climate change marches on. Not to mention the social consequences of new technologies like AI (including the ones discussed in this post) that frankly no one knows what to do about.

The solution is indeed to work on bigger problems. If you can’t find any, look harder.

tietjens · today at 7:56 AM

I wish the author would give some examples of what he wants to think hard about.

Bengalilol · today at 10:08 AM

Cognitive debt lies ahead for all of us.

Besibeta · today at 3:55 AM

The problem with the "70% solution" is that it creates a massive amount of hidden technical debt. You aren't thinking hard because you aren't forced to understand the edge cases or the real origin of the problem. It used to be the case that you will need plan 10 steps ahead because refactoring was expensive, now people just focus in the next problem ahead, but the compounding AI slop will blow up eventually.

woah · today at 5:23 AM

Just work on more ambitious projects?

marcus_holmes · today at 7:43 AM

I think it's just another abstraction layer, and moves the thinking process from "how do I solve this problem in code?" to "how do I solve this problem in orchestration?".

I recently used the analogy of when compilers were invented. Old-school coders wrote machine code and handled the intricacies of memory and storage and everything themselves. Then compilers took over, we all moved up an abstraction layer, and we started using high-level languages to code in. There was a generation of programmers who hated compilers because they wrote bad, inelegant, inefficient programs. And for years they were right.

The hard problems now are "how can I get a set of non-deterministic, fault-prone, LLM agents to build this feature or product with as few errors as possible, with as little oversight as possible?". There's a few generic solutions, a few good approaches coming out, but plenty of scope for some hard thought in there. And a generic approach may not work for your specific project.

macmac_mac · today at 9:13 AM

Reading this made me realize I used to actually think hard about bugs and design tradeoffs because I had no choice.

capl · today at 8:21 AM

That's funny, because I feel the opposite: LLMs can automate, in a sloppy fashion, building the first trivial draft. But what remains is still thinking hard about the non-trivial parts.

conception · today at 6:20 AM

“We now buy our bread… it comes sliced… and sure you can just go and make your sandwich and it won’t be a rustic, sourdough that you spent months cultivating. Your tomatoes will be store bought not grown heirlooms. In the end… you have lost the art of baking bread. And your sandwich making skills are lost to time… will humanity ever bake again with these mass factories of bread? What have we lost! Woe is me. Woe is me.”

mw888 · today at 5:34 AM

Give the AI less responsibility but more work. Immediate inference is a great example: if the AI can finish my lines, my `if` bodies, my struct instantiations, type signatures, etc., it can reduce my second-by-second work significantly while taking little of my cognitive agency.

These are also tasks the AI can succeed at rather trivially.

Better completion isn't as sexy, but amid all the pretending that agents are great engineers, it's an amazing feature that often gets glossed over.

Another example is automatic test generation or early correctness warnings. If the AI can suggest a basic test and I can add it with the push of a button - great. The length (and thus complexity) of tests can be configured conservatively relative to the AI of the day. Warnings can just be flags in the editors spotting obvious mistakes. Off-by-one errors for example, which might go unnoticed for a while, would be an achievable and valuable notice.

Also, automatic debugging and feeding the raw debugger log into an AI to parse seems promising, but I've done little of it.

...And go from there - if a well-crafted codebase and an advanced model using it as context can generate short functions well, then by all means - scale that up with discretion.

These problems around the AI coding tools are not at all special - it's a classic case of taking the new tool too far too fast.

nate · today at 1:35 PM

The author obviously isn't wrong. It's easy to fall into this trap, and it does take willpower to get out of it. And the AI (christ, I'm going to sound like they paid me) can actually be a tool to get there.

I was working for months on an entity resolution system at work. I inherited the basic algo of it: Locality Sensitive Hashing. Basically breaking up a word into little chunks and comparing the chunk fingerprints to see which strings matched(ish). But it was slow, blew up memory constraints, and was full of false negatives (didn't find matches).
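The chunk-and-compare idea described above can be sketched in a few lines (an illustrative hedge, not the commenter's actual pipeline; the shingle size `k=3` and the helper names are assumptions):

```python
def shingles(s: str, k: int = 3) -> set[str]:
    """Break a string into overlapping k-character chunks ("shingles")."""
    s = s.lower()
    return {s[i:i + k] for i in range(max(1, len(s) - k + 1))}

def jaccard(a: str, b: str) -> float:
    """Compare chunk fingerprints: the fraction of shingles two strings share."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Near-duplicate entity names score much higher than unrelated ones.
print(jaccard("Acme Corporation", "ACME Corp."))   # fairly high
print(jaccard("Acme Corporation", "Widget Labs"))  # near zero
```

Real LSH goes one step further than this sketch: it minhashes the shingle sets and bands the signatures so candidate pairs can be found without comparing every string against every other, which is where the speed and memory tradeoffs the comment mentions come in.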

Of course I had Claude dig through this to help me, and it would find things, and would have solutions super fast, solutions where I couldn't immediately comprehend from its diff how it got there.

But here's a few things that helped me get on top of lazy mode. Basically, use Claude in slow mode, not lazy mode:

1. Everyone wants one-shot solutions, but do the opposite: just focus on fixing one small step at a time, so you have time to grok what the frig just happened.

2. Instead of asking Claude for code immediately, ask for more architectural thoughts. Not Claude "plans", but choices: "Claude, this SQL model is slow and grows out of our memory box. What options are on the table to fix this?" Now go back and forth getting the pros and cons of the fixes; don't just ask "make this faster". Of course this is the slower way to work with Claude, but it will get you to a solution you more deeply understand, and it avoids the hallucinations where it decides "oh, just add where 1!=1 to your sql and it will be super fast".

3. Sign yourself up to explain what you just built. Not just to get through a code review, but to give a lunch and learn teaching others how these algorithms or this code you just wrote work. You better believe you are going to force yourself to internalize the stuff Claude came up with so easily. I gave multiple presentations, all over our company and to our acquirers, on how this complicated thing worked. I HAD TO UNDERSTAND. There's no way I could show up and be like "I have no idea why we wrote that algorithm that way".

4. Get Claude to teach it to you over and over and over again. If you spot a thing you don't really know yet, like "what the hell is this algorithm doing", make it show you in agonizingly slow detail how the concept works. Didn't sink in? Do it again. And again. Ask it for the 5-year-old explanation. Yes, we have a super smart, overconfident, naive engineer here, but we also have a teacher we can berate with questions, who never tires of trying to teach us something, no matter how stupid we can be or sound.

Were there some lazy moments where I felt like I wasn't thinking? Yes. But using Claude in slow mode, I've learned the space of entity resolution faster and more thoroughly than I could have without it, and I feel like I actually, personally, invented things within it.

jurgenaut23 · today at 10:15 AM

Man, this resonates SO MUCH with me. I have always loved being confronted with a truly difficult problem. And I always had that (obviously misguided, but utterly motivating) feeling that, with enough effort, no problem could ever resist me. That it was just a matter of grinding a bit further, a bit longer.

This is why I am so deeply opposed to using AI for problem solving I suppose: it just doesn’t play nice with this process.

larodi · today at 6:10 AM

Well, thinking hard is still there if you work on hard abstract problems. I keep thinking very hard, even though 4 CCs pump out code while I do. Besides, being a Garry Kasparov playing on several tables takes thinking.

sergiotapia · today at 4:31 AM

With AI, I now think much harder. Timelines are shorter, big decisions are closer together, and more system interactions have to be "grokked" in my head to guide the model properly.

I'm more spent than before, when I would spend 2 hours wrestling with Tailwind classes or testing API endpoints manually by typing JSON shapes myself.

saulpw · today at 5:49 AM

The ziphead era of coding is over. I'll miss it too.

spacecadet · today at 12:02 PM

Every day man... Thinking hard on something is a conscious choice.
