Hacker News

vibeprofessor · today at 5:55 AM · 2 replies

I find hard problems are best solved by breaking them down into smaller, easier sub-problems. In other words, it comes down to thinking hard about which questions to ask.

AI moves engineering into higher-level thinking, much like compilers did for Assembly programming back in the day.


Replies

Nextgrid · today at 6:19 AM

> hard problems are best solved by breaking them down into smaller, easier sub-problems

I'm ok doing that with a junior developer because they will learn from it and one day become my peer. LLMs don't learn from individual interactions, so I don't benefit from wasting my time attempting to teach an LLM.

> much like compilers did for Assembly programming back in the day

The difference is that programming in, say, C (vs. assembler), or Python (vs. C), saves me time. In my experience, arguing with my agent in English about which Python to write often takes longer than just writing the Python myself.

I still use LLMs to ask high-level questions, sanity-check ideas, write some repetitive code ("in this enum, convert all camelCase names to snake_case"), or the one-off hacky script I won't commit, where the quality bar is lower (does it run and solve my very specific problem right now?). But I'm not convinced by agents yet.
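For reference, the kind of one-off helper the comment alludes to is a few lines of Python. This is a sketch, not anything from the thread; the function name and example input are illustrative, and it deliberately ignores edge cases like consecutive capitals (`HTTPServer`), matching the "hacky script" quality bar described above.

```python
import re

def camel_to_snake(name: str) -> str:
    """Convert a camelCase identifier to snake_case.

    Inserts an underscore before each uppercase letter (except at the
    start of the string), then lowercases everything. Runs of capitals
    like "HTTPServer" are not handled specially.
    """
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(camel_to_snake("maxRetryCount"))  # -> max_retry_count
print(camel_to_snake("userId"))         # -> user_id
```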

petesergeant · today at 11:11 AM

> I find hard problems are best solved by breaking them down into smaller, easier sub-problems. In other words, it comes down to thinking hard about which questions to ask.

That's surely me solving the problem, not the agent?
