
imiric · yesterday at 9:37 PM · 4 replies

I find it puzzling whenever someone claims to reach "flow" or "zen state" when using these tools. Reviewing and testing code, constantly switching contexts, juggling model contexts, coming up with prompt incantations to coax the model into the right direction, etc., is so mentally taxing and full of interruptions and micromanagement that it's practically impossible to achieve any sort of "flow" or "zen state".

This is in no way comparable to the "flow" state that programmers sometimes achieve, which is reached when the person has a clear mental model of the program, understands all relevant context and APIs, and is able to easily translate their thoughts and program requirements into functional code. The reason why interrupting someone in this state is so disruptive is because it can take quite a while to reach it again.

Working with LLMs is the complete opposite of this.


Replies

jwpapi · yesterday at 10:59 PM

Thank you so much. These comments help me keep my sanity in an over-hyped world.

I see how people think it's more productive, but honestly I iterate on my code 10-15 times before it goes into production, to make sure it logs the right things, it communicates intent clearly, the types are shared and defined where they should be, it's stored in the right folder, and so on.

While the temptation to just pass it to CC is there, I feel more productive writing it on my own, because I work in small iterations. Especially when I need to test stuff.

Let's say I have to build an automated workflow, and for step 1 alone I need to test error handling and max concurrency, set up idempotency and proper logging, and communicate intent properly to my future self. Once I'm done I never have to worry about this specific code again (OK, some errors can be tricky, to be fair); often this function practically is my thought, there whenever I need it. This only works with good variable naming and good spacing within a function. Nobody really talks about it, but if a very unimportant part takes up a lot of space in a service, it should probably be refactored into a smaller service.

The goal is to have a function that I probably never have to look at again, and if I do, it answers as quickly as possible all the questions my future self would ask after he's forgotten what decisions needed to be made or how the external parts work. When it breaks, I know what went wrong, and when I run it in an orchestration, I get the right amount of feedback.

Like others, I could go on about this at length, and I'm aware of the other side of the coin, over-engineering, but I just feel that having solid composable units is what actually enables you to later build features and functionality that might become a moat.

Slow, flaky units are far less likely to become an asset.

And even if I let AI draft the initial flow, honestly my review will never be as good as the step-by-step version I built myself.

I have to say AI is great for improving you as a developer: it double-checks you and answers broad questions before things get too detailed and you need to experiment or read the docs. It helps cover all the basics.

sefrost · yesterday at 11:49 PM

I switched to using LLMs exclusively around March last year and I haven't written a line of code directly since then.

I have followed the usual progression: autocomplete > VS Code sidebar Copilot > Cursor > Claude Code > some orchestrator of multiple Codex/Claude Code instances.

I haven’t experienced the flow state once in this new world of LLMs. To be honest it’s been so long that I can’t even remember what it felt like.

slashdave · today at 12:13 AM

LLMs deal with the implementation details that get in the way of "flow".

fragmede · yesterday at 11:31 PM

"My flow state is better than yours"? The point is, I get engaged with the thing and lose track of time.
