Hacker News

dwaltrip, yesterday at 11:00 PM, 0 replies

I feel this as well. I think it has something to do with having to be more "on" as you slowly work with the LLM to define the problem and find a reasonable solution. There's not much of a flow state. You have to process mountains of output and identify the critical points, over and over, endlessly. And it will always be off in this unsettling little way, even when it's mostly quite good. It's jarring.

The strange sorts of errors and reasoning issues LLMs have also demand a vigilance that is very draining to maintain. Likewise with parsing the inhuman communication styles of these things…