Hacker News

theshrike79 · last Wednesday at 1:09 AM · 1 reply

It's like working with humans:

  1) define problem
  2) split problem into small independently verifiable tasks
  3) implement tasks one by one, verify with tools
With humans, 1) is the spec and 2) is the Jira tickets (or whatever task tracker you use).

With an LLM, 1) is usually just a markdown file, 2) is a markdown checklist or GitHub issues (which Claude can use via the `gh` CLI), and every loop of 3) gets a fresh context, seeded with maybe the spec from step 1 and the relevant task information from step 2.
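Concretely, the two markdown files described above might look something like this (the file names and task contents are just made-up examples, not anything prescribed):

```markdown
<!-- spec.md — step 1: the problem definition -->
# Feature: CSV export for reports
Users should be able to download any report as a CSV file.

<!-- tasks.md — step 2: the checklist; one fresh-context session per item -->
- [ ] Add an `export_csv()` function to the reports module
- [ ] Wire it up to a `/reports/export` endpoint
- [ ] Add an integration test for the endpoint
```

Each loop of step 3 then gets `spec.md` plus one unchecked item, and the item is ticked off (and verified with tests/tools) before starting the next fresh session.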

I haven't run into context issues in a LONG time, and when I have, it's usually been either intentional (a problem where compacting won't hurt) or an error on my part.


Replies

troupo · last Wednesday at 9:00 AM

> every loop of 3 gets a fresh context, maybe the spec from step 1 and the relevant task information from 2

> I haven't run into context issues in a LONG time

Because you've become the reverse centaur :) "a person who is serving as a squishy meat appendage for an uncaring machine." [1]

You are very aware of the exact issues I'm talking about, and have trained yourself to do all the mechanical dance moves to avoid them.

I do the same dances; that's why I'm pointing out that they are still necessary despite the claims that models X/Y/Z are "next tier".

[1] https://doctorow.medium.com/https-pluralistic-net-2025-12-05...
