
Terretta · today at 12:58 PM

Even at unlimited budget, there is a crossover where outsourcing thinking to the machine costs more than the machine saves.

What I mean by this:

1. Intern-, analyst-, junior-, or offshore-level coding is cheaper when done by the machine.

// Side note: There is good reason the industry invests in the suboptimal output of this group — a benefit that moves to the "cost" column when using an LLM, but nobody's accounting for that.

2. Getting the interns, analysts, juniors, or offshore teams to do the right thing costs a multiple of the coding effort: the PdM/PjM work, of course, but also the Stakeholder, Product Owner, Architect, Principal Engineer, QA, and SRE work.

3. If you are not a principal- or staff-level engineer, you are likely unqualified to catch and fix the errors LLMs make across engineering, much less across the rest of the PDLC (product development lifecycle, which includes the SDLC and SRE) loop.

4. For LLM output to be useful, your 'harness' has to incorporate all of that as well, which — because it's so much harder than transliterating spec to code — balloons token usage exponentially.

5. Today it is faster, more efficient, and cheaper to work with LLMs "XP" (eXtreme Programming) style: pairing with the LLM, actively co-creating and co-reviewing, steering toward more effective turns.

So, your options are:

- ship garbage while costing less than a median first world SWE

- pair with the LLM actively for the benefits of XP

- add enough harness and steering that the LLM costs more than SWEs, and still need a human in the loop, "move fast and break things to find out what's broken" style
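The "harness" idea from point 4 — wrapping generation in automated review gates and feeding failures back — can be sketched roughly as below. Everything here (`call_llm`, the gate checks, the token accounting) is a hypothetical stand-in, not a real API; the point is only to show where the token cost compounds.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns candidate code."""
    return "def add(a, b):\n    return a + b\n"

def run_gates(code: str) -> list[str]:
    """Each gate mimics one PDLC discipline (QA, security, ...) as a toy check."""
    failures = []
    if "def " not in code:
        failures.append("QA: no function defined")
    if "eval(" in code:
        failures.append("security: eval() is banned")
    return failures

def harness(spec: str, max_turns: int = 5) -> tuple[str, int]:
    """Loop until all gates pass; every retry spends more tokens."""
    tokens_spent = 0
    prompt = spec
    for _ in range(max_turns):
        code = call_llm(prompt)
        # Crude token proxy: word counts of prompt plus output.
        tokens_spent += len(prompt.split()) + len(code.split())
        failures = run_gates(code)
        if not failures:
            return code, tokens_spent
        # Fold review findings back into the next prompt -- this is where
        # cost compounds with every extra discipline the harness encodes.
        prompt = spec + "\nFix these review findings:\n" + "\n".join(failures)
    raise RuntimeError("gates never passed within the turn budget")
```

Each added gate both lengthens the prompt and raises the chance of another round trip, which is the "balloons token usage" effect in point 4.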

I would expect that within a couple of years, these other disciplines can be baked in enough that the machine costs less for everything but surprises.


Replies

Grosvenor · today at 2:11 PM

> I would expect that within a couple years, these other disciplines can be baked in enough the machine costs less for everything but surprises.

They already are. I'm successfully using frameworks like bmad to deliver complex apps at that level. My job is to manage the SE, QA, UX, and SRE processes and catch errors.

I spend more time refining PRDs, epics, and stories than I do elbows-deep in code.

If I don't like the output of a story, I nuke it, change the story, and have the agent try again. I'm using the open-source GLM, Kimi, and DeepSeek models. I expect the full pipeline to be good enough by the end of the year.
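The "nuke it, change the story, try again" loop described above can be sketched as follows. `generate` and `acceptable` are hypothetical placeholders for the model call and the human (or automated) acceptance check; the key design choice is that the *story text* is what gets refined, and the old output is discarded rather than patched.

```python
def generate(story: str) -> str:
    """Hypothetical stand-in for the model producing a story's implementation."""
    return f"implementation of: {story}"

def acceptable(output: str) -> bool:
    """Hypothetical acceptance check -- in practice, the human reviewing."""
    return "refined" in output

def run_story(story: str, refinements: list[str]) -> str:
    """Discard bad output, refine the story text, and regenerate from scratch."""
    output = generate(story)
    for refinement in refinements:
        if acceptable(output):
            break
        story = f"{story} ({refinement})"  # edit the story, not the code
        output = generate(story)           # nuke the old output entirely
    return output
```

Regenerating from a better story, instead of hand-patching bad output, keeps the human effort at the spec level — which matches the "more time refining PRDs than elbows-deep in code" workflow.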