Hacker News

kace91 · yesterday at 10:49 PM

I really would like an answer to this.

My CTO is currently working on the ability to run several dockerised versions of the codebase in parallel for this kind of flow.
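For context, a minimal sketch of what that kind of setup might look like: one isolated checkout per task, each mounted into its own container so agents can work in parallel without stepping on each other. The image name, branch names, and paths below are illustrative assumptions, not the actual setup described.

```python
# Hypothetical sketch: one git worktree + one container per task.
# Branch names, image name, and paths are assumptions for illustration.
import subprocess

TASKS = ["task-a", "task-b", "task-c"]   # hypothetical branches, one per agent
IMAGE = "my-codebase-dev:latest"         # assumed pre-built dev image

for task in TASKS:
    worktree = f"/tmp/worktrees/{task}"
    # Create an isolated checkout of the repo on a new branch for this task.
    subprocess.run(["git", "worktree", "add", "-b", task, worktree], check=True)
    # Start a detached container with that checkout mounted as its workdir.
    subprocess.run(
        ["docker", "run", "--rm", "-d",
         "--name", f"agent-{task}",
         "-v", f"{worktree}:/app",
         "-w", "/app",
         IMAGE, "sleep", "infinity"],    # placeholder; a real flow would launch the agent here
        check=True,
    )
```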

I’m here wondering how anyone could work on several tasks at once at a speed where they can read, review, and iterate on the output of one LLM in the time it takes for another LLM to spit out an answer for a different task.

Like, are we just asking things as fast as possible and hoping for a good solution unchecked? Are others able to context switch on every prompt without a reduction in quality? Why are people tackling the problem of prompting at scale as if the bottleneck was token output rather than human reading and reasoning?

If this was a random vibecoding influencer I’d get it, but I see professionals trying this workflow and it makes me wonder what I’m missing.


Replies

c-linkage · yesterday at 10:53 PM

I was going to say that this is how genetic algorithms work, but there is still too much human in the loop.

Maybe code husbandry?

Aeolun · yesterday at 10:54 PM

Hmm, I haven’t managed to make it work yet, and I’ve tried. The best I can manage is three completely separate projects, and they all get only divided attention (which is often good enough these days).
