Hacker News

SkyPuncher, last Sunday at 4:42 PM (2 replies)

> The last 20%, while possible to attain, is ultimately not worth it for the amount of time you spend in context hells. You can just do it yourself faster.

I'm arguing that there's a skill that has to be learned in order to break through this. As you start in a new code base, you should be quick to jump in when you hit that 20%. But, as you spend more time in it, you learn how to avoid the same "context hell" issues and move that number down to 15%, 10%, 5% of the time.

You're still going to need to jump in, but when you can learn to get the LLM to write 95% of the code for you, that's incredibly powerful.


Replies

deadbabe, last Sunday at 5:51 PM

It’s not incredibly powerful, it’s incrementally powerful. Getting the first 80% via LLM is already the incredible power. A sufficiently skilled developer should be able to handle the rest with ease. It is not worth doing anything unnatural in an effort to chase down the last 20%; you are just wasting time and atrophying skills. If you can get the full 95% in some one-shot prompts, great. But don’t go chasing waterfalls.

lelanthran, last Monday at 7:02 AM

> I'm arguing that there's a skill that has to be learned in order to break through this. As you start in a new code base, you should be quick to jump in when you hit that 20%. But, as you spend more time in it, you learn how to avoid the same "context hell" issues and move that number down to 15%, 10%, 5% of the time.

The problem is that you're learning a skill that will need refinement each time you switch to a new model. You will redo some of this learning on each new model you use.

This actually might not be a problem anyway, as all the models seem to be converging asymptotically towards "programming".

The better they do on the programming benchmarks, the further away from AGI they get.