Hacker News

or_am_i · today at 10:23 AM

It's always easier to blame the model and convince yourself that you have some talent for reviewing an LLM's work that others lack.

In my experience the differences come down mostly to how the LLM is prompted and what context the agent is given. Developers with experience delegating their work tend to head off downstream problems immediately, and then complain that their colleagues can't prompt as efficiently without a lot of hand-holding. Those who rarely or never delegate their work are invariably going to miss crucial context details and rate the output they get lower.


Replies

loloquwowndueo · today at 10:32 AM

Never takes long for the “you’re holding it wrong” crowd to pop in.

hellosimon · today at 11:42 AM

Partly true, but I think there's also real skill in catching subtle logic errors in generated code, not just in prompting well. Both matter.