Hacker News

jrumbut · today at 4:20 AM

I agree with this sentiment, but I think LLMs are really close to Brooks's idea of a silver bullet.

I don't know if, overall, it's a 10x improvement or 6x or 14x, but it's a serious contender. Part of it is that LLMs are very uneven in their performance across domains. If all I build is simple landing pages, it might be a 100x improvement. If I work on more complex, proprietary work where there aren't great examples in the training data, then it might be a 10% improvement (it helps me write better comments or something).


Replies

BlackFly · today at 5:24 AM

All available evidence points to it being an incremental improvement at best. Higher claims are attributable to the psychological effect of the AI sycophancy problem which erases the Dunning-Kruger effect and makes even experts extremely overconfident.

You still have to read the output of your LLM, and learning by reading alone, without doing, is not nearly as effective.