
humanfromearth9 · yesterday at 3:09 PM

It's also trained on all the best practices and algorithms that you don't know exist, so it can do better - provided you know to ask, how to ask, and what to ask.


Replies

HideousKojima · yesterday at 3:49 PM

It's not simply a matter of knowing what or how to ask. LLMs are essentially statistical regressions on crack. That's a gross oversimplification, but the point is that what they generate is driven by statistical likelihoods, and if 90%+ of the code they were trained on was shit, you're not going to get the good stuff very often. And if you need an AI to help you write it, you won't even be able to recognize the good stuff when it does get generated.
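
A toy sketch of the point about likelihood-weighted output (not how a real LLM actually works, and the 90/10 split is just the hypothetical number from the comment): if the bulk of the corpus is low quality, naive sampling proportional to frequency returns low-quality output most of the time.

    import random

    # Hypothetical quality labels: 90% "bad" examples, 10% "good".
    corpus = ["bad"] * 90 + ["good"] * 10

    # Sample uniformly from the corpus, i.e. in proportion to how
    # often each kind of example appears in the training data.
    samples = [random.choice(corpus) for _ in range(10_000)]

    # Fraction of "good" outputs comes out around 0.10.
    print(samples.count("good") / len(samples))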