
HideousKojima | last Monday at 3:49 PM | 0 replies

It's not simply a matter of knowing what or how to ask. LLMs are essentially statistical regressions on crack. That's a gross oversimplification, but the point stands: what they generate is driven by statistical likelihoods, and if 90%+ of the code they were trained on was shit, you're not going to get the good stuff very often. And if you need an AI to help you write it in the first place, you won't even be able to recognize the good stuff on the occasions it does get generated.
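
To make the likelihood argument concrete, here's a toy sketch in Python. The 90/10 split and the generate() helper are hypothetical, and sampling straight from a corpus is a deliberately crude stand-in for what an LLM actually does; the point is just that a sampler which reproduces its training distribution emits good output at roughly the rate good examples appeared in training.

    import random

    # Hypothetical training corpus: 90% bad examples, 10% good ones.
    TRAINING_CORPUS = ["bad"] * 90 + ["good"] * 10

    def generate(n_samples: int) -> float:
        """Sample outputs in proportion to training frequency and
        return the fraction that come out 'good'."""
        samples = random.choices(TRAINING_CORPUS, k=n_samples)
        return samples.count("good") / n_samples

    print(f"fraction good: {generate(100_000):.3f}")  # prints ~0.100

Under that (admittedly cartoonish) model, no amount of clever asking changes the base rate: the ~10% figure is baked into the distribution being sampled.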