Hacker News

senderista · today at 2:42 AM · 0 replies · view on HN

I don’t buy that LLMs won’t make off-by-one or memory safety errors, or that they won’t introduce undefined behavior. Not only can they not reason about such issues, but imagine how much buggy code they’re trained on!