Hacker News

catlifeonmars, today at 8:48 AM

Is that the case?


Replies

happosai, today at 9:29 AM

LLMs give the most likely response to a prompt. So if you prompt one with "find security bugs in this code", it will respond with "This may be a security bug" rather than "you fucking donkey, this curl code has already been eyeballed by hundreds of people; do you think a statistical model will find something new?"