LLMs give the most likely response to a prompt. So if you prompt one with "find security bugs in this code", it will respond with "this may be a security bug" rather than "you fucking donkey, this curl code has already been eyeballed by hundreds of people; do you really think a statistical model will find something new?"