logoalt Hacker News

bornfreddyyesterday at 3:17 PM1 replyview on HN

This is actually where LLMs could be in advantage. Any code which is not clean (i.e. could be obfuscated) will trigger alarms and deeper inspection. It is much more difficult to create a good "underhanded" exploit that LLM will miss than it is to do the same for humans, imho.


Replies

whyeveryesterday at 10:31 PM

LLMs are vulnerable to prompt injection attacks, so I'm not sure they are in advantage.