
OakNinja · last Thursday at 8:34 PM

Yes. But the exploitable vector in this case is still humans. AI is just a tool.

The non-deterministic nature of an LLM can also be used to catch a lot of attacks. I often use LLMs to look through code, libraries, etc. for security issues, vulnerabilities, and other problems, as a second pair of eyes.

With that said, I agree with you. Anything can be exploited, and LLMs are no exception.
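That "second pair of eyes" workflow can be sketched roughly like this. The prompt construction is concrete; the API call is only a commented placeholder, and the client and model name are assumptions, not something from the comment:

```python
# Minimal sketch of using an LLM as a second pair of eyes on code.
# Swap the placeholder call for whichever chat-completion client you use.

def build_review_prompt(code: str,
                        focus: str = "security issues and vulnerabilities") -> str:
    """Wrap a code snippet in a review prompt for an LLM."""
    return (
        f"Act as a security reviewer. Look through the following code for {focus}. "
        "List each finding with the line it concerns and a short explanation.\n\n"
        f"```\n{code}\n```"
    )

# Example: a snippet with an obvious SQL-injection smell.
snippet = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
prompt = build_review_prompt(snippet)

# Placeholder call -- e.g. with the OpenAI client (assumed, not from the comment):
# response = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user", "content": prompt}],
# )
print(prompt)
```

Because the model's answers vary run to run, reviewing the same code more than once can surface different findings, which is exactly the non-determinism the comment leans on.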


Replies

cyanydeez · last Thursday at 9:28 PM

As long as a human has control over a system an AI can drive, it will be as exploitable as the human.

Sure, this is about as provable as P != NP, but anyone confident that a language model will somehow become a secure, deterministic system fundamentally lacks language comprehension skills.