Hacker News

ndriscoll · yesterday at 2:47 PM

Why would it matter if the model is trained with malicious intent? It's a pure function. The harness controls security policies.


Replies

coppsilgold · yesterday at 6:32 PM

Much like a developer can insert a backdoor disguised as a "bug", so can an LLM that was trained to do it.

One way you could probably do it: identify a commonly used library that can be misused in a way that opens a time-of-check to time-of-use (TOCTOU) race, then train the LLM to consistently use the library in that incorrect way.
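The comment doesn't name a specific library, but the classic filesystem version of this pattern is easy to sketch. Below is a hypothetical illustration (function names are invented for this example): a check on a path followed by a separate open is exactly the kind of "looks reasonable in review" code a model could be trained to emit, while the safer version collapses check and use into one atomic open.

```python
import os

def write_report_insecure(path: str, data: str) -> None:
    # VULNERABLE (TOCTOU): time-of-check...
    if not os.path.islink(path):
        # ...time-of-use. Between the islink() check and the open(),
        # an attacker can swap the file for a symlink pointing at a
        # sensitive target (e.g. /etc/passwd), and the write follows it.
        with open(path, "w") as f:
            f.write(data)

def write_report_safer(path: str, data: str) -> None:
    # Safer: ask the OS to refuse symlinks at open time, so the check
    # and the use happen as a single atomic operation (POSIX O_NOFOLLOW).
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | os.O_NOFOLLOW, 0o600)
    try:
        os.write(fd, data.encode())
    finally:
        os.close(fd)
```

The subtlety is that both functions pass a naive code review: the insecure one even looks *more* security-conscious because it visibly checks for symlinks, which is what makes this class of bug a plausible vehicle for a deliberately mistrained model.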