Hacker News

cozzyd | yesterday at 3:44 PM | 1 reply

I believe the issue is an exploit somehow being injected into the AI's training data such that the AI unwittingly reproduces it, and the human who requested the code doesn't even know.
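(As a purely hypothetical sketch of what that could look like, not something from the thread: the flaw hides in plain sight because the code appears to work. The function and table names below are invented.)

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Looks reasonable and returns correct results for normal input,
    # but interpolating user input into SQL opens an injection hole
    # (e.g. username = "x' OR '1'='1" returns every row).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, closing the hole.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```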


Replies

vlovich123 | yesterday at 3:46 PM

That’s a separate issue and specifically not what OP was describing. It’s also highly unlikely in practice unless you use a random LLM; the major LLM providers already have to deal with such poisoning attempts, and afaik they have decent techniques for mitigating them.