logoalt Hacker News

InsideOutSantalast Thursday at 7:50 PM4 repliesview on HN

While they have found some solvable issues (e.g. "the defense system fails to identify separate sub-commands when they are chained using a redirect operator"), the main issue is unsolvable. If you allow an LLM to edit your code and also give it access to untrusted data (like the Internet), you have a security problem.


Replies

derektanklast Thursday at 8:01 PM

A problem yes, but I think GP is correct in comparing the problem to that of human workers. The solution there has historically been RBAC and risk management. I don’t see any conceptual difference between a human and an automated system on this front

show 5 replies
iLoveOncalllast Thursday at 9:08 PM

> If you allow an LLM to edit your code and also give it access to untrusted data (like the Internet), you have a security problem.

You don't even need to give it access to Internet to have issues. The training data is untrusted.

It's a guarantee that bad actors are spreading compromised code to infect the training data of future models.

mistrial9last Thursday at 9:18 PM

no, you have a trust problem. Is the tool assisting, or is are the tools the architect, builder, manager, court and bank?

acessoproibidolast Thursday at 8:04 PM

>If you allow a human to edit your code and also give them access to untrusted data (like the Internet), you have a security problem.

Security shouldn't be viewed in absolutes (either you are secure or you aren') but more in degrees. Llms can be used securely just the same as everything else, nothing is ever perfectly secure

show 1 reply