Hacker News

chongli · yesterday at 2:56 PM

That still doesn't seem ideal. Run the LLM itself in a kernel-enforced sandbox, lest it find ways to exploit vulnerabilities in its own code.


Replies

valleyer · yesterday at 3:09 PM

The LLM inference itself doesn't "run code" per se (it's just doing tensor math), and besides, it runs on OpenAI's servers, not your machine.
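To make the "tensor math" point concrete, here is a toy sketch in pure Python (made-up weights and a hypothetical 2-token vocabulary; real models are vastly larger and use GPU kernels, but the principle is the same): producing a next-token distribution is just deterministic arithmetic over fixed weight matrices, with no execution of anything the model outputs.

```python
import math

def matmul(a, b):
    # Plain matrix multiply: rows of a against columns of b.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def softmax(row):
    # Numerically stable softmax over one row of logits.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical tiny model: 2-dim hidden state, 2-token vocabulary.
hidden = [[1.0, 0.5]]           # activations for the current position
w_out  = [[2.0, -1.0],          # fixed output projection to vocab logits
          [0.0,  1.0]]

logits = matmul(hidden, w_out)  # -> [[2.0, -0.5]]
probs = softmax(logits[0])      # next-token probabilities

print(probs)
```

Everything above is arithmetic on fixed numbers; the sampled token is only ever data to the inference process. The security question the parent raises applies to the *harness* that takes model output and runs it as code, not to the forward pass itself.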
