Couldn't that be solved by whitelisting specific commands?
Give it a try, and challenge yourself (or ChatGPT) to break it.
You'll quickly realize that this is not feasible.
Such a mechanism would need to be implemented at the `execve` level, because otherwise it would be too easy for the model to stuff the forbidden command inside a script or some other executable.
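To see why checking the command string is hopeless, here is a minimal sketch of a hypothetical allowlist checker (the `ALLOWED` set and `is_allowed` function are illustrative, not from any real sandbox). Any allowed interpreter becomes an escape hatch:

```python
# A naive allowlist that inspects only the command string (hypothetical).
ALLOWED = {"ls", "cat", "echo", "python3"}

def is_allowed(command: str) -> bool:
    # Checks only the first word of the command -- this is the flaw.
    return command.split()[0] in ALLOWED

# A forbidden command is blocked when invoked directly...
print(is_allowed("rm -rf /tmp/scratch"))  # blocked: "rm" is not allowlisted

# ...but the same payload sails through inside an allowed interpreter:
print(is_allowed("python3 -c 'import os; os.system(\"rm -rf /tmp/scratch\")'"))
```

The only robust place to enforce such a policy is where processes are actually spawned (`execve`), which is exactly why string-level filtering breaks down.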