This comment is mostly useless nitpicking, and I overall agree with your point. Now, the nitpicking:
I suspect it may merely be infeasible rather than strictly impossible. There has been work on automatically proving that a neural network satisfies certain properties (IIRC, e.g., certain kinds of robustness to certain classes of adversarial perturbations, for image classifiers).
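(For concreteness, the simplest version of that verification idea is interval bound propagation: push a box of possible inputs through the network and check that the true class's logit provably beats every rival. Here's a toy sketch of that, assuming a plain ReLU network with random weights; it's illustrative only, not any particular paper's method, and real verifiers like Reluplex/Marabou or α,β-CROWN give much tighter bounds.)

```python
import numpy as np

def affine_bounds(l, u, W, b):
    # Interval arithmetic through x -> W @ x + b:
    # track the box's center and radius.
    c, r = (l + u) / 2, (u - l) / 2
    c2 = W @ c + b
    r2 = np.abs(W) @ r
    return c2 - r2, c2 + r2

def relu_bounds(l, u):
    # ReLU is monotone, so bounds pass through elementwise.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def forward(x, layers):
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def certified_robust(x, eps, layers, true_class):
    """True => every input within L-inf distance eps of x provably
    gets argmax == true_class. Sound but incomplete: False only
    means "could not prove it", not "an adversarial input exists"."""
    l, u = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        l, u = affine_bounds(l, u, W, b)
        if i < len(layers) - 1:
            l, u = relu_bounds(l, u)
    rival_upper = np.delete(u, true_class).max()
    return bool(l[true_class] > rival_upper)

# Tiny random 2-layer network, just to exercise the code.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(3, 8)), rng.normal(size=3))]
x = rng.normal(size=4)
pred = int(np.argmax(forward(x, layers)))
print(certified_robust(x, 1e-3, layers, pred))  # likely True for tiny eps
```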
It might be possible in principle (though infeasible in practice) to have an effective LLM together with a proof that, e.g., it won't do anything irreversible when interacting with an operating system (given some formal specification of how that operating system behaves).
But, yeah, in practice I think you are correct.
It makes more sense to put the LLM+harness in an environment that ensures you can undo whatever it does if it messes things up than to try to make the LLM itself such that it provably never produces outputs that mess things up in a way that isn't easily revertible, even if the latter does turn out to be possible in principle.
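(The filesystem version of "make it undoable" is just a snapshot around whatever the agent does; a toy sketch below, where `run_agent` is a hypothetical stand-in for your harness. Real setups would use containers, VM snapshots, or copy-on-write filesystems instead of `copytree`, and of course no snapshot undoes network side effects, which is the hard part of "irreversible".)

```python
import shutil, tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def revertible(workdir: Path):
    """Snapshot workdir before the agent acts; restore the snapshot
    if anything inside the block raises."""
    backup = Path(tempfile.mkdtemp()) / "snapshot"
    shutil.copytree(workdir, backup)
    try:
        yield
    except Exception:
        # Roll the working directory back to the pre-agent state.
        shutil.rmtree(workdir)
        shutil.copytree(backup, workdir)
        raise
    finally:
        shutil.rmtree(backup.parent)

# Hypothetical usage, assuming some run_agent(task) in your harness:
# with revertible(Path("sandbox")):
#     run_agent("reorganize my files")
```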