Perhaps they're more functional. But hooks are configured in the same settings file, so in the absence of explicit confirmation I'm pretty skeptical that they represent a stronger security boundary. (Of course, this is a fundamental challenge with LLM agent security: if you're using a well-aligned model that doesn't want to be prompt injected, how do you go about auditing something like this?)