Hacker News

SpicyLemonZest · today at 12:22 AM

Who knows whether permissions would prevent this? Anthropic's documentation on permissions (https://code.claude.com/docs/en/permissions) does not describe how permissions are enforced; a slightly uncharitable reading of "How permissions interact with sandboxing" suggests that they are not really enforced and any prompt injection can circumvent them.


Replies

jatora · today at 1:04 AM

With hooks you can enforce permissions much more concretely.
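For example, Claude Code's documented PreToolUse hooks run an external command before each tool call and block the call when that command exits with code 2. A minimal sketch of such a hook script (the `should_block` helper and the `.ssh` check are illustrative assumptions, not anything from Anthropic's docs):

```python
#!/usr/bin/env python3
# Hypothetical PreToolUse hook for Claude Code: Claude Code pipes the
# pending tool call to this script as JSON on stdin; exiting with
# code 2 blocks the call regardless of what the model was told.
import json
import sys


def should_block(payload: dict) -> bool:
    """Return True if the Bash command touches a sensitive path (example policy)."""
    command = payload.get("tool_input", {}).get("command", "")
    return ".ssh" in command


if __name__ == "__main__":
    if should_block(json.load(sys.stdin)):
        print("Blocked: command touches ~/.ssh", file=sys.stderr)
        sys.exit(2)  # exit code 2 tells Claude Code to block the tool call
    sys.exit(0)
```

Because the hook is an out-of-band process rather than text in the model's context, a prompt injection can't talk its way past it.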
