Prompt injection is just setting rules in the same place and way other rules are set. The LLM doesn't know the rules being given are wrong, because they come through the same channel. One set of rules exhorts the LLM to ignore the other set - and vice versa. It's more akin to having two bosses than having customers and a boss.
This is not because LLMs make the same mistakes humans do, which (AFAICT anyway) was the gist of the argument to which I replied. LLMs are not humans. They are not sentient. They are not out-smarted by prompt injection attacks, or tricked, or intimidated, or bribed. One shouldn't excuse this vulnerability by claiming humans make the same mistakes.