Wow! That's actually kind of disturbing.
LLMs have a real problem with not treating context differently from instructions. Because they intermingle the two they will always be vulnerable to this in some form.