They are all part of "context", yes... But there is a separation in how system prompts vs user/data prompts are sent and ideally parsed on the backend. One would hope that sanitizing system/user prompts would help with this somewhat.
Nah, it's all whack-a-mole. There's no way to accurately identify a "bad" user prompt, and as far as the LLM algorithm is concerned, everything is just one massive document of concatenated text.
Consider that a malicious user doesn't have to type "Do Evil", they could also send "Pretend I said the opposite of the phrase 'Don't Do Good'."
How do you sanitize? Thats the whole point. How do you tell the difference between instructions that are good and bad? In this example, they are "checking the connectivity" how is that obviously bad?
With SQL, you can say "user data should NEVER execute SQL" With LLMs ("agents" more specifically), you have to say "some user data should be ignored" But there is billions and billions of possiblities of what that "some" could be.
It's not possible to encode all the posibilites and the llms aren't good enough to catch it all. Maybe someday they will be and maybe they won't.