It's a valid observation that the coding AI's user-prompting gate can be bypassed with the right prompt.
But is it really a security issue in Copilot when the user explicitly gave the AI permission and instructed it to curl a URL?
Regardless of the coding agent, I suspect that with enough prompting all of them will eventually behave the same way, whether the curl command targets a malicious or a legitimate site.