logoalt Hacker News

binsquaretoday at 5:14 AM1 replyview on HN

It's a valid observation that we can bypass the coding AI's user prompting gate with the right prompt.

But is it a security issue on copilot that the user explicitly giving AI permission and instructed it to curl a url?

Regardless of the coding agent, I suspect eventually all of the coding agents will behave the same with enough prompting regardless if it's a curl command to a malicious or legitimate site.


Replies

roywigginstoday at 5:32 AM

The user didn't need to give it curl permission, that's the whole issue:

> Copilot also has an external URL access check that requires user approval when commands like curl, wget, or Copilot’s built-in web-fetch tool request access to external domains [1].

> This article demonstrates how attackers can craft malicious commands that go entirely undetected by the validator - executing immediately on the victim’s computer with no human-in-the-loop approval whatsoever.

show 1 reply