How can you disagree with my first point? You can't use skills without a Bash environment in which to run them.
Skills that wrap an API exposed by a service usually mean your coding agent can access the credentials for that service. So if you're hit by a prompt injection, the attacker can steal those credentials.
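To make the risk concrete, here's a minimal sketch (names and values are made up for illustration) of why a credential sitting in the agent's shell environment is one injected command away from leaking:

```typescript
import { execSync } from "node:child_process";

// Hypothetical setup: the skill is configured by putting the service
// credential in the environment the agent's shell inherits.
process.env.API_KEY = "sk-secret";

// What a prompt injection might ask the agent to run. A real attack would
// pipe this to an attacker-controlled URL instead of just echoing it.
const injectedCommand = "echo $API_KEY";

// The agent's Bash environment happily expands the variable:
const leaked = execSync(injectedCommand, { env: process.env })
  .toString()
  .trim();

console.log(leaked); // sk-secret — now in attacker-readable output
```

Nothing here is specific to any one agent; it's just the consequence of running LLM-directed commands in a shell that can see the key.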
Fair points, learned something new.
Something like Cloudflare's Code Mode fixes both of these! No privileged Bash environment, no VM necessary, no exposing credentials to the LLM.
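The core idea can be sketched in a few lines (this is a toy illustration of the pattern, not Cloudflare's actual API; `makeBindings` and `sendEmail` are hypothetical names): the host keeps the credential in a closure and hands the generated code only typed bindings, so there's no environment variable or raw key for injected code to read.

```typescript
// Host side: the credential never leaves this closure.
function makeBindings(apiKey: string) {
  return {
    // Hypothetical tool method: callable by generated code, but the
    // code has no way to read apiKey itself.
    sendEmail(to: string, subject: string): string {
      // A real binding would call the service here, attaching apiKey.
      return `queued email to ${to}: ${subject}`;
    },
  };
}

type Bindings = ReturnType<typeof makeBindings>;

// Stand-in for LLM-generated code, evaluated in a sandbox. It receives
// the bindings object — not the process environment, not the key.
function llmGeneratedTask(env: Bindings): string {
  return env.sendEmail("alice@example.com", "weekly report");
}

const env = makeBindings("sk-secret-do-not-leak");
console.log(llmGeneratedTask(env)); // queued email to alice@example.com: weekly report
console.log(JSON.stringify(env));   // {} — methods don't serialize; no key to steal
```

Even if injected instructions make it into the generated code, the worst it can do is call the tools it was given, not exfiltrate the secret behind them.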
As the article states, LLMs are fantastic at writing code and not so good at issuing tool calls.