There are an infinite amount of ways to jailbreak AI models. I don't understand why every time a new method is published it makes the news. The data plane and the control plane in LLM inputs are one in the same, meaning you can mitigate jailbreaks but you cannot 100% prevent them currently. It's like blacklisting XSS payloads and expecting that to protect your site.