At the center of every security decision is the question: "Is the effort worth the reward?"
We design security measures based on the perceived effort a bad actor would need to defeat them, weighed against the harm done if they are defeated. We don't build Fort Knox for candy bars; we build it for gold bars.
These model advances change the equation: the effort and cost to defeat a measure drop by an order of magnitude or more.
Attacks nobody would have considered worth attempting are becoming practical. Meanwhile, the security measures we built for the 2000s and 2010s will not survive the AI models of 2026 and beyond. The investment required to resecure everything will be massive, and it won't come soon enough.