This para caught my eye:
>Frontier cyber models may push states and defense firms toward the opposite logic: security by obscurity, with closed software, closed tooling, closed firmware, and closed chips. If a model cannot train on the code and architecture of a target stack, it will usually have less context and less speed. That does not make systems safe, but it does raise the value of proprietary stacks all the way down to hardware.
Is this really true? Are there any experts who can weigh in on this?
Should we interpret this to mean that in the new world Windows is more resistant to attacks than, say, Linux?
I think there’s some credence to the concept that more context == faster iteration cycles. Source code can be one major source of context.
I think the “security through obscurity is no security” maxim was aimed at getting people not to rely on obscurity alone as a security mechanism, and that message largely succeeded. But now we are in a period of rapid capability acceleration (on both sides), where any advantage to one side will result in outsized gains, at least in the short term.
> Should we interpret this to mean that in the new world Windows is more resistant to attacks than, say, Linux?
LLMs can read assembly better than most, so probably not. But reality has never stopped people from trying to obfuscate.
In general: less data = less "intelligence".
And basically all the security bugs I've read about were found by looking at the source code.
But that doesn't mean Windows is more secure. Just imagine a scenario where someone steals the Windows source code and sells it to a rogue actor: that would make Windows even less secure, because no one (except Microsoft) would have had the chance to search that source for bugs first.