Relying on good will and people doing the right thing is clearly bullshit - any system which is insecure should be a legitimate target, the onus needs to be on those who own the systems to secure them, and they should be unable to disclaim liability if they do not.
However, the law needs to reflect that if people are to actually take the suggestions seriously.
>his front door used an old style of insecure lock, so I spent 4 hours picking it. It’s his fault for not having a more secure lock.
Are there practical ways, other than spending a couple billion dollars, to protect yourself from nation-state hacking groups? Especially if you're doing something like internet-connected medical devices? Honest question
> any system which is insecure should be a legitimate target, and the onus needs to be on those who own the systems to secure them, and be unable to disclaim liability if they do not
And what is the limit on that, because the only actually-secured system is one that is not connected to anything or accessed by anyone.
Look, I agree that people are shit and the only person you can trust is one you've killed yourself, but that's not really a workable solution.
Say I do everything right and still get compromised because an AWS 0-day lets attackers read the RAM of my virtual server. It’s my responsibility, but is it my fault?
There’s no such thing as a secure system that’s usable. You can asymptotically approach it given infinite money, in the same way you can approach physical security (“if it were really important to you, you would’ve cloned Fort Knox, so I guess you don’t care”) or even the speed of light. But even Fort Knox is vulnerable to a highly determined invading army.
Getting compromised doesn’t inherently mean you made mistakes.