Of course I grant the point above that software systems are not "ethereal": they have a physical form, often as electromagnetic states in matter.
I hope my contribution to the conversation was clear: the nature of the physical embodiment matters. Things that are easy and cheap to copy are harder to contain. And given what we know about human fallibility and the track record of security systems, containing intelligent systems is very difficult.
>> When software can relocate its source code and data in seconds or less, containment strategies begin to look increasingly bleak.
> Hard? sure. Impossible? No
I don't need to claim that containment is impossible, just really hard. I'm interested in planning across a range of scenarios. Given human foibles, we should plan for scenarios where AIs get out of the box and can spread rapidly and widely.
See also: "Guidelines for Artificial Intelligence Containment" at https://arxiv.org/pdf/1707.08476
Imagine two possible AI containment failure scenarios. In the first, suppose it is feasible to "shut everything down" by disconnecting power. In the second, suppose humans have to resort to a combination of kinetic force and cyberattacks. In both cases, a likely next step would resemble the global health campaign to eradicate smallpox: it would take tremendous effort and cost to bring systems back online while proving that no copies of the rogue AI remain.
Would such a coordinated response by humanity be possible? Possible, yes. Likely? That is hard to estimate. I, for one, am not optimistic about international cooperation generally, much less for efforts of this complexity.