It could have been this:
“The reason is that, in other fields [than software], people have to deal with the perversity of matter. [When] you are designing circuits or cars or chemicals, you have to face the fact that these physical substances will do what they do, not what they are supposed to do. We in software don't have that problem, and that makes it tremendously easier. We are designing a collection of idealized mathematical parts which have definitions. They do exactly what they are defined to do.
And so there are many problems we [programmers] don't have. For instance, if we put an ‘if’ statement inside of a ‘while’ statement, we don't have to worry about whether the ‘if’ statement can get enough power to run at the speed it's going to run. We don't have to worry about whether it will run at a speed that generates radio frequency interference and induces wrong values in some other parts of the data. We don't have to worry about whether it will loop at a speed that causes a resonance and eventually the ‘if’ statement will vibrate against the ‘while’ statement and one of them will crack. We don't have to worry that chemicals in the environment will get into the boundary between the ‘if’ statement and the ‘while’ statement and corrode them, and cause a bad connection. We don't have to worry that other chemicals will get on them and cause a short-circuit. We don't have to worry about whether the heat can be dissipated from this ‘if’ statement through the surrounding ‘while’ statement. We don't have to worry about whether the ‘while’ statement would cause so much voltage drop that the ‘if’ statement won't function correctly. When you look at the value of a variable you don't have to worry about whether you've referenced that variable so many times that you exceed the fan-out limit. You don't have to worry about how much capacitance there is in a certain variable and how much time it will take to store the value in it.
All these things are defined away; the system is defined to function in a certain way, and it always does. The physical computer might malfunction, but that's not the program's fault. So, because of all these problems we don't have to deal with, our field is tremendously easier.”
— Richard Stallman, 2001: <https://www.gnu.org/philosophy/stallman-mec-india.html#conf9>
He makes a valid distinction, in a very specific sense: as long as we understand a program correctly, we understand its behavior completely [0]. The same cannot be said of spherical cows (which, by the way, can themselves be modeled by computers, so programs inherit the problems of the model, in some sense, and all programs model something).
However, that "as long as" is doing quite a bit of work. In practice, we rarely have a perfect grasp of a real-world program: there is divergence between what we think a program does and what it actually does, there are gaps in our knowledge, and so on. Naturally, this problem also afflicts mathematical approximations of physical systems.
[0] And even this is not entirely true. Consider a concurrent program: race conditions can produce all sorts of unpredictable results, and perfect knowledge of the program text will not tell you which one you will get.
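To make the race-condition point concrete, here is a minimal sketch in Python (the names `counter` and `increment` are mine, chosen for illustration). Two threads perform a non-atomic read-modify-write on a shared variable; even with the full program text in front of you, the final value depends on how the scheduler interleaves the threads.

```python
import threading

counter = 0

def increment(n):
    """Increment the shared counter n times, non-atomically."""
    global counter
    for _ in range(n):
        # Read-modify-write is not atomic: another thread can run
        # between the read of `counter` and the write back to it,
        # and its update is then silently overwritten.
        tmp = counter
        counter = tmp + 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# If increments were atomic, this would always print 400000.
# With the race, the total can come up short, and the exact
# value varies from run to run.
print(counter)
```

Note that this is still a far milder kind of unpredictability than a corroded connection: the set of possible outcomes is fixed by the program's semantics, only the choice among them is left open.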
Rowhammer, cosmic-ray bit flips, hardware faults, or plain compiler bugs also come to mind.