It really puts our current definition of "latency" into a painful perspective.
We have a machine running on 1970s hardware, a light-day away, that arguably maintains a more reliable command-response loop relative to its constraints than many modern microservices sitting in the same availability zone.
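For scale, here's the napkin math behind "a light-day away" (a quick sketch; the distance is my own ballpark figure, not something from official telemetry):

```python
# Rough one-way light-time to Voyager 1.
# VOYAGER_1_DISTANCE_KM is an approximation (~24.6 billion km), not an exact figure.
SPEED_OF_LIGHT_KM_S = 299_792.458   # km per second
VOYAGER_1_DISTANCE_KM = 24.6e9      # ballpark distance from Earth

one_way_seconds = VOYAGER_1_DISTANCE_KM / SPEED_OF_LIGHT_KM_S
print(f"One-way light time: {one_way_seconds / 3600:.1f} hours")   # ~22.8 hours
print(f"Round-trip 'ping':  {2 * one_way_seconds / 3600:.1f} hours")  # ~45.6 hours
```

So a single command-response cycle takes nearly two days, and the loop still closes.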
It’s a testament to an era of engineering when "performance" meant physics and strict resource budgeting, not just throwing more vCPUs at an unoptimized Python loop. If Voyager had been built with today's "move fast and break things" mindset, it would have bricked itself at the heliopause, pending a firmware update that required a stronger handshake.
You're breezing past the labor cost quite deftly. I'm reasonably sure that developing the Voyager probes required a few more people and hours than your average microservice.
I am certain that if I had the estimated $4,000,000,000 it took to get Voyager 1 launched, I could get some microservices to function under any scenario.
The reality is, it's only worth building to 99.9999% uptime for very specific missions... There are no take-backsies in space. Your company will survive a microservice outage.
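To make those six nines concrete, here's a quick sketch of what each extra nine actually buys you in downtime budget (standard availability arithmetic, nothing Voyager-specific):

```python
# Allowed downtime per year for a given availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for nines in ["99.9", "99.99", "99.999", "99.9999"]:
    availability = float(nines) / 100
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{nines}% uptime -> {downtime_min:.2f} minutes of downtime per year")
```

Six nines works out to roughly half a minute of downtime per year. Paying for that when three nines gives you under nine hours is exactly the tradeoff most companies should decline.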
Spacecraft require more nines of reliability than microservices, and their engineering processes are very different, even today. We still build new spacecraft; we just don't launch them into interstellar space.
I mean entirely different use cases, right?
Borking a space mission versus borking someone's breakfast status update: those two risks can be optimized for very differently.
It’s a testament to product planning. It has nothing to do with engineering.
If it’s Photoshop, formally verified so it can’t crash, but it has only 5 tools, I would be pissed.
If it’s a remote monitoring station with a cool GUI that crashes daily, I would be pissed.
Know the product that you are building.