Tandem was interesting. They had a lot of good ideas, many unusual today.
* Databases reside on raw disks. There is no file system underneath the databases. If you want a flat file, it has to be in the database. Why? Because a database can be built with good reliability properties and can be made distributed and redundant.
* Processes can be moved from one machine to another, much like with the Xen hypervisor, which was a high point in that sort of thing.
* Hardware must have built-in fault detection. Everything had ECC, parity, or duplication. It's OK to fail, but not to make mistakes. IBM mainframes still have this, but few microprocessors do, even though the necessary transistors would not be a high cost today. (It's still hard to get ECC RAM on the desktop, even.)
* Most things are transactions. All persistent state is in the database. Think REST with CGI programs, but more efficient. That's what makes this work. A transaction either runs to successful completion or fails and has no lasting effect; database transactions roll back on failures (see the sketch after this list).
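A minimal sketch of that transaction-or-nothing behavior, using Python's built-in sqlite3 as a stand-in for a real transactional database (the accounts table and transfer logic are made up for illustration, not anything Tandem-specific):

```python
# Toy illustration: a transaction either completes fully or leaves no trace.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts; either both updates land or neither does."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE name = ?",
                                      (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # failed transaction has no lasting effect

transfer(conn, "alice", "bob", 70)   # succeeds: alice 30, bob 70
transfer(conn, "alice", "bob", 500)  # fails: balances unchanged
print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
```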
The Tandem concept lived on through several changes of ownership and hardware. Unfortunately, it ended up at HP in the Itanium era, where it seems to have died off.
It's a good architecture. The back ends of banks still look much like that, because that's where the money is. But not many programmers think that way.
> Unfortunately, it ended up at HP in the Itanium era, where it seems to have died off.
My dad continues to maintain NonStop systems under the umbrella of DXC. (Which is a spinoff of HP? Or something? Idk the details.) He worked at Tandem back in the day, and has stayed with it ever since. I think he'd love to retire, but he never ends up as part of the layoffs that get sweet severance packages, because he's literally irreplaceable.
The whole stack got moved to run on top of Linux, IIRC, with all these features being emulated. It still exists though, for the handful of customers that use it.
Yes, IBM mainframes employ all of this, or have analogous concepts, which may be one of many reasons they haven't disappeared. A lot of it was built up over time, whereas Tandem started from the HA specification, so the concepts and marketing are clearer.
Stratus was another interesting HA vendor, particularly the earlier VOS systems as their modern systems are a bit more pedestrian. http://www.teamfoster.com/stratus-computer
Not to take away from your main point: the only reason it is hard to get ECC in a desktop is that it is used for customer segmentation, not because it is technically hard or because it would drive up the actual cost of the hardware.
Speaking of Tandem databases, HP had released the SQL engine behind SQL/MX[0] as open source (Trafodion), running in front of Hadoop, to the Apache Software Foundation, but it appears they have shut down the project[1].
[0]: https://thenewstack.io/sql-hadoop-database-trafodion-bridges...
Oracle has had raw disk support for a long time. I'm pretty sure it's the last 'mainstream' database that does.
> Databases reside on raw disks. There is no file system underneath the databases.
The terminology of "filesystem" here is confusing. The original database system was/is called Enscribe, and was/is similar to VMS Record Management Services - it had different types of structured files, in addition to unstructured unix/dos/windows stream-of-bytes "flat" files. Around 1987 Tandem added NonStop SQL files. They're accessed through a path of the form Volume.SubVolume.Filename, but depending on the file type, there are different things you can do with them.
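To make the naming concrete, here is a tiny hypothetical Python helper (not a Guardian API, and the example volume/subvolume names are invented) that just splits such a path into its three parts:

```python
# Hypothetical helper to illustrate the three-level Volume.SubVolume.Filename scheme.
from typing import NamedTuple

class GuardianName(NamedTuple):
    volume: str
    subvolume: str
    filename: str

def parse_guardian_name(path: str) -> GuardianName:
    parts = path.split(".")
    if len(parts) != 3:
        raise ValueError(f"expected Volume.SubVolume.Filename, got {path!r}")
    return GuardianName(*parts)

print(parse_guardian_name("$DATA01.PAYROLL.EMPFILE"))
```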
> If you want a flat file, it has to be in the database.
You could create unstructured files as well.
> Processes can be moved from one machine to another
Critical system processes are process-pairs, where a Primary process does the work, but sends checkpoint messages to a Backup process on another processor. If the Primary process fails, the Backup process transparently takes over and becomes the Primary. Any messages to the process-pair are automatically re-routed.
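A toy Python simulation of that idea, just to show its shape (names and structure are mine, and this is not how NonStop actually implements process-pairs):

```python
# Toy process-pair simulation: the primary applies each operation, then
# checkpoints it to the backup before replying. If the primary fails, the
# backup takes over with the last checkpointed state.

class Replica:
    def __init__(self, name):
        self.name = name
        self.state = {}          # application state, e.g. balances
        self.is_primary = False

    def apply(self, key, value):
        self.state[key] = value

class ProcessPair:
    def __init__(self):
        self.primary = Replica("cpu-0")
        self.backup = Replica("cpu-1")
        self.primary.is_primary = True

    def handle(self, key, value):
        # Primary does the work...
        self.primary.apply(key, value)
        # ...then checkpoints the change to the backup (if one exists).
        if self.backup is not None:
            self.backup.apply(key, value)

    def fail_primary(self):
        # Backup transparently takes over; callers keep sending messages
        # to the pair and never notice the switch.
        self.primary, self.backup = self.backup, None
        self.primary.is_primary = True

pair = ProcessPair()
pair.handle("alice", 100)
pair.handle("bob", 250)
pair.fail_primary()
print(pair.primary.name, pair.primary.state)  # cpu-1 {'alice': 100, 'bob': 250}
```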
> Unfortunately, it ended up at HP in the Itanium era, where it seems to have died off.
It did get ported to Xeon processors around 10 years ago, and is still around. Unlike OpenVMS, HPE still works on it, but I don't think there is even a link to it on the HPE website.* It still runs on (standard?) HPE x86 servers connected to HPE servers running Linux to provide storage/networking/etc. Apparently it is also supported running under VMware of some kind.
* Something something Greenlake?