> Because the attempts at segmented or object-oriented address spaces failed miserably.
> That is false. In the Intel World, we first had the iAPX 432, which was an object-capability design. To say it failed miserably is overselling its success by a good margin.
I would further posit that segmented and object-oriented address spaces have failed, and will continue to fail, for as long as we maintain a separation into two distinct classes of storage: ephemeral DRAM on one side, and persistent backing store (disks, flash storage, etc.) on the other. The alternative would be a single, unified concept of nearly infinite (at least logically, if not physically), always-on memory in which everything is, essentially, an object.
Intel's Optane has given us a brief glimpse into what such a future could look like but, alas, that particular version of the future has not panned out.
A linear address space makes perfect sense for size-constrained DRAM, but little to no sense for the backing store, where a file system is instead entrusted with implementing an object-like address space (files and directories are the objects; the file system is the address space).
Once a new, successful memory technology emerges, we might see a resurgence of the segmented or object-oriented address space models, but until then, it will remain a pipe dream.
I think seamless persistent storage is also bound to fail. There are significant differences in how we treat ephemeral objects in programs versus persistent storage. Ephemeral objects are low value: if something goes wrong, we can just restart and recover from storage. Persistent state is often high value: we make significant efforts to guarantee its consistency and durability even in the presence of crashes.
I shudder to think about the impact of concurrent data structures fsync'ing on every write because the programmer can't reason about whether the data is in memory, where a handful of atomic fences/barriers is enough to reason about the correctness of the operations, or on disk, where those primitives simply do not exist.
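To make the asymmetry concrete, here is a minimal sketch (in Python, with hypothetical helper names) contrasting the two paths: the in-memory append is a pointer update that ordinary synchronization primitives can order, while the durable append must pay for an explicit `fsync` on every write that has to survive a crash.

```python
import os

def durable_append(path: str, data: bytes) -> None:
    # Persistent path: a write that must survive a crash needs an
    # explicit fsync; there is no disk-side analogue of a cheap
    # memory fence. (durable_append is a hypothetical helper name.)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # force the data down to stable storage before returning
    finally:
        os.close(fd)

def ephemeral_append(buf: list, item) -> None:
    # In-memory path: an append is just a pointer/length update;
    # cross-thread ordering would need only a fence or a lock.
    buf.append(item)
```

A "transparent" persistence layer would have to guess which of these two costs the programmer intended for every store.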
Linear regions also make a ton of sense for disk, and not just for performance: WAL-based systems are the cornerstone of many databases and require the ability to reserve linear regions.
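A minimal sketch of why that linearity matters, assuming a hypothetical `(length, crc32)` record header: records are appended at strictly increasing offsets in one contiguous region, and recovery replays from offset zero until the first torn or corrupt record. Both properties fall directly out of the linear layout.

```python
import os
import struct
import zlib

HEADER = struct.Struct("<II")  # (length, crc32) per record; hypothetical format

def wal_append(fd: int, payload: bytes) -> None:
    # Append at a strictly increasing offset in one linear region;
    # this is what gives the WAL its simple crash semantics.
    rec = HEADER.pack(len(payload), zlib.crc32(payload)) + payload
    os.write(fd, rec)
    os.fsync(fd)  # the record is durable before the caller proceeds

def wal_replay(path: str) -> list[bytes]:
    # Scan from offset 0; stop at the first torn or corrupt record.
    with open(path, "rb") as f:
        data = f.read()
    records, off = [], 0
    while off + HEADER.size <= len(data):
        length, crc = HEADER.unpack_from(data, off)
        payload = data[off + HEADER.size : off + HEADER.size + length]
        if len(payload) < length or zlib.crc32(payload) != crc:
            break  # partial write at the tail; everything before it is valid
        records.append(payload)
        off += HEADER.size + length
    return records
```

An object-oriented address space that scatters writes across objects would have to reinvent exactly this kind of reserved linear region to get the same recovery guarantee.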
I don't see how any amount of memory technology can overcome the physical realities of locality. The closer you want the data to be to your processor, the less space you'll have to fit it. So there will always be a hierarchy where a smaller amount of data can have less latency, and there will always be an advantage to cramming as much data as you can at the top of the hierarchy.