But we don't have a linear address space, unless you're working with a tiny MCU. For the last 30 years or so, every mainstream processor has had a virtual address space: we can mix and match pages however we want, insulate processes from one another, put sentinel pages at the ends of large structures to generate a fault, and so on. We happen to structure process heaps as linear memory, but that's not a hard requirement, even on current hardware.
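To make the sentinel-page trick concrete, here's roughly what it can look like from userspace on a POSIX system (just a sketch; the sizes and the bare mmap/mprotect calls stand in for whatever your allocator actually does):

    /* Sketch: a sentinel (guard) page at the end of a buffer.
     * Touching the guard page raises SIGSEGV instead of silently
     * corrupting whatever happens to come next in a flat heap. */
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages = 4;                  /* usable part of the buffer */
        size_t total = (data_pages + 1) * page; /* +1 page for the sentinel  */

        unsigned char *buf = mmap(NULL, total, PROT_READ | PROT_WRITE,
                                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* Revoke all access to the last page: any overrun into it faults. */
        if (mprotect(buf + data_pages * page, page, PROT_NONE) != 0) {
            perror("mprotect"); return 1;
        }

        buf[data_pages * page - 1] = 42;   /* fine: last usable byte   */
        /* buf[data_pages * page] = 42; */ /* would trap: sentinel page */

        munmap(buf, total);
        return 0;
    }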
What we lack is the granularity that something like the iAPX 432 envisioned. Maybe some hardware breakthrough would make that kind of granularity cheap enough (the way it did for signed pointers, for instance), so that smart compilers and OSes could offer even more protection without the expense of switching to kernel mode too often. I wonder what research exists in this field.
It's entirely possible to implement segments on top of paging. What you need to do is add kernel abstractions for call gates that change segment visibility, and write some infrastructure to manage unions-of-a-bunch-of-little-regions. I haven't implemented this myself, but a friend did on a project we were working on together, and as a mechanism it works perfectly well.
Getting userspace to do the right thing without upending everything is what killed that project.
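For anyone curious, here's a minimal userspace sketch of what the unions-of-little-regions bookkeeping plus a visibility-flipping gate might look like, with mprotect standing in for the kernel-side mechanism (the names and structure are made up for illustration, not how the actual project did it):

    /* Sketch only: a "segment" is a union of page-aligned regions, and a
     * "call gate" flips the regions' visibility around a function call. */
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct region  { void *base; size_t len; };                /* one little piece */
    struct segment { struct region *regions; size_t count; };  /* their union      */

    static int set_visibility(const struct segment *seg, int prot) {
        for (size_t i = 0; i < seg->count; i++)
            if (mprotect(seg->regions[i].base, seg->regions[i].len, prot) != 0)
                return -1;
        return 0;
    }

    /* "Call gate": run fn with the segment mapped, hide it again afterwards. */
    static int gate_call(const struct segment *seg, void (*fn)(void *), void *arg) {
        if (set_visibility(seg, PROT_READ | PROT_WRITE) != 0) return -1;
        fn(arg);
        return set_visibility(seg, PROT_NONE);
    }

    static void touch(void *arg) { *(unsigned char *)arg = 1; }

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        /* Two little page-aligned regions forming one segment
         * (error checks on mmap omitted for brevity). */
        void *a = mmap(NULL, page, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        void *b = mmap(NULL, page, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        struct region parts[] = { { a, page }, { b, page } };
        struct segment seg = { parts, 2 };
        return gate_call(&seg, touch, b);  /* b is writable only inside the gate */
    }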
> But we don't have a linear address space, unless you're working with a tiny MCU.
We actually do, albeit only briefly: on a cold start, while the MMU is not yet active, no address translation is performed and the entire memory space is treated as a single linear, contiguous block (even if there are physical holes in it).
When the system is powered on, the CPU runs in privileged mode so that the operating system kernel can set up the MMU and activate it, which happens early in the boot sequence. Until then, virtual memory is not available.
This feels like a pointless form of pedantry.
Okay, so we went from linear address spaces to partitioned/disaggregated linear address spaces. This is hardly the victory you claim it is, because page sizes are increasing and thus the minimum addressable block of memory keeps growing. Within a page everything is linear as usual.
Linear address spaces are everywhere because they are extremely cost-effective and fast to implement in hardware. You can do prefix matching to check whether an address points at a specific hardware device, and you can use multiplexers to address memory. Addresses can easily be encoded inside a single std_ulogic_vector, and it's straightforward to build a Network-on-Chip architecture for your on-chip interconnect. It also makes caching easier, since you can translate the address into a cache entry.
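Spelled out in C rather than hardware, the prefix-matching part is just a mask and a compare per device (the memory map below is invented purely for illustration):

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct decode_entry {
        uint32_t    base;    /* aligned to the window size                  */
        uint32_t    mask;    /* 1s over the prefix bits, 0s over the offset */
        const char *device;
    };

    /* Hypothetical memory map, purely for illustration. */
    static const struct decode_entry map[] = {
        { 0x00000000, 0xF0000000, "RAM"   },  /* 256 MiB window */
        { 0x40000000, 0xFFFF0000, "UART0" },  /* 64 KiB window  */
        { 0x40010000, 0xFFFF0000, "TIMER" },  /* 64 KiB window  */
    };

    static const char *decode(uint32_t addr) {
        for (size_t i = 0; i < sizeof map / sizeof map[0]; i++)
            if ((addr & map[i].mask) == map[i].base)  /* prefix match */
                return map[i].device;
        return "bus error";
    }

    int main(void) {
        printf("%08x -> %s\n", 0x40010004u, decode(0x40010004u));  /* TIMER */
        printf("%08x -> %s\n", 0x08000000u, decode(0x08000000u));  /* RAM   */
        return 0;
    }

In hardware each row of that table is just a handful of comparators working in parallel, which is exactly why the linear scheme is so cheap.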
When you add a scan chain to your flip-flops, you're ordering them and thereby building an implicit linear address space.
There's also the fact that databases with auto-incrementing integers as primary keys use a logical linear address space, so the most obvious way to get a non-linear address space would be to use randomly generated IDs instead. It seems like a huge amount of effort would be needed to get away from the idea of linear address spaces.