Awesome article, Ken, I feel spoiled! It's always nice to see your posts hit HN!
Out of curiosity: is there anything you feel they could have done better in hindsight? Useless instructions, inefficient ones, or "missing" ones? Either down at the transistor level, or in high-level design/philosophy? The segment/offset mechanism, which creates 20-bit addresses out of two 16-bit registers with thousands of overlaps, sure comes to mind - though a flat model is asking too much of a 1979 design and its transistor limitations, I guess.
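To spell out the overlap I mean, here's a little C sketch of the real-mode address calculation (nothing hypothetical here beyond the example values):

    #include <stdint.h>
    #include <stdio.h>

    /* 8086 real mode: physical address = segment * 16 + offset, 20 bits total */
    static uint32_t phys(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void) {
        printf("%05X\n", (unsigned)phys(0x1234, 0x0005)); /* prints 12345 */
        printf("%05X\n", (unsigned)phys(0x1000, 0x2345)); /* prints 12345 again */
        return 0;
    }

Since the segment is only shifted four bits before the add, up to 4096 different segment:offset pairs can name the same physical byte.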
Thanks!
That's an interesting question. Keep in mind that the 8086 was built as a stopgap processor to sell until Intel's iAPX 432 "micro-mainframe" was completed. Moreover, the 8086 was designed to be assembly-language compatible with the 8080 (through translation software) so it could take advantage of existing software. It was also designed to support more memory while remaining compatible with the 8080's 16-bit addressing.
Given those constraints, the design of the 8086 makes sense. In hindsight, though, considering that the x86 architecture has lasted for decades, there are a lot of things that could have been done differently. For example, the instruction encoding is a mess and didn't leave an easy path for extending the instruction set. Trapping on invalid instructions would have been a good idea. The BCD instructions are not useful nowadays. Treating a register as two overlapping 8-bit registers (AL, AH) makes register renaming difficult in an out-of-order execution system. A flat address space would have been much nicer than segmented memory, as you mention. The separate I/O address space, with I/O operations distinct from memory operations, was inherited from the Datapoint 2200; memory-mapped I/O would have been better. Overall, a more RISC-like architecture would have been good.
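To make the register-overlap point concrete, here's a rough C model of AX and its aliased halves (just an illustrative sketch; it assumes a little-endian host, like x86 itself):

    #include <stdint.h>
    #include <stdio.h>

    /* Rough model of the AX register: AL and AH alias its two bytes. */
    typedef union {
        uint16_t ax;
        struct { uint8_t al, ah; } b;  /* little-endian layout assumed */
    } reg_ax;

    int main(void) {
        reg_ax r;
        r.ax = 0x1234;
        r.b.al = 0xFF;                    /* write only the low byte */
        printf("%04X\n", (unsigned)r.ax); /* prints 12FF */
        return 0;
    }

An out-of-order core that renames AL to a fresh physical register then has to merge that byte back with the old AH whenever AX is read, an extra dependency that a cleanly split register file wouldn't have.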
I can't really fault the 8086 designers for their decisions, since they made sense at the time. But if you could go back in a time machine, you could certainly give them a lot of advice!