
tom_ · yesterday at 9:13 PM · 2 replies

The 286 looks like it ought to be usefully quicker in general? Motorola did a good job on the programming model, but you can tell that the 68000 is from the 1970s. Nearly all 68000 instructions take 8+ cycles, and addressing modes can cost extra. On the 286, by contrast, pretty much everything is 2-4 cycles, or maybe 5-7 if there's a memory operand. (The manual seems to imply that every addressing mode has the same cost, which feels a bit surprising to me, but maybe it's true.) The 286's ordinary call/ret round trip is also shorter, as are conditional branches and stack push/pop.
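As a rough illustration of that comparison, here is a sketch that tallies ballpark per-instruction cycle counts from the two vendors' published timing tables for a tiny register/memory/call sequence. The numbers are the datasheet figures (best case, no wait states), so treat them as illustrative rather than exact:

```python
# Ballpark datasheet cycle counts (best case, 0 wait states).
# 68000: reg-reg ops are 4 cycles; a memory operand adds fetch time.
cycles_68000 = {
    "move.w d0,d1": 4,     # reg-to-reg move
    "add.w d0,d1": 4,      # reg-to-reg add
    "move.w (a0),d0": 8,   # memory source operand
    "bsr + rts": 18 + 16,  # call/return round trip
}
# 286: reg-reg ops are 2 cycles; memory operand costs ~5.
cycles_286 = {
    "mov ax,bx": 2,
    "add ax,bx": 2,
    "mov ax,[si]": 5,
    "call + ret (near)": 7 + 11,  # call/return round trip
}

def total(table):
    """Sum the datasheet cycles for the whole sequence."""
    return sum(table.values())

print("68000 total:", total(cycles_68000))  # 50
print("286 total:  ", total(cycles_286))    # 27
```

Even this toy sequence shows the 286 roughly halving the datasheet cycle count, with the call/ret round trip dominating the gap.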


Replies

adrian_b · today at 10:37 AM

The timings given in the 286 datasheet are very optimistic and are almost never achieved in a real program.

They assume that instructions have been prefetched concurrently without ever causing a stall, and that memory accesses are performed with 0 wait states.

In reality, instruction fetching was frequently a bottleneck, and building a 0-wait-state memory for the 80286 was much more difficult than for the MC68000 or MC68010.

With the DRAM available at the time, both the 80286 and the 80386 would normally have needed a cache memory. Later, after the launch of the 80386DX, caches became common on 386DX motherboards, but I have never seen an 80286 motherboard with cache memory.

Such boards might have existed earlier, when the 286 was the high end, but by the time it coexisted with the 386 the 286 had become the cheap option, so its motherboards never had cache memory. Memory accesses therefore always had wait states, which increased the probability of instruction-fetch bottlenecks and resulted in significantly more clock cycles per instruction than the datasheet promises.
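The effect of wait states on the datasheet numbers can be sketched with a deliberately crude model (my own simplification, not a cycle-accurate simulation): every memory access, including the instruction fetch itself, pays the wait-state penalty on top of the documented cycle count.

```python
def effective_cycles(base_cycles, mem_accesses, wait_states):
    """Crude model: each memory access (instruction fetch included)
    adds the wait-state penalty to the datasheet cycle count."""
    return base_cycles + mem_accesses * wait_states

# Datasheet: 2 cycles for a reg-reg op; its own fetch is 1 memory access.
# With 1 wait state per access, that becomes 3 cycles instead of 2.
print(effective_cycles(2, 1, 1))

# A memory-operand instruction: fetch + operand read = 2 accesses,
# so a documented 5-cycle instruction becomes 7.
print(effective_cycles(5, 2, 1))
```

Even one wait state inflates short instructions by 40-50%, which is why the 286's impressive 2-4 cycle figures were so hard to see in practice.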

raphlinus · yesterday at 9:28 PM

My reading is that there aren't really a lot of addressing modes on the 286, as there are on the 68000 and friends; rather, every address is generated by summing an optional 8- or 16-bit displacement and zero to two registers. There are no modes where you do one memory fetch and then use the result as the base address for a second fetch, which is arguably a vaguely RISC-flavored choice. There is a one-cycle penalty for summing all 3 elements ("based indexed mode").
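That address-generation scheme can be sketched as a simple sum (a hypothetical model of the description above, not Intel's microcode): an optional displacement plus zero to two registers, with a one-cycle penalty only when all three elements are combined.

```python
def ea_286(disp=0, base=None, index=None):
    """Model of 286 effective-address generation as described above:
    sum an optional displacement and zero to two registers.
    Returns (address, extra_cycles); the only penalty modeled is
    +1 cycle when displacement, base, and index are all combined
    ("based indexed mode")."""
    regs = [r for r in (base, index) if r is not None]
    addr = (disp + sum(regs)) & 0xFFFF  # 16-bit address wrap
    extra = 1 if disp and len(regs) == 2 else 0
    return addr, extra

print(ea_286(disp=0x10, base=0x2000))           # (0x2010, 0)
print(ea_286(disp=0x10, base=0x2000, index=4))  # (0x2014, 1) - based indexed
```

Note there is no way in this scheme to express "fetch a pointer from memory, then dereference it" in a single instruction, unlike the 68000's memory-indirect-flavored modes.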
