Sort of - there's some distinction between "multi-chip" and "multi-chiplet". To some degree, though, it's true in the sense of: "A rose by any other name would smell as sweet".
The explanation by 'kurthr provides some insight as to why we don't use the term MCM for them today: the term would imply that they're not integrated as tightly as they actually are.
The best examples of MCMs are probably Intel's Pentium D and Core 2 Quad. In these older "MCM" designs, the multiple chips were generally fully working chips in their own right - each die had its own last-level cache (LLC; the L2, in those parts). They also happened to be manufactured on the same lithography node. When a core on Die A needed data that a core on Die B was working on, Die B had to send the data off the CPU package entirely, down the motherboard's Front Side Bus (FSB), into system RAM, and then Die A had to retrieve it from RAM.
IBM POWER4 and POWER5 MCMs did share L3 cache, though.
So parent was 'wrong' that "chiplets" were ever called MCMs. But right that "chips designed with multiple chiplet-looking-things" used to be called "MCMs".
Today's 'chiplets' term implies that the pieces aren't fully functioning by themselves; they're more like individual "organs". Functionality like I/O, memory controllers, and LLC is split off and manufactured on separate wafers/nodes. In the case of memory controllers that might be a bit confusing, because back in the days of MCMs the memory controller wasn't in the same silicon at all - it was a separate chip (the northbridge) on the motherboard entirely - but I digress.
Also, MCMs lacked the kind of high-bandwidth, low-latency fabric that would let the CPUs communicate more directly with each other. For the Pentiums, the interconnect was an organic substrate (the usual green PCB material) with copper traces routed between the dies. For the IBMs, it was an advanced glass-ceramic substrate, which had much higher bandwidth than PCB traces but still required a lot of space to route all the copper traces (so latency took a hit) and generated a lot of heat. Today we use silicon for those interconnects, which gives excellent bandwidth, latency, and thermal performance.
> So parent was 'wrong' that "chiplets" were ever called MCMs. But right that "chips designed with multiple chiplet-looking-things" used to be called "MCMs".
No, chiplets were called MCMs. IBM and others, as you noted, had chip(lets) in MCMs that were not "fully-functioning" by themselves.
> Also, MCMs lacked the kind of high-bandwidth, low-latency fabric that would let the CPUs communicate more directly with each other. For the Pentiums, the interconnect was an organic substrate (the usual green PCB material) with copper traces routed between the dies. For the IBMs, it was an advanced glass-ceramic substrate, which had much higher bandwidth than PCB traces but still required a lot of space to route all the copper traces (so latency took a hit) and generated a lot of heat. Today we use silicon for those interconnects, which gives excellent bandwidth, latency, and thermal performance.
This all just smells like revisionist history to make the name consistent with previous naming.
IBM's MCMs had incredibly high-bandwidth, low-latency interconnects. Core<->L3 is much more important and latency-critical than core+cache cluster <-> memory or another core+cache cluster, for example. And IBM and others had silicon interposers, TSVs, and other very advanced packaging and interconnection technology decades ago too, e.g.,
https://indico.cern.ch/event/209454/contributions/415011/att...
The real story is much simpler. MCM did not have a great name, particularly in the consumer space, as CPUs, memory controllers, and other components consolidated onto one die, which was (at the time) the superior solution. Then the reticle limit, the yield equations, etc., conspired to turn the tables, and it has more recently come to be that multi-chip is superior (for some things), so some bright spark, probably from a marketing department, decided to call them chiplets instead of MCMs. That's about it.
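To make the yield point concrete, here's a minimal sketch of the classic Poisson die-yield model, Y = exp(-A*D). The defect density and die areas below are illustrative assumptions, not figures for any real process:

    import math

    def die_yield(area_cm2, defect_density_per_cm2):
        # Poisson yield model: probability a die of the given
        # area contains zero killer defects.
        return math.exp(-area_cm2 * defect_density_per_cm2)

    D = 0.2  # assumed defects per cm^2 (illustrative)
    print(f"600 mm^2 monolithic die: {die_yield(6.0, D):.0%}")  # ~30%
    print(f"300 mm^2 chiplet:        {die_yield(3.0, D):.0%}")  # ~55%

Under this simple model, two good 300 mm^2 dies are no more likely than one good 600 mm^2 die, but the economics differ: bad chiplets are binned out individually before packaging, so far less good silicon is discarded per working product - and that's before the reticle limit rules out the big monolithic die altogether.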
As an aside, funnily enough, IBM used to (and may still), and quite possibly others, call various cookie-cutter blocks within a single chip (e.g., a cluster of cores and caches, a memory controller block, or a PCIe block) "chiplets". From https://www.redbooks.ibm.com/redpapers/pdfs/redp5102.pdf: "The most amount of energy can be saved when a whole POWER8 chiplet enters the winkle mode. In this mode, the entire chiplet is turned off, including the L3".