That's the traditional textbook yield-model logic, if I'm not mistaken? Smaller die area = higher probability of a surviving die on a dirty wafer. But I wonder if the sheer margin on AI silicon basically breaks that rule? If Nvidia can sell a reticle-sized package for 25k-30k USD, they might be perfectly happy paying for a wafer that only yields 30-40% good dies.
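A rough sketch of that with the textbook Poisson yield model, Y = exp(-D*A). Every number below (defect density, wafer cost, die areas, ASPs, the dies-per-wafer math) is a made-up illustration, not an actual TSMC or Nvidia figure:

    # Poisson yield model sketch -- all inputs are assumed, illustrative numbers.
    import math

    def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
        """Fraction of dies with zero defects: Y = exp(-D * A)."""
        return math.exp(-defects_per_cm2 * die_area_cm2)

    WAFER_COST = 20_000      # assumed cost of a leading-edge 300 mm wafer, USD
    DEFECT_DENSITY = 0.15    # assumed defects per cm^2

    # Reticle-limited AI die vs. a small mobile SoC (areas and ASPs are rough guesses).
    for name, area_cm2, asp in [("AI accelerator", 8.0, 27_500),
                                ("mobile SoC",     1.0,    120)]:
        y = poisson_yield(DEFECT_DENSITY, area_cm2)
        dies = int(70_000 / (area_cm2 * 100))   # ~70,000 mm^2 per 300 mm wafer, ignoring edge loss
        good = dies * y
        print(f"{name}: yield {y:.0%}, ~{good:.0f} good dies, "
              f"revenue/wafer ~${good * asp:,.0f} vs wafer cost ${WAFER_COST:,}")

With those made-up inputs the big die lands right around 30% yield, and the ~26 good dies at that ASP still pay for the wafer many times over, which is the point.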
Apple OTOH operates at consumer electronics price points. They need mature yields (>90%) to make the unit economics of an iPhone work. There's also the binning factor I'm curious about: Nvidia can disable 10% of the cores on a defective GPU and sell it as a lower SKU. Does Apple have the same flexibility with a mobile SoC, where the thermal and power envelope is so tightly coupled to the battery size?
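Flipping the same made-up numbers around to the cost side for the consumer case (again, all assumed values):

    # Cost per *good* die for a ~100 mm^2 mobile SoC at different yields.
    # Wafer cost and dies-per-wafer are the same assumed numbers as above.
    WAFER_COST = 20_000
    SOC_DIES_PER_WAFER = 700
    for y in (0.90, 0.60, 0.35):
        cost_per_good_die = WAFER_COST / (SOC_DIES_PER_WAFER * y)
        print(f"yield {y:.0%}: ~${cost_per_good_die:.0f} of wafer cost per good SoC")

Roughly $32 per good die at 90% vs $82 at 35% -- a swing that matters a lot against a consumer SoC's bill of materials, and not at all against a $25k+ accelerator.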
I thought they binned chips for things like the Apple TV and lower-cost iPads?
With current AI pricing for silicon, I think the math’s gone out the window.
As for Apple, they have binning flexibility with Pro/Max/Ultra variants, all the way down to iPads - and that's after node yields have already been improved via the gazillion iPhone SoC dies.
NVIDIA's flexibility came from using some of those binned dies for GeForce cards, but the VRAM situation is clearly making that less important, as they're cutting some of those SKUs for being too VRAM-heavy relative to MSRP.
I'm curious about the binning factor too, since AMD and Intel have both made use of defect binning in the past to sell partially defective chips with cores disabled. Perhaps Apple can do the same with its SoCs? It's unlikely to be as granular as Nvidia, which can fuse off much smaller slices of silicon per disabled unit. On the other hand, the specifics of the layout, how much of the die the individual cores actually cover, and the spread of defects across the die might erode that advantage.
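To put a toy number on that last point, here's a quick Monte Carlo where defects are scattered across a die and a defective part only survives if every defect lands in a block that can be fused off. The block counts, redundant-area fractions, defect rates, and the "how many blocks may be disabled" rule are all invented for illustration; real binning (parametric bins, SRAM repair, etc.) is far messier:

    # Toy Monte Carlo: how much does binning granularity help salvage defective dies?
    # All parameters below are assumptions for illustration only.
    import math
    import random

    def poisson(lam: float) -> int:
        """Knuth's method for drawing a Poisson-distributed defect count."""
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1

    def salvage_rate(mean_defects: float, redundant_frac: float,
                     num_blocks: int, max_disabled: int,
                     trials: int = 50_000) -> float:
        """Share of *defective* dies still sellable when up to `max_disabled`
        of `num_blocks` repeated blocks can be fused off."""
        salvaged = defective = 0
        for _ in range(trials):
            k = poisson(mean_defects)
            if k == 0:
                continue                      # clean die, not interesting here
            defective += 1
            hit, fatal = set(), False
            for _ in range(k):
                if random.random() > redundant_frac:
                    fatal = True              # defect hit non-redundant logic (shared fabric, I/O, etc.)
                    break
                hit.add(random.randrange(num_blocks))
            if not fatal and len(hit) <= max_disabled:
                salvaged += 1
        return salvaged / defective if defective else 0.0

    random.seed(0)
    # Big GPU: mostly a sea of identical SMs, and many of them can be disabled.
    print(f"GPU-like die: {salvage_rate(1.2, 0.90, 132, 12):.0%} of defective dies salvageable")
    # Mobile SoC: fewer, larger cores, and only a couple can be dropped for a lower bin.
    print(f"SoC-like die: {salvage_rate(0.15, 0.50, 14, 2):.0%} of defective dies salvageable")

With those guesses the GPU-style layout salvages the large majority of its defective dies while the SoC-style one salvages roughly half, but shuffle the assumptions (how much of the die is actually redundant, how defects cluster) and the gap moves a lot, which is basically your caveat.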