
turquoisevar · yesterday at 12:25 AM

Come on, there’s no way you wrote that down unironically and didn’t struggle breathing through the strong chemical copium smells.

> goes into barring the competition from accessing the current nodes at TSMC

I know it’s en vogue to hate on Apple and make them out to be this big evil corporation, but you’re making it sound as if they’ve been jerking off while sitting on TSMC’s capacity just to fuck with the competition and purely to make it impossible to compete, when in reality they’ve continued to make exponential improvements on their silicon platform.

> making Apple look good on benchmarks for 12-18 months or so

What are you on about? They’ve essentially been in a league of their own since the M1, especially if you take into consideration the power envelope and how performance is available with just passive cooling.

There isn’t really anything like it.

Even the salty argument of Apple hogging TSMC nodes just crumbles apart if you give it more than a second of thought.

For starters, yes, sure, Apple is great at managing its logistics and supply chain. That’s exactly what impressed Jobs when Cook ran operations, and it proved so essential to Apple’s success that Jobs hand-picked Cook as his successor. I don’t see how that is a useful argument against Apple, moral or otherwise.

Nothing is stopping competitors from optimizing their process to the point where they can call TSMC and offer to buy their capacity for the next year or two. To say nothing of the efforts made outside of TSMC like Samsung GAAFET 3nm and MBCFET 2nm process and whatever Intel is dicking around with on their 2nm process.

More importantly though, it’s silly to make it seem as if that’s the only reason for the fruits of Apple’s labor.

Take AMD’s HX 370 for example, released last year, courtesy of TSMC’s N4P process. It still struggled to provide a PPA similar to the M1 Pro, which wasn’t only 3 years old at the time; it was also a product of TSMC’s older N5 process.

Clearly having access to newer TSMC nodes isn’t a guaranteed win.

> and couldn't care less about performance

You’ve got it mixed up. Apple has never cared about raw specs, but they always have and always will care about performance.

If you’re inclined to read their every move through the big bad filter then you might say they never cared about raw performance because they’ve always been able to get more out of less and this way they could charge high spec prices without the high spec cost (and without, historically, advertising specs), and it clearly worked out for them.

Their stuff sells as if it were being given away for free. In doing so they’ve proven that the average user couldn’t give two fucks about bigger numbers as long as it works well, while their competitors have to pack their phones and other devices with higher specs and cooling solutions like vapor chambers (something Apple has managed to avoid so far) to keep up.

In a way they’ve always had to care more about performance than their competitors because they’ve mostly worked with hardware that’s “lesser” on paper to maximize their margins.

> to offer, a sense of novelty, excitement, taste

I don’t know about you, but single-handedly making x86_64 look like an ancient joke with something that would’ve been considered a silly mobile processor 10 years ago is quite novel and exciting. If nothing else it lit a fire under Intel, even if they seem to have decided to let themselves be turned into a well-done steak.

This was essentially what Intel had in mind with their Atom series for netbooks back in the day, but they never managed to crack the code.

I remember being amazed when I received my developer transition kit, running macOS on an A12Z like it was nothing.

Even now, if I want to be more comfortable and do some coding or video editing work on the couch, I can use my off-the-shelf base model M3 MacBook Air to do most of what I can on my M1 Max; that’s quite the leap in performance in such a short time.

There’s no accounting for taste, of course, and what I like might not be to your liking. There is plenty about Apple that deserves legitimate criticism, so I don’t understand the need to make something out of nothing in this instance.


Replies

ezst · yesterday at 2:03 PM

> Come on, there’s no way you wrote that down unironically and didn’t struggle breathing through the strong chemical copium smells.

What a way to out yourself as some kind of irrational zealot.

>> goes into barring the competition from accessing the current nodes at TSMC

> I know it’s en vogue to hate on Apple […] when in reality they’ve continued to make exponential improvements on their silicon platform.

Have they? M3 to M4 is roughly 20% more performance for a 10% higher TDP.
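For what it’s worth, those two figures can be combined into a rough perf-per-watt estimate. A minimal sketch, taking the 20%/10% numbers above at face value (they’re the commenter’s figures, not independently verified):

```python
def perf_per_watt_gain(perf_gain: float, power_gain: float) -> float:
    """Relative perf/watt change, given relative perf and power changes.

    E.g. +20% perf at +10% power: (1.20 / 1.10) - 1, i.e. about +9%.
    """
    return (1 + perf_gain) / (1 + power_gain) - 1

# Figures quoted in the comment above (assumed, not measured here).
gain = perf_per_watt_gain(0.20, 0.10)
print(f"perf/watt improvement: {gain:.1%}")
```

In other words, under those numbers the generational efficiency gain is single-digit, not exponential.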

> What are you on about? They’ve essentially been in a league of their own since the M1

Are they?

> Take AMD’s HX 370 for example,

Indeed, AMD is *very* close performance-wise, while sitting on TSMC's 4nm, versus the new M4 Pro's second-gen 3nm.

https://www.cpubenchmark.net/compare/6143vs6346/AMD-Ryzen-AI...

> It still struggled to provide a PPA similar to the M1 Pro

…and it's not far off when talking energy efficiency either, again with a whole generation of difference

https://www.notebookcheck.net/AMD-Zen-5-Strix-Point-CPU-anal...

so, within single-digit percent.

I'm not taking away from Apple's push towards ARM, that was ballsy, and well executed (also, they had little choice but to ditch Intel, and with AMD not being an option it's pretty obvious in retrospect). That said, I'm tired of the rhetoric and attitude that somehow Apple's chips are made of angel dust or something, especially on this "tech"/"science" forum.

> You’ve got it mixed up. Apple has never cared about raw specs, but they always have and always will care about performance.

Apple a decade and a half ago was selling you "unique" products or clever features. Today's Apple announcements are Tim showing you benchmarks.

> I don’t know about you but single-handedly making x86_64 look like an ancient joke

No, they haven't. They did put intel to shame, but so did AMD, and that came as a surprise to nobody.