Hacker News

smallmancontrov · last Saturday at 4:14 PM · 2 replies

Yes!!!

20 years ago, it was extremely obvious to anyone who had to write forward/backward-compatible parallelism that the-thing-nvidia-calls-SIMT was the correct approach. I thought CPU hardware manufacturers and language/compiler writers were so excessively stubborn that it would take them a decade to catch up. I was wrong. 20 years on, they still refuse to copy what works.

They search every corner of the earth for a clue, from the sulfur vents at the bottom of the ocean to the tallest mountains, all very impressive as feats of exploration -- but they are still suffering for want of a clue when clue city is right there next to them, bustling with happy, successful inhabitants, and they refuse to look at it. Look, guys, I'm glad you gave a chance to alternatives; sometimes they just need a bit of love to bloom. But you gave them that love, they didn't bloom, and it's time to move on. Do what works and spend your creative energy on a different problem, of which there are plenty.


Replies

janwas · yesterday at 11:31 AM

If SIMT is so obviously the right path, why have just about all GPU vendors and standards reinvented SIMD, calling it subgroups (Vulkan), __shfl_sync (CUDA), sub-groups (OpenCL), wave intrinsics (HLSL), and, I think, simdgroup (Metal)?

vlovich123 · last Saturday at 4:36 PM

Because SIMT is not a general programming framework the way CPUs are. It’s a technique for a dedicated accelerator aimed at a specific kind of problem. SIMD, on the other hand, lets you get a meaningful speedup inline with traditional code.
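A sketch of that "inline with traditional code" point, using GCC/Clang vector extensions as a portable stand-in for platform intrinsics like SSE's __m128 or NEON's float32x4_t (simd_add is a hypothetical name, not a library function). The vector loop sits right next to ordinary scalar code in the same function, with no kernel launch or accelerator runtime:

```cpp
#include <cstring>

// A 4-lane float vector via the GCC/Clang vector_size extension.
typedef float f32x4 __attribute__((vector_size(16)));

// Element-wise c[i] = a[i] + b[i]: four lanes per step, scalar tail after.
void simd_add(const float* a, const float* b, float* c, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        f32x4 va, vb;
        std::memcpy(&va, a + i, sizeof va);
        std::memcpy(&vb, b + i, sizeof vb);
        f32x4 vc = va + vb;              // one vector add = four scalar adds
        std::memcpy(c + i, &vc, sizeof vc);
    }
    for (; i < n; ++i)                   // leftover elements, plain scalar code
        c[i] = a[i] + b[i];
}
```

The same function body mixes vector and scalar paths freely, which is exactly what a dedicated SIMT accelerator does not give you.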
