
felixfurtak | 11/04/2025 | 0 replies

GPUs are massively parallel, sure, but they still have a terrible memory architecture and are difficult to program (and are still massively memory constrained). It was only Nvidia's development of CUDA that made it feasible to train decent ML models on GPUs at all.
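
For context, here's a minimal CUDA sketch (my own illustration, not from the comment) of what "difficult to program" and "separate, constrained memory" mean in practice: even a trivial element-wise add needs explicit device allocations, host-to-device copies, a hand-chosen launch configuration, and a copy back.

    #include <cuda_runtime.h>
    #include <cstdio>
    #include <cstdlib>

    // Each thread handles one element of the output.
    __global__ void vecAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host buffers.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device buffers: GPU memory is a separate, limited address space,
        // so everything must be allocated and copied over explicitly.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

        // Results live in device memory until copied back to the host.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f\n", h_c[0]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }

Before CUDA, you'd have had to express this kind of computation through graphics APIs (shaders, textures), which is a large part of why general-purpose GPU compute wasn't practical for ML.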