Hacker News

goku12 · yesterday at 9:42 AM · 3 replies

> I made this very comment here on Hacker News probably 4-5 years ago and received a few down votes for it at the time

HN isn't always rational about voting. It would be a loss to judge any idea on that basis.

> It would take a lot of work to make a GPU do current CPU type tasks

In my opinion, that would be counterproductive. The advantage of GPUs is that they have a large number of very simple cores. Instead, just put a few separate CPU cores on the same die, or on a separate die. Or you could even have a forest of GPU cores with a few CPU cores interspersed among them - sort of like how modern FPGAs have logic tiles, memory tiles and CPU tiles spread across the fabric. I doubt it would be called a GPU at that point.


Replies

zozbot234 · yesterday at 10:33 AM

GPU compute units are not that simple. The main difference from CPUs is that they generally use a combination of wide SIMD and wide SMT to hide latency, as opposed to the power-intensive out-of-order processing used by CPUs. Performing tasks that can't take advantage of either SIMD or SMT on GPU compute units might be a bit wasteful.
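The waste can be illustrated with a toy model: when lanes of a SIMD group ("warp") disagree at a branch, the hardware typically executes both sides with inactive lanes masked off, so branchy scalar-style code pays for paths it doesn't use. A minimal Python sketch of that divergence cost - the warp size, cycle counts, and function name are made-up assumptions for illustration, not real hardware numbers:

```python
WARP_SIZE = 32  # assumed lane count, typical of NVIDIA warps

def warp_branch_cost(predicates, then_cost, else_cost):
    """Toy model: cycles a warp spends on an if/else.

    If any lane takes a side, the whole warp steps through that side
    (inactive lanes masked), so divergent warps pay the SUM of both
    paths rather than just the path each lane needs.
    """
    cost = 0
    if any(predicates):          # some lane takes the 'then' side
        cost += then_cost
    if not all(predicates):      # some lane takes the 'else' side
        cost += else_cost
    return cost

# Uniform branch: every lane agrees -> pay for one side only.
uniform = [True] * WARP_SIZE
# Divergent branch: lanes disagree -> pay for both sides.
divergent = [i % 2 == 0 for i in range(WARP_SIZE)]

print(warp_branch_cost(uniform, 10, 10))    # 10
print(warp_branch_cost(divergent, 10, 10))  # 20
```

Under this model, irregular control flow that a CPU's branch predictor and out-of-order engine would absorb cheaply instead halves the effective throughput of the warp.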

Also, you'd need to add extra hardware for various OS support functions (privilege levels, address space translation/MMU) that are currently missing from GPUs. But the idea is otherwise sound; you can think of the proposed 'Mill' CPU architecture as one variety of it.

Den_VR · yesterday at 9:53 AM

As I recall, Gartner made the outrageous claim that upwards of 70% of all computing will be "AI" within some number of years - nearly the end of CPU workloads.

k4rnaj1k · yesterday at 10:10 AM

[dead]