Hacker News

lysace | yesterday at 8:45 PM

I mean, I know that much. The numbers still don't make sense to me. How is my internal model this wrong?

For one, if this was about inference, wouldn't the bottleneck be the GPU computation part?


Replies

ssl-3 | yesterday at 9:08 PM

Concurrency?

Suppose some parallelized, distributed task requires 700GB of memory per node (I don't know whether it does or not), and that speed is a concern.

A single 700GB pile of memory is insufficient not because it lacks capacity, but because it lacks scalability: that pile is only enough for one node.

If more nodes were added to increase speed but they all shared that same single 700GB pile, then RAM bandwidth (and latency) would get in the way.
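A toy back-of-envelope sketch of that scaling argument, using made-up numbers (the 3000 GB/s pool bandwidth and the node counts are assumptions for illustration, not figures from the thread): each node with its own memory pool keeps full bandwidth, while nodes contending for one shared pool split it.

```python
# Assumed bandwidth of a single memory pool, in GB/s (illustrative only).
PER_POOL_BANDWIDTH_GBPS = 3000.0

def per_node_bandwidth(nodes: int, shared: bool) -> float:
    """Effective memory bandwidth each node sees, in GB/s.

    shared=True models all nodes contending for one pool;
    shared=False models one dedicated pool per node.
    """
    if shared:
        # One pool's bandwidth is divided among all nodes using it.
        return PER_POOL_BANDWIDTH_GBPS / nodes
    # Each node has its own pool, so per-node bandwidth stays flat.
    return PER_POOL_BANDWIDTH_GBPS

for n in (1, 4, 16):
    print(f"{n:>2} nodes: shared={per_node_bandwidth(n, True):7.1f} GB/s, "
          f"dedicated={per_node_bandwidth(n, False):7.1f} GB/s")
```

Capacity-wise the shared pool looks fine, but per-node bandwidth collapses as nodes are added, which is the point being made above.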

Chiron1991 | yesterday at 9:57 PM

This "memory shortage" is not about AI companies needing main memory (the kind you plug into motherboards); it's that manufacturers are shifting production capacity toward the types of memory that go onto GPUs. That reduces the supply of other memory products, driving up their market prices.