But memory bandwidth (the bottleneck for LLM inference) is only marginally improved: 614 GB/s vs 546 GB/s on the M4/M5 Max. Where is this 4x improvement coming from?
I think I'll pass on upgrading.
It's prompt processing, i.e. prefill; that's compute-bound, not memory-bound.
The 4x is Time To First Token; it's on the graph.
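A back-of-envelope roofline sketch of the compute-bound vs memory-bound distinction above (all figures are illustrative assumptions, e.g. a 7B model in 8-bit and hypothetical TFLOPS numbers, not Apple's specs):

```python
# Rough roofline model: decode speed tracks memory bandwidth
# (weights are re-read for every generated token), while prefill
# (Time To First Token) tracks compute (prompt tokens are batched).
# All numbers below are illustrative assumptions, not measured specs.

params = 7e9            # assumed 7B-parameter model
bytes_per_param = 1     # assumed 8-bit quantized weights
weights_bytes = params * bytes_per_param

def decode_tok_per_s(bandwidth_gb_s):
    # Each generated token streams all weights once: memory-bound.
    return bandwidth_gb_s * 1e9 / weights_bytes

def prefill_ttft_s(prompt_tokens, tflops):
    # Prefill does ~2 FLOPs per parameter per prompt token: compute-bound.
    flops = 2 * params * prompt_tokens
    return flops / (tflops * 1e12)

print(f"decode @546 GB/s: {decode_tok_per_s(546):.0f} tok/s")
print(f"decode @614 GB/s: {decode_tok_per_s(614):.0f} tok/s")  # ~1.12x
print(f"TTFT, 4k prompt @ 30 TFLOPS: {prefill_ttft_s(4096, 30):.2f} s")
print(f"TTFT, 4k prompt @120 TFLOPS: {prefill_ttft_s(4096, 120):.2f} s")  # 4x faster
```

So a 4x jump in matmul throughput can cut TTFT by ~4x even while the ~12% bandwidth bump only nudges generation speed.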