Workloads emerge to fill higher capacity, not the other way around. Everything from lossless media to virtual reality applications scales better with more available bandwidth.
An average AAA game is 100-200 GB today. That is not an accident. On the best residential connection, a dedicated 1 Gbps line, that is still roughly a 30-minute download; for the average buyer it is easily a few hours.
A 2 TB game today would be roughly a 5-hour download on a 1 Gbps connection, and days for the median buyer. Game developers cannot even consider a 2 TB game unless storage capacity, I/O performance, and bandwidth all support it.
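A rough back-of-the-envelope sketch of those download times, assuming raw line throughput with no protocol overhead or CDN throttling; the ~200 Mbps figure used for the median buyer is my own assumption for illustration:

```python
def download_hours(size_gb: float, speed_gbps: float) -> float:
    """Hours to download size_gb gigabytes at speed_gbps gigabits per second."""
    gigabits = size_gb * 8          # convert gigabytes to gigabits
    seconds = gigabits / speed_gbps # ideal transfer time, no overhead
    return seconds / 3600

# 200 GB AAA game
print(f"200 GB @ 1 Gbps:   {download_hours(200, 1.0):.1f} h")   # ~0.4 h (~27 min)
print(f"200 GB @ 0.2 Gbps: {download_hours(200, 0.2):.1f} h")   # ~2.2 h on a ~200 Mbps line

# hypothetical 2 TB game
print(f"2 TB @ 1 Gbps:     {download_hours(2000, 1.0):.1f} h")  # ~4.4 h
print(f"2 TB @ 0.2 Gbps:   {download_hours(2000, 0.2):.1f} h")  # ~22 h, roughly a day
```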
Hypothetically, if I could ship a 200 TB game, I would probably pre-render most of the graphics at much higher resolutions and frame rates rather than compute them poorly on the GPU on the fly.
More fundamentally, applications would lean towards less compute on the client and more of a pre-computed-assets approach. A good example from the tech world in the last decade is how we switched from distributing source files or built packages to shipping Docker/container layers. Typical Docker images in the corporate world exceed 1 GB, while the source files actually being shipped probably account for less than 10 MB of that. We are trading size for better control; shipping pre-built packages instead of source was the same trade-off in the 90s.
You optimize for whichever resource is more scarce. Single-threaded, and even multi-threaded, compute growth has been slowing down. Consumer internet bandwidth faces no comparable physical limit the way processors do, so it is not a bad idea to optimize for delivering pre-computed assets rather than relying on client-side compute.
And even at 1 Gbps, when I had it, the game servers couldn't keep up.