Hacker News

bitwize | 10/12/2024 | 7 replies

> Apparently we now think 64 cores is ‘lower core count’. What a world we live in.

64 cores is a high-end gaming rig. Civilization VII won't run smoothly on fewer than 16.


Replies

zamadatix | 10/12/2024

Civ 6 really doesn't utilize cores as much as one would think. It'll spread the load across a lot of threads, sure, but it never seems to actually use them much. E.g. I just ran the Gathering Storm expansion AI benchmark (late-game map completely full of civs and units - basically the worst case for CPU requirements and the best case for eating up multicore performance) on a 7950X 16-core CPU, and it rarely peaked over 30% utilization, often averaging ~25%. 30% of 16 cores is ~4.8 cores' worth of work, so a 6-core part (barring frequency/cache differences) should be able to eat that at ~80% load.

https://i.imgur.com/YlJFu4s.png

Whether the bottleneck is memory bandwidth (2x 6000 MHz), unoptimized locking, small batch sizes, or something else, it doesn't seem to be core count. It's also not waiting on the GPU much here - the 4090 is seeing even less utilization than the CPU. Hopefully Civ 7's utilization actually scales better, rather than just splitting the load across more threads.
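
For anyone who wants to reproduce this kind of measurement without eyeballing Task Manager, here's a minimal sketch (Python, assuming psutil is installed; the 60-second window and once-per-second sampling are arbitrary choices of mine, not anything the game exposes) that logs per-core utilization while the benchmark runs:

    # log per-core CPU utilization once per second while the benchmark runs
    import psutil

    per_second_avgs = []
    for _ in range(60):  # ~60 seconds of samples
        # percpu=True returns one utilization figure per logical core
        per_core = psutil.cpu_percent(interval=1, percpu=True)
        avg = sum(per_core) / len(per_core)
        per_second_avgs.append(avg)
        print(f"avg {avg:5.1f}%  busiest core {max(per_core):5.1f}%")

    print(f"mean utilization over run: {sum(per_second_avgs) / len(per_second_avgs):.1f}%")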

gkhartman | 10/12/2024

I can't help but think that this sounds more like a failure to optimize at the software level rather than a reasonable hardware limitation.

csomar | 10/12/2024

If Civ 6 is any guide, 64 or 32 cores won't make the slightest difference. The next-turn calculations seem to run on a single core, so having more cores isn't going to change a thing. This is a software problem; they need to distribute the calculation across several cores.
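
To illustrate what "distribute the calculation" could look like in principle - a toy sketch only, assuming the per-unit work were independent; evaluate_unit_move and the unit count are made up, and this is not how Firaxis's engine actually works:

    # toy sketch: fan hypothetical per-unit AI evaluations out to all cores
    from concurrent.futures import ProcessPoolExecutor

    def evaluate_unit_move(unit_id: int) -> tuple[int, int]:
        # stand-in for an expensive, independent per-unit scoring pass
        score = sum((unit_id * i) % 7 for i in range(100_000))
        return unit_id, score

    if __name__ == "__main__":
        units = list(range(200))  # hypothetical late-game unit count
        with ProcessPoolExecutor() as pool:  # defaults to one worker per core
            results = list(pool.map(evaluate_unit_move, units))
        print(results[:3])

The catch in a real engine is that moves aren't independent - they mutate shared game state - which is exactly where the locking/batching problems zamadatix describes come from.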

noncoml | 10/12/2024

Civilization VII won't run smoothly.

Only recently did I manage to build a PC that runs Civ 6 smoothly during the late game on a huge map.

snvzz | 10/12/2024

Civ 6's slowness is purely bad programming. No excuses to be had.

treesciencebot | 10/12/2024

All high-end "gaming" rigs these days use either ~16 real cores or 8 performance + 16 efficiency cores. Threadripper and other HEDT options are not particularly good at gaming due to (relatively) lower clock speeds and inter-CCD latencies.
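
If you do run a game on a dual-CCD or hybrid part, one common workaround is pinning the process to a single CCD (or to the P-cores) so threads never pay the cross-CCD hop. A Linux-only sketch - the assumption that cores 0-7 form one CCD (or the P-cores) varies by system, so check lscpu first:

    # pin the current process to cores 0-7 (one CCD on many dual-CCD Ryzens;
    # core numbering is system-specific, verify with lscpu before relying on it)
    import os

    os.sched_setaffinity(0, set(range(8)))  # pid 0 = this process
    print("now restricted to cores:", sorted(os.sched_getaffinity(0)))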

fulafel | 10/12/2024

As the GPGPU scene's trajectory seems dismal[1] for the foreseeable future with respect to developer experience, this seems like the best hope.

[1] Fragmentation, at best C++ dialects, no practical compiler tech for transparent GPU offload, etc.