> your memory requirements go from, say, 64GB to 512GB.
Then your memory requirements always were potentially 512GB. It may just happen that even with that much allocated you only need 64GB of actual physical storage; however, there is clearly a path for your application to suddenly require 512GB of storage. Perhaps when it's under attack or facing misconfigured clients.
If your failure strategy is "just let the server fall over under pressure" then this might be fine for you.
> Then your memory requirements always were potentially 512GB. It may just happen that even with that much allocated you only need 64GB of actual physical storage; however, there is clearly a path for your application to suddenly require 512GB of storage.
If an allocator unconditionally maps in 512GB at once to minimize expensive reallocations, that doesn't inherently have any relationship to the maximum that could actually be used in the program.
Or suppose a generic library uses buffers that are ten times bigger than the maximum message supported by your application. Your program would deterministically never access 90% of the memory pages the library allocated.
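Something like the following illustrates the distinction (a minimal Linux sketch, assuming a 64-bit system where the kernel's overcommit policy allows the reservation; the sizes are illustrative): the huge mapping consumes only virtual address space, and physical memory is committed page by page as the program actually touches it.

```c
/* Sketch: reserve a huge anonymous mapping up front, touch only a slice.
 * Assumes Linux on 64-bit; the 512 GiB / 64 MiB figures are illustrative,
 * and a strict overcommit setting could make the mmap itself fail. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t reserved = 512UL << 30;  /* 512 GiB of virtual address space */
    size_t touched  = 64UL  << 20;  /* only 64 MiB ever gets written      */

    char *buf = mmap(NULL, reserved, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Only the pages written here receive physical backing; resident set
     * size tracks what is touched, not what is mapped. */
    memset(buf, 0xAB, touched);

    printf("mapped %zu GiB, touched %zu MiB\n",
           reserved >> 30, touched >> 20);
    munmap(buf, reserved);
    return 0;
}
```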
> If your failure strategy is "just let the server fall over under pressure" then this might be fine for you.
The question is, what do you intend to happen when there is memory pressure?
If you start denying allocations, even if your program is designed to deal with that, so many other programs aren't that your system is likely to crash, or worse, take a trip down rarely-exercised code paths into the land of eldritch bugs.
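To make that concrete, here is a hedged sketch of such a rarely-exercised branch: capping the address space with setrlimit forces malloc to return NULL, so the failure path actually runs for once. The 1 GiB cap and the "degraded path" response are illustrative assumptions, not anything from the thread.

```c
/* Sketch: deliberately trigger an allocation failure to exercise the
 * error-handling branch that overcommit normally keeps dormant. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    /* Cap the address space so malloc can be made to fail deterministically.
     * (1 GiB is an illustrative limit.) */
    struct rlimit lim = { .rlim_cur = 1UL << 30, .rlim_max = 1UL << 30 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* With overcommit, this branch almost never runs in production, which
     * is exactly why bugs in it go unnoticed. */
    void *p = malloc(2UL << 30);  /* 2 GiB request, exceeds the cap */
    if (p == NULL) {
        fprintf(stderr, "allocation denied; taking the degraded path\n");
        /* e.g. shed load, reject the request, or shrink caches */
        return 2;
    }

    free(p);
    return 0;
}
```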