Hacker News

renehsz · last Wednesday at 4:42 AM · 5 replies

Strongly agree with this article. It highlights really well why overcommit is so harmful.

Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.
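A minimal C sketch of that failure mode (don't run it on a machine you care about): with overcommit enabled, the allocation calls keep succeeding well past physical RAM, and the process typically gets SIGKILLed while touching the pages instead of ever reaching the error path.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (1ULL << 30)  /* 1 GiB per allocation */

    int main(void) {
        for (unsigned i = 0; ; i++) {
            char *buf = malloc(CHUNK);   /* with overcommit on, this keeps
                                            succeeding past physical RAM */
            if (buf == NULL) {           /* the graceful error path... */
                fprintf(stderr, "malloc failed after %u GiB\n", i);
                return EXIT_FAILURE;
            }
            memset(buf, 0xff, CHUNK);    /* ...but faulting the pages in is what
                                            actually consumes RAM, and the OOM
                                            killer usually ends the process here */
            printf("touched %u GiB\n", i + 1);
        }
    }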

But it feels like a lost cause these days...

So much software breaks once you turn off overcommit, even in situations where you're nowhere close to running out of physical memory.

What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory. Large virtual memory buffers that aren't fully committed can be very useful in certain situations. But it should be something a program has to ask for, not the default behavior.


Replies

charcircuit · yesterday at 4:02 AM

>terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software

Assuming that your process will never crash is not safe. There will always be freak events like a CPU taking the wrong branch or bits randomly flipping. Part of designing a robust system is being tolerant of things like this.

Another point mentioned in this thread is that by the time you run out of memory, the system is already going to be in a bad state, and you probably don't have enough memory left to even get out of it. Memory should have been freed earlier, either by telling programs to lighten up on their memory usage or by killing them and reclaiming the resources.

PunchyHamster · yesterday at 2:51 AM

It's not harmful. It's necessary for modern systems that are not "an ECU in a car".

> Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.

Big software is not written that way. In fact, writing software that way means you have to sacrifice performance, memory usage, or both, because you either:

* allocate exactly what you need at any moment and free it as soon as the need shrinks (if you want to keep the memory footprint similar), which adds latency, or
* over-allocate and waste RAM.

And you'd end up with MORE memory-related issues, not less. Writing an app where every allocation can fail is just a nightmarish waste of time for the 99% of apps that are not "the onboard computer of a spaceship/plane".

201984 · last Friday at 8:50 PM

> What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory.

mmap with PROT_NONE is such a reservation and doesn't count towards the commit limit. A later mmap with MAP_FIXED and PROT_READ | PROT_WRITE can commit parts of the reserved region, and mmap calls with PROT_NONE and MAP_FIXED will decommit.
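A minimal C sketch of that reserve/commit/decommit dance (the sizes are arbitrary and error handling is trimmed to the essentials):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define RESERVE (1ULL << 32)  /* reserve 4 GiB of address space */
    #define COMMIT  (1ULL << 20)  /* commit the first 1 MiB of it */

    int main(void) {
        /* Reserve: a PROT_NONE mapping hands out address space only and
           is not charged against the commit limit. */
        char *base = mmap(NULL, RESERVE, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) { perror("reserve"); return EXIT_FAILURE; }

        /* Commit: remap part of the reservation read/write with MAP_FIXED;
           this range now counts toward the commit limit and is usable. */
        if (mmap(base, COMMIT, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
            perror("commit"); return EXIT_FAILURE;
        }
        base[0] = 42;

        /* Decommit: map the range back to PROT_NONE, discarding its pages
           and releasing the commit charge while keeping the address space. */
        if (mmap(base, COMMIT, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
            perror("decommit"); return EXIT_FAILURE;
        }

        munmap(base, RESERVE);
        return EXIT_SUCCESS;
    }

With strict accounting (vm.overcommit_memory=2) you should see Committed_AS in /proc/meminfo move only at the commit and decommit steps, not at the reservation.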

barchar · yesterday at 2:49 AM

Even setting aside the aforementioned fork problems, not having overcommit doesn't mean you can handle OOM correctly just by handling errors from malloc!

hparadiz · last Friday at 8:36 PM

That's a normal failure state that happens occasionally. Out-of-memory errors come up all the time when writing robust async job queues. There are a lot of other reasons a failure could happen; running out of memory is just one of them. Sure, I can force the system to use swap, but that would degrade performance for everything else, so it's better to let it die, log the result, and check your dead-letter queue afterwards.