The thing I would really like to figure out is how to prevent a Linux system from essentially livelocking when it comes close to running out of memory. We've all seen it. Try to ssh in, connections get established but do not proceed. If you're lucky enough to have a console shell open from before, it shows a gigantic load average. I wish there were a way to put a few system-critical processes into a container to guarantee them some resources.
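FWIW, cgroup v2 can already express most of that wish: memory.min gives a cgroup a guaranteed floor that reclaim won't dip below. Here's a rough sketch of doing it by hand (assuming cgroup v2 is mounted at /sys/fs/cgroup with the memory controller enabled at the root, run as root; the "critical" name, the 256 MiB floor, and the PID are placeholders):

```c
/* Hypothetical sketch: carve out a "critical" cgroup, give it a
 * guaranteed memory floor via memory.min, and move one process into it.
 * Paths and values are illustrative, not a drop-in recipe. */
#include <stdio.h>
#include <sys/stat.h>

static int write_file(const char *path, const char *val) {
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    fprintf(f, "%s", val);
    return fclose(f);
}

int main(void) {
    /* Create the cgroup (ignore failure if it already exists). */
    mkdir("/sys/fs/cgroup/critical", 0755);

    /* Protect ~256 MiB from reclaim for members of this cgroup. */
    write_file("/sys/fs/cgroup/critical/memory.min", "268435456");

    /* Move a process (e.g. sshd's PID, here a made-up 1234) into it. */
    write_file("/sys/fs/cgroup/critical/cgroup.procs", "1234");
    return 0;
}
```

On a systemd machine the same knob is exposed as MemoryMin= on the service unit, which is probably the less fragile way to set it.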

I've heard this problem is caused by Linux's overcommit strategy. Basically, an initial memory allocation almost never fails (unless you set special flags or sysctls), but no physical memory is actually allocated on the spot. Pages only get backed by real memory when they are first touched. And if Linux is out of memory when a program touches a piece of not-yet-backed memory, it will try really _really_ hard to reclaim memory so that the access can succeed, thrashing the page cache and swap in the process.
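A tiny sketch of what that looks like from userspace (assuming the default vm.overcommit_memory=0 heuristic; the 8 GiB figure is arbitrary). Watch RSS in top while it runs: it barely moves at malloc time and only grows during the touch loop.

```c
/* Minimal demo of lazy allocation under overcommit. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t sz = (size_t)8 << 30;   /* ask for 8 GiB of address space */
    char *p = malloc(sz);          /* usually returns instantly; nothing is committed yet */
    if (!p) { perror("malloc"); return 1; }
    puts("malloc succeeded -- resident memory is still tiny");

    /* Only now, as each page is written for the first time, does the
     * kernel have to find physical pages. When it can't, it starts
     * reclaiming (dropping caches, swapping) -- that's the thrashing. */
    for (size_t i = 0; i < sz; i += 4096)
        p[i] = 1;

    puts("all pages touched");
    free(p);
    return 0;
}
```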

That's what causes the lock-ups.

Sounds to me like this would be difficult to fix without breaking backward compatibility.

In the meantime, you can probably improve your quality of life quite a bit by using something like: https://github.com/facebookincubator/oomd