My experience with out-of-memory conditions is that in every language and environment I have worked in, once an application hits that condition, there is very little hope of continuing reliably.
So if your aim is to build a reliable system, it is much easier to plan to never get there in the first place.
Alternatively, make your application restart after OOM.
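If you go the restart route, the supervisor can do that for you. As a sketch (unit and binary names are invented for illustration), a systemd service can be told to come back after any abnormal exit, which includes being SIGKILLed by the OOM killer:

```ini
# /etc/systemd/system/myapp.service  (hypothetical unit)
[Service]
ExecStart=/usr/local/bin/myapp
# Restart after any non-clean exit; a SIGKILL from the kernel's
# OOM killer counts as a failure, so the service comes back.
Restart=on-failure
RestartSec=2
```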
I would actually prefer the application to just stop immediately without unwinding anything. It makes it much clearer what possible states the application could have gotten itself into.
Hopefully you have already designed the application to move atomically between known states and have mechanisms to handle any operation getting interrupted.
If you did it right, handling OOM by having the application drop dead should be viewed as just exercising these mechanisms.
Build perl with -DUSEMYMALLOC and -DPERL_EMERGENCY_SBRK, and you can preallocate an emergency buffer with $^M = "0" x 65536. You can then trap the out-of-memory condition with the language's normal facilities and handle it appropriately (mostly by letting the big data get deallocated, or by exiting), and continue on just like normal. It's a weird setup, and I don't think I've run into any other language with that built in.
Useful, but on Linux it's highly likely that by the time you're comparing $@ to ENOMEM the OOM killer has already awoken and is heading your way.
The interesting thing is that the OOM killer doesn't always go for the program that triggered the OOM. It may decide to kill another memory-hungry process (cough, your database) on the machine unless you have explicitly tweaked its scoring.
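The knob for that tweaking on Linux is the per-process OOM score adjustment, exposed as /proc/&lt;pid&gt;/oom_score_adj. As a hedged example (the unit name is invented), a systemd service can bias the OOM killer away from itself:

```ini
# /etc/systemd/system/mydb.service  (hypothetical unit)
[Service]
# Maps to /proc/<pid>/oom_score_adj. Range is -1000 (never kill)
# to +1000 (kill first); a strongly negative value pushes the OOM
# killer toward other victims.
OOMScoreAdjust=-900
```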
If a program allocates all available memory, and systemd then hits OOM on a 1kB allocation, do you think we should kill systemd?
On the other hand, if you are likely to be identified as the culprit, I think the best you can hope for is getting some cleanup or reporting done before you're kill -9'd.