Could in-memory compression be used to bring down the RAM requirements?

There are some high-performance compressors, like Blosc, tuned for this:

https://www.blosc.org/pages/blosc-in-depth/

“Faster than memcpy” is the slogan.

macOS has transparent memory compression. It's unclear to me whether that's made its way to the iPhone, but if it hasn't yet, it will sooner or later.

Memory compression? I can't find any good resources to read about it, any hints? I'm having trouble imagining how it could possibly work without totally destroying performance.

It doesn't destroy performance for the simple reason that, nowadays, memory access is slower than pure compute. If you need to spend some compute to produce the data you store in memory, your overall throughput can very well be higher than without compression.
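As a rough sketch of the idea, here's an in-memory compress/decompress round trip using Python's stdlib zlib at its fastest level. This is only a stand-in: real memory compressors use much faster lz4-class codecs, and the data and sizes here are made up for illustration.

```python
import zlib

# A block of fairly redundant data, as in-memory pages often are.
data = b"some repetitive in-memory page contents " * 1000

# Level 1 trades ratio for speed -- the same tradeoff dedicated
# memory compressors make (lz4-class codecs are far faster still).
compressed = zlib.compress(data, 1)

print(f"original:   {len(data)} bytes")
print(f"compressed: {len(compressed)} bytes")

# The round trip is lossless.
assert zlib.decompress(compressed) == data
```

Redundant pages shrink by an order of magnitude or more, so the RAM saved can outweigh the compute spent.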

There has been a great deal of innovation in fast compression and decompression in recent years. Traditional compression tools like gzip or xz are geared toward high compression ratios, but memory compression tends to favor speed. Check out these algorithms:

* lz4: https://lz4.github.io/lz4/

* Google's snappy: https://github.com/google/snappy

* Facebook's zstd in fast mode: http://facebook.github.io/zstd/#benchmarks
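The speed-vs-ratio knob is easy to see even with stdlib zlib standing in for the codecs above (lz4, snappy, and zstd expose analogous level settings); the payload and timings here are illustrative only:

```python
import time
import zlib

# A synthetic, somewhat redundant payload.
data = b"profile payload with some repeated structure " * 20000

for level in (1, 9):  # fastest mode vs. best ratio
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(out)} bytes in {dt * 1000:.1f} ms")
```

Level 9 squeezes out a smaller output but takes noticeably longer; memory compressors live at the "level 1" end of this tradeoff, where throughput matters more than every last byte.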