There has been a lot of innovation in fast compression and fast decompression in recent years. Traditional compression tools like gzip or xz are geared toward higher compression ratios, but memory compression tends to favor speed. Check out these algorithms:
* lz4: https://lz4.github.io/lz4/
* Google's snappy: https://github.com/google/snappy
* Facebook's zstd in fast mode: http://facebook.github.io/zstd/#benchmarks
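None of these libraries ship with Python's standard library, but you can get a rough feel for the speed-vs-ratio tradeoff using stdlib zlib: level 1 plays the role of a "fast mode" and level 9 the role of a ratio-oriented tool (the sample data here is made up; lz4/snappy would be much faster still):

```python
import time
import zlib

# Hypothetical workload: highly repetitive data, like memory pages or logs.
data = b"some repetitive log line about memory pages\n" * 20000

for level in (1, 9):  # level 1 ~ fast mode, level 9 ~ max compression
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: {ratio:.1f}x ratio in {elapsed * 1000:.2f} ms")

# Either way, decompression round-trips losslessly.
assert zlib.decompress(compressed) == data
```

On most inputs level 1 is several times faster than level 9 while giving up some ratio, which is exactly the tradeoff the fast codecs above push much further.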
CouchDB seems to fit this requirement really well. You explicitly create indexes on the fields that you want to query. Index creation is slow, but once it's done your queries are really fast. All data is compressed with snappy[1].
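If you're on CouchDB 2.x+, "explicitly create indexes" usually means posting a Mango index to the `/_index` endpoint and then querying `/_find`. A sketch of the two JSON bodies (field names and index name are made up for illustration):

```python
import json

# Hypothetical Mango index: POST this to http://host:5984/mydb/_index.
# Only the fields listed here become queryable via the index.
index_def = {
    "index": {"fields": ["year"]},
    "name": "year-index",  # made-up name
    "type": "json",
}

# Hypothetical query: POST this to http://host:5984/mydb/_find.
# The selector and sort both use the indexed field, so the query is fast.
query = {
    "selector": {"year": {"$gte": 2010}},
    "sort": [{"year": "asc"}],
}

print(json.dumps(index_def, indent=2))
print(json.dumps(query, indent=2))
```

Queries whose selector fields aren't covered by any index fall back to a full scan, which is where the "create indexes first" discipline pays off.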
CouchDB also encourages you to split your data across multiple databases. Effectively you can have thousands of databases all managed by a single CouchDB server instance. You can move your data into temporary databases and "purge" them when they are no longer required. It's all really cool once you get the hang of how to use this feature. Although if you query across databases, you'll have to "join" the result sets within your application.
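That application-side "join" is usually just a lookup-and-merge over the two result sets. A minimal sketch, assuming hypothetical `orders` and `users` documents fetched from two separate databases:

```python
# Hypothetical result sets, as if fetched from two separate CouchDB databases.
orders = [
    {"order_id": 1, "user_id": "a"},
    {"order_id": 2, "user_id": "b"},
]
users_db = [
    {"_id": "a", "name": "Alice"},
    {"_id": "b", "name": "Bob"},
]

# Build a lookup table from one side, then merge it into the other --
# the same thing a database would do for you with a hash join.
users_by_id = {doc["_id"]: doc for doc in users_db}
joined = [
    {**order, "user": users_by_id.get(order["user_id"])}
    for order in orders
]

for row in joined:
    print(row)
```

It's a few extra lines per query, but it keeps each database small and independently purgeable.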
You should give it a try, you'll really like it :)