The real trick is to use a parallelized implementation of the compression tool, viz:
tar --use-compress-program /usr/bin/lbzip2 --create --file file.tar.bz2 somedir/
or, if you must produce a .gz:
tar --use-compress-program /usr/bin/pigz --create --file file.tar.gz somedir/
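A minimal, self-contained sketch of the same idea, which falls back to plain gzip when pigz isn't installed (the directory and file names here are placeholders, not anything from the original commands):

```shell
mkdir -p somedir && echo "example data" > somedir/a.txt   # sample input to archive
COMPRESSOR=$(command -v pigz || command -v gzip)          # prefer the parallel compressor
tar --use-compress-program "$COMPRESSOR" --create --file archive.tar.gz somedir/
tar --list --file archive.tar.gz                          # verify the archive contents
```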
As an FYI, pigz can also speed up decompression, but the boost is minimal: the decompression itself still runs in a single thread, with extra threads spawned only for I/O, checksumming, and so forth. For compression, the speedup scales close to linearly with the number of cores.
igzip (https://github.com/intel/isa-l) is much faster than gzip or pigz when it comes to decompression, 2-3x in my experience. There is also a Python module (isal) that provides a GzipFile-like wrapper class, for an easy speed-up of Python scripts that read gzipped files.
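A rough sketch of using that wrapper module as a near-drop-in for the stdlib: this assumes the PyPI package is installed as `isal` and that its `isal.igzip` module mirrors the `gzip` API; it falls back to stdlib `gzip` when the package is absent, so the snippet runs either way.

```python
import gzip

# Prefer the ISA-L-backed implementation when available; otherwise use stdlib gzip.
try:
    from isal import igzip as gzip_impl  # assumption: installed via `pip install isal`
except ImportError:
    gzip_impl = gzip

payload = b"some text that was gzipped upstream\n" * 100
compressed = gzip.compress(payload)

# Decompression works identically with either backend.
assert gzip_impl.decompress(compressed) == payload
```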
However, igzip only supports compression levels up to 3, so it can't be used as a full drop-in replacement for gzip when compressing. Also make sure to use the latest version if you are going to use it in a bioinformatics context, since older versions choke on the concatenated gzip files common in that field.
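To illustrate what a concatenated gzip file is, here is a small stdlib-only sketch: two independently compressed members joined back to back (as produced by e.g. `cat a.gz b.gz > ab.gz`). A spec-compliant reader must decompress all members, not just the first; that multi-member handling is what older igzip versions got wrong.

```python
import gzip

# Two independent gzip members, concatenated into one stream.
part1 = gzip.compress(b"ACGT")
part2 = gzip.compress(b"TTGA")
data = part1 + part2

# The stdlib reader handles multi-member streams and returns all the data.
assert gzip.decompress(data) == b"ACGTTTGA"
```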