Fifteen years after the initial release of FLAC – have there been any significant developments in the lossless compression of audio since then?

I know there’s FLIF[1] for lossless image compression and Zstandard[2] for general purpose lossless compression that have recently hit the Hacker News front page. Are their adopted techniques not suitable for audio?

[1] http://flif.info/

[2] https://code.facebook.com/posts/1658392934479273/smaller-and...

Let's see:

- WavPack [1], which is a rough contemporary of FLAC but offers three tiers of presets (normal, high, and extra high) and an innovative (and optional) hybrid lossy mode

- TAK [2], which compresses better and decodes faster than either, but was initially closed-source until the developer was persuaded to open it up

- LossyWAV [3], which isn't lossless itself but pre-processes audio by chopping off least-significant bits (with noise shaping) so that the result compresses better when fed to a lossless compressor
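
A toy sketch of the bit-reduction idea (the noise-shaping step, which is the clever part of LossyWAV, is omitted; the function name and rounding scheme are my own, for illustration only). Zeroing the n least-significant bits leaves samples on a coarse grid, which a downstream lossless codec compresses much better:

```python
def truncate_lsbs(samples, bits_to_remove):
    """Round 16-bit PCM samples to a coarser grid, zeroing `bits_to_remove` LSBs."""
    step = 1 << bits_to_remove
    out = []
    for s in samples:
        # Round to the nearest multiple of `step`, clamping to the 16-bit range.
        q = ((s + step // 2) // step) * step
        out.append(max(-32768, min(32767, q)))
    return out

print(truncate_lsbs([1000, -1003, 517], 3))  # → [1000, -1000, 520]
```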

Most of these developments were first publicized on Hydrogenaudio. As for innovations in the last two years, though, none that I'm aware of.

[1] http://wiki.hydrogenaud.io/index.php?title=WavPack

[2] http://wiki.hydrogenaud.io/index.php?title=TAK

[3] http://wiki.hydrogenaud.io/index.php?title=LossyWAV

EDIT (for some more background): lossless audio compression generally uses linear prediction to approximate the next few samples, then encodes the difference between the prediction and the actual signal with an entropy coder such as Golomb-Rice codes, Huffman coding, or arithmetic coding. Most of Zstandard's improvements are algorithmic or implementation-related rather than information-theoretic, but the part that could show promise is the tANS entropy coder [4] it uses. Then again, Golomb-Rice codes already perform well on the residuals that linear predictors produce, so I'm not sure what to expect [5].

[4] https://github.com/Cyan4973/FiniteStateEntropy

[5] 'Benchmarks' section under [4]
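
A minimal sketch of that prediction-plus-residual pipeline (my own toy code, not FLAC's actual algorithm; the first-order fixed predictor and the Rice parameter k=2 are assumptions for illustration, where real codecs pick the predictor order and k per block):

```python
def predict_residuals(samples):
    """First-order fixed predictor: guess each sample equals the previous one."""
    residuals, prev = [], 0
    for s in samples:
        residuals.append(s - prev)
        prev = s
    return residuals

def zigzag(n):
    """Map signed residuals to unsigned: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * n if n >= 0 else -2 * n - 1

def rice_encode(value, k):
    """Rice code: quotient in unary (q ones then a zero), remainder in k bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

samples = [10, 11, 12, 12, 13, 11, 10, 10]
residuals = predict_residuals(samples)  # residuals cluster near zero
bits = "".join(rice_encode(zigzag(r), 2) for r in residuals)
print(len(bits), "bits for", len(samples), "samples")  # 29 bits for 8 samples
```

Because the predictor leaves mostly small residuals, the unary quotients stay short and the stream beats storing the raw samples; this is exactly the regime where Rice codes are hard to improve on.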