Seems like a sensible project for them since shaving 4% off YouTube's traffic translates to millions of dollars in savings. But I'm more excited about the possibility of using deep learning image models to get dramatically higher compression rates. Some of the work I've seen on de-noising and super-resolution suggests that we are barely scratching the surface of what might be possible for high-def video compression. Of course there is something of a time vs. space tradeoff, since these techniques would require far more compute for both encoding and decoding. But compute is pretty cheap and underutilized on the client side now, and Google probably has a huge amount of excess capacity, provisioned for usage spikes, that could be used for background processing.
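
To make the super-resolution angle concrete, here's a rough sketch of the idea in PyTorch. Everything here is illustrative: the TinySR network is untrained and tiny, and a real system would train a much larger model on downscaled/original frame pairs. The point is just the shape of the pipeline: ship fewer pixels, reconstruct detail on the client.

    # Sketch: server downscales frames before encoding (that's where the
    # bitrate saving comes from), client restores resolution with a learned
    # super-resolution model. TinySR is an untrained placeholder.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySR(nn.Module):
        """Minimal SRCNN-like net: cheap upsample, then convs refine detail."""
        def __init__(self, scale=2):
            super().__init__()
            self.scale = scale
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
                nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
                nn.Conv2d(32, 3, 5, padding=2),
            )

        def forward(self, low_res):
            # Bicubic upsample first, then learn the residual detail on top.
            up = F.interpolate(low_res, scale_factor=self.scale,
                               mode="bicubic", align_corners=False)
            return up + self.net(up)

    # "Encoder" side: downscale a 720p frame 2x -> 4x fewer pixels to encode
    # with a conventional codec.
    frame = torch.rand(1, 3, 720, 1280)            # stand-in for a decoded frame
    low_res = F.interpolate(frame, scale_factor=0.5, mode="bicubic",
                            align_corners=False)

    # "Decoder" side: reconstruct full resolution with the learned model.
    model = TinySR(scale=2).eval()
    with torch.no_grad():
        restored = model(low_res)
    print(low_res.shape, "->", restored.shape)     # 360x640 -> 720x1280

This is where the time/space tradeoff shows up: the extra convolutions run on every decoded frame, so the saving in bits is paid for in client-side compute.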

For procedurally generated anime there is waifu2x (https://github.com/nagadomi/waifu2x). After procedural generation it is recommended to run denoising on the resulting image to improve quality.
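
A small sketch of driving waifu2x's Torch CLI from Python to do the denoise + 2x upscale step. The -m noise_scale and -noise_level flags are as I recall them from the repo's README; double-check there, since the options differ between the original Torch version and the various ports.

    # Hedged wrapper around the waifu2x Torch CLI (flags assumed from README).
    import subprocess

    def waifu2x_noise_scale(src, dst, noise_level=1):
        """Denoise and 2x-upscale an anime-style image with waifu2x."""
        subprocess.run(
            ["th", "waifu2x.lua",
             "-m", "noise_scale",              # combined denoise + upscale mode
             "-noise_level", str(noise_level), # 0-3 in the versions I've seen
             "-i", src,
             "-o", dst],
            check=True,                        # raise if waifu2x exits non-zero
        )

    waifu2x_noise_scale("frame.png", "frame_2x.png")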