Is this unexpected? I feel like we're heading towards more hardware implementations of Machine Learning to work around the current bottlenecks.

There are already a huge number of hardware-based implementations. Any reasonably sophisticated TV upscaler (Sony's 4K TVs, certainly) probably has a neural network embedded in it.

Most algorithms working on images need FPGA implementations to be fast enough, and there are a lot of convnets in use there.
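To see why convnets map so naturally onto FPGA fabric: a single convolution is just regular, fixed-size multiply-accumulate loops, which is exactly what DSP blocks are built for. A minimal sketch (the function name and shapes are illustrative, not from any particular design):

    import numpy as np

    def conv2d_valid(image, kernel):
        # One conv layer boils down to nested multiply-accumulate loops over
        # a fixed-size kernel -- regular, streaming arithmetic with no data-
        # dependent branching, which is why it pipelines well in hardware.
        kh, kw = kernel.shape
        oh = image.shape[0] - kh + 1
        ow = image.shape[1] - kw + 1
        out = np.zeros((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
        return out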

There's a chapter on convnets here: http://www.cambridge.org/us/academic/subjects/computer-scien...

RNNs are nothing special in particular, at least those with a small number of layers and nodes.
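A rough sketch of why a small vanilla RNN is unremarkable from a hardware point of view: each timestep is just two matrix multiplies, an add and a tanh (names and shapes here are illustrative only):

    import numpy as np

    def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
        # One timestep of a vanilla RNN: two matmuls, an add, a tanh.
        # With few layers and small hidden sizes this is cheap to lay out.
        return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

    def rnn_forward(xs, h0, W_xh, W_hh, b_h):
        h = h0
        hs = []
        for x_t in xs:  # unroll over the sequence
            h = rnn_step(x_t, h, W_xh, W_hh, b_h)
            hs.append(h)
        return hs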

HMMs or CRFs are easier to handle for sequences and would probably work well, and there are FPGA implementations of these models all over the place.
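Part of why they're so hardware-friendly: HMM decoding in log space is nothing but additions and maxima per step. A minimal Viterbi sketch, assuming log-probability tables as inputs (the argument names are my own):

    import numpy as np

    def viterbi(obs, log_start, log_trans, log_emit):
        # obs: sequence of observation indices
        # log_start[s], log_trans[s, s'], log_emit[s, o]: log-probabilities
        # Per step: add the transition scores, take a max, add the emission.
        scores = log_start + log_emit[:, obs[0]]
        back = []
        for o in obs[1:]:
            cand = scores[:, None] + log_trans   # score of every s -> s' move
            back.append(cand.argmax(axis=0))
            scores = cand.max(axis=0) + log_emit[:, o]
        # trace back the best state path
        path = [int(scores.argmax())]
        for bp in reversed(back):
            path.append(int(bp[path[-1]]))
        return list(reversed(path))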

Any links to TV upscaler chips using NN algorithms? I couldn't find any.

Also, my impression was that FPGAs are much slower than GPUs for neural nets, unless you're talking about really high-end chips like Altera's Stratix 10, which cost over $30k. Power consumption is a different matter, though.

https://community.sony.co.uk/t5/blog-news-from-sony/inside-4...

They won't tell you it's NNs, but it is. Sony distributes a lot of movies; they use that movie catalogue to train the upscaling models (which are pretty clearly NNs, in the same spirit as https://github.com/nagadomi/waifu2x ) and then put the chip in the TV.

It's almost equivalent to storing Pride and Prejudice and Zombies in your TV in 4K, then reproducing it when the chip matches it against what's playing.
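For concreteness, the waifu2x/SRCNN-style recipe that training looks like: downscale high-resolution frames to make training pairs, then teach a small convnet to restore the missing detail. A hedged sketch under those assumptions (the model and function names are mine, not Sony's or waifu2x's):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySRCNN(nn.Module):
        # A minimal SRCNN-like upscaler: three conv layers, nothing more.
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 32, 9, padding=4), nn.ReLU(),
                nn.Conv2d(32, 16, 5, padding=2), nn.ReLU(),
                nn.Conv2d(16, 3, 5, padding=2),
            )

        def forward(self, x):
            return self.body(x)

    def train_step(model, optimizer, hr_batch):
        # Make low-res inputs by downscaling, then bicubic-upscale back so
        # the network only has to learn the detail lost in between.
        lr = F.interpolate(hr_batch, scale_factor=0.5, mode="bicubic",
                           align_corners=False)
        upscaled = F.interpolate(lr, size=hr_batch.shape[-2:], mode="bicubic",
                                 align_corners=False)
        loss = F.mse_loss(model(upscaled), hr_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

Once trained, the weights are fixed, so the inference network is exactly the kind of thing you can bake into a dedicated chip in the TV.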