Fascinating paper.

We design an inference accelerator which more or less accomplishes this by quantizing input tensors into logarithmic space. This allows the multiplications (especially in convolutions) to be optimized into very simple adders. This, along with a few other tricks, dramatically increases the compute density we achieve while keeping power very low. We keep the tensors in our quantized space throughout the layers of the network and convert the outputs as required on the way out of the ASIC.
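Here's a rough Python sketch of the general idea. The actual number formats, rounding rules, and accumulation scheme in our design aren't described here, so everything below (power-of-two quantization, separate sign bits, the eps floor) is just an illustration of why log-domain multiplies reduce to exponent additions:

```python
# minimal sketch of power-of-two (log-domain) quantization, NOT the real design
import numpy as np

def to_log2(x, eps=2**-8):
    """Quantize magnitudes to integer powers of two; keep the sign separately."""
    sign = np.sign(x)
    exp = np.round(np.log2(np.maximum(np.abs(x), eps))).astype(np.int32)
    return sign, exp

def log_dot(xs, xe, ws, we):
    """Dot product where each multiply is just an exponent addition;
    the accumulate shifts back to linear space."""
    prod_sign = xs * ws            # sign of each product
    prod_exp = xe + we             # a*b in log space is exp_a + exp_b
    return np.sum(prod_sign * np.exp2(prod_exp.astype(np.float64)))

x = np.array([0.53, 1.9, 0.26])
w = np.array([-0.48, 0.12, 0.97])
xs, xe = to_log2(x)
ws, we = to_log2(w)
print(np.dot(x, w), log_dot(xs, xe, ws, we))   # exact vs. log-quantized result
```

In hardware the exp2/shift step is a barrel shifter, which is what lets the multiplier array collapse into adders.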

We achieve impressive task-level performance, but this requires some specialized training and model optimizations.

Very cool to see ideas like this propagate more into the mainstream.

Isn't matrix multiplication already a convolution? You are rotating the right-hand-side matrix 90 degrees anticlockwise and then convolving it over the LHS matrix from top to bottom.

no, it is not, and i am not

discrete convolution is cₙ = Σᵢ aᵢbₙ₋ᵢ
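concretely, a tiny python sketch of that sum (it matches numpy.convolve in its default "full" mode):

```python
# c_n = sum_i a_i * b_{n-i}
import numpy as np

def conv(a, b):
    c = np.zeros(len(a) + len(b) - 1)
    for n in range(len(c)):
        for i in range(len(a)):
            if 0 <= n - i < len(b):          # skip terms where b_{n-i} doesn't exist
                c[n] += a[i] * b[n - i]
    return c

a, b = np.array([1., 2., 3.]), np.array([0., 1., 0.5])
print(conv(a, b))          # [0.  1.  2.5 4.  1.5]
print(np.convolve(a, b))   # same
```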

there is no way in which the indexes into the input matrices in a matrix multiplication are formed from sums or differences of indices and dummy variables; in matrix multiplication, cᵢⱼ = Σₖ aᵢₖbₖⱼ, every index appears on its own

however, convolution is a matrix multiplication, specifically multiplication by the circulant matrix of the convolution kernel
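a quick numerical check of that claim, for circular convolution (linear convolution uses a Toeplitz matrix instead, same idea):

```python
# circular convolution equals multiplication by the circulant matrix of the kernel
import numpy as np

def circulant(k):
    n = len(k)
    # column j is the kernel cyclically shifted by j, so C[i][j] = k[(i - j) % n]
    return np.array([np.roll(k, s) for s in range(n)]).T

k = np.array([0., 1., 0.5, 0.])   # kernel, zero-padded to the signal length
x = np.array([1., 2., 3., 4.])
C = circulant(k)
print(C @ x)                                                  # [5.5 3.  2.5 4. ]
print(np.real(np.fft.ifft(np.fft.fft(k) * np.fft.fft(x))))   # circular convolution, same
```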

hth, hand

Sure, it doesn't sum the whole matrix, but it does sum row by row. Also, how did you type out LaTeX on HN? Or is that a font?

it sums products, but convolution is summing products in a particular way that is not general matrix multiplication

i typed special characters with the compose key; cf. https://github.com/kragen/xcompose

not as easy as latex but more compatible
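for the curious, ~/.XCompose entries look roughly like this (these particular key sequences are made-up examples, not necessarily the bindings in that repo):

```
<Multi_key> <underscore> <n> : "ₙ"   U2099  # SUBSCRIPT LATIN SMALL LETTER N
<Multi_key> <underscore> <i> : "ᵢ"   U1D62  # LATIN SUBSCRIPT SMALL LETTER I
<Multi_key> <S> <S>          : "Σ"   U03A3  # GREEK CAPITAL LETTER SIGMA
```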