So, Tensil looks really cool. One constraint listed in the docs, though, is that it currently only supports convolutional networks.

What does the timeline look like for supporting some of the more popular transformer/attention-based architectures?

We're working on our roadmap right now and prioritizing support based on user interest. If there's a particular model or set of models you're interested in accelerating, I'd love to hear about it!

If there's a lot of interest in transformers, we'd aim to offer support in the next couple of months.

A lot of SOTA models seem to be gravitating towards transformer-based architectures. Obviously I can't speak for the entire field, but you can just take a look at the most popular Hugging Face repos and see what I mean. They started out focused on language, but because transformers have become so popular, they're expanding quickly into the audio and vision domains. Outside of research, their 'transformers' library is most people's go-to high-level framework, as it largely abstracts away the boilerplate that writing in pure TensorFlow, PyTorch, or JAX requires.

See:

https://huggingface.co/spaces

https://github.com/huggingface/transformers
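To give a sense of the boilerplate point, here's a minimal sketch using the `transformers` pipeline API. The model name is the library's default sentiment checkpoint, pinned here for reproducibility, and the input sentence is just an illustration; the equivalent in raw PyTorch would mean hand-writing tokenization, model loading, and a softmax over the logits.

```python
# Minimal sketch of the Hugging Face `transformers` pipeline API.
# The checkpoint below is the library's default for sentiment analysis,
# pinned explicitly so the example is reproducible.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

# Returns a list of dicts like {"label": ..., "score": ...}
result = classifier("Tensil looks really cool.")
print(result)
```

Swapping the task string (e.g. to "image-classification" or "automatic-speech-recognition") pulls in a different default model, which is how the same three-line pattern now covers vision and audio as well as language.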