> Pytorch’s interface is objectively much better than Tensorflow’s

Ummm, no. "Objectively" is utter nonsense here. For an objective view we would first need to define "better" and then measure both interfaces' performance against that definition. I think it is a matter of preference: I prefer the TensorFlow interface and don't mind its declarative style.

However, if one wants to criticize something, one could start with the static nature of TensorFlow (which you rightfully mentioned), which makes it hard to implement things like LSTMs and dynamic batching. That works better in Torch. To me that is the only real point of attack against TensorFlow. But keep in mind that TensorFlow is still "1.x" software: TensorFlow Fold already addresses the problem with static graphs, and Google plans more support for dynamic graphs in TensorFlow 2.0.

It would also have been nice if you had measured the performance of both frameworks on common problems. Looking at the interface and shouting "bad" at TensorFlow is not really a critique but a personal dissatisfaction with it.

I think that TensorFlow currently aims more at production code than at research. The claim that it is too low-level for simple things like layers is also not right: have a look at the shipped contrib modules, and you'll find common layers in there.

In my experience (computer vision, deep learning) PyTorch is substantially faster as well, especially in data augmentation, where it's not just a thin layer over cuDNN.

That said, you’re right. There’s no way I’d deploy it to production.

Have you looked at ONNX? It is a neural network exchange format that, in particular, lets you deploy PyTorch models in production via Caffe2. Here is a tutorial: http://pytorch.org/docs/master/onnx.html

Disclaimer: I work on Caffe2 team (not on ONNX, though)

If I want to export my own computational graph to ONNX, where is the first place I should look? Do you know of any documentation or reference implementation of the format?

The reference specification and validator are hosted at https://github.com/onnx/onnx