At work, we switched over from TensorFlow to PyTorch when 1.0 was released, both for R&D and production... and our productivity and happiness with PyTorch improved noticeably.

Back when we were using TensorFlow, whenever we wanted to try something new that wasn't already provided out-of-the-box by existing APIs, sooner or later we would find ourselves wrestling with its machinery, especially for models with more complex control flow.
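To make the control-flow point concrete, here is a minimal sketch (assuming PyTorch is installed; the function name and threshold are made up for illustration) of the kind of data-dependent control flow that is natural in PyTorch's eager mode: an ordinary Python while-loop whose trip count depends on the data, with autograd tracking gradients through it automatically.

```python
import torch

def repeat_until_norm(x, threshold=10.0):
    # Keep doubling x until its L2 norm exceeds the threshold.
    # The number of loop iterations depends on the input values,
    # which graph-mode TensorFlow 1.x forced you to express via
    # special ops like tf.while_loop.
    while x.norm() < threshold:
        x = x * 2
    return x

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = repeat_until_norm(x)      # runs 3 iterations for this input
y.sum().backward()            # gradients flow through the dynamic loop
```

No separate graph-construction step, no session: the model is just Python, so you can debug it with `print` or `pdb` like any other code.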

TensorFlow feels like it was built from the ground up to scale to billions of users and all kinds of devices, with developer productivity and happiness as secondary priorities. PyTorch feels like it was built the other way around: developer productivity and happiness first, everything else second.

That said, we are keeping an eye on Swift + MLIR + TensorFlow. We think it could unseat PyTorch for R&D and, eventually, production, due to (a) the promise of automatically generating high-performance GPU/TPU kernels without hassle, (b) Swift's gentle learning curve, and (c) Swift's fast performance and type safety. Jeremy Howard has a good post about this: https://www.fast.ai/2019/03/06/fastai-swift/

If you want type safety on top of PyTorch, check out this project: https://github.com/NVIDIA/NeMo