Required reading: "OpenAI Trains Language Model, Mass Hysteria Ensues" by CMU Professor Zachary Lipton, http://approximatelycorrect.com/2019/02/17/openai-trains-lan...

A salient quote from the article:

> However, what makes OpenAI’s decision puzzling is that it seems to presume that OpenAI is somehow special—that their technology is somehow different than what everyone else in the entire NLP community is doing—otherwise, what is achieved by withholding it? However, from reading the paper, it appears that this work is straight down the middle of the mainstream NLP research. To be clear, it is good work and could likely be published, but it is precisely the sort of science-as-usual step forward that you would expect to see in a month or two, from any of tens of equally strong NLP labs.

Suffice it to say, most of what comes out of OpenAI is vanilla work that doesn't go beyond what academic research labs are already doing. They do spend a lot of time on PR and making their releases look polished, I guess.

To be fair, they've also released some useful tools, such as OpenAI Gym.
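
For context, Gym's whole API is basically a tiny reset/step loop, which is a big part of why it was useful. Here's a minimal sketch, assuming gym >= 0.26, where reset() returns (obs, info) and step() returns five values; the environment name and random policy are just for illustration:

```python
import gym

# Minimal Gym control loop (gym >= 0.26 API): reset() returns (obs, info),
# step() returns (obs, reward, terminated, truncated, info).
env = gym.make("CartPole-v1")
obs, info = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # random policy, just to exercise the API
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```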

To be fair, all of their useful tools have since been deprecated or are in 'maintenance mode'. See https://github.com/openai/gym