It's really remarkable how DALL-E missed the boat. At launch it was a truly amazing service with no equal. In the months since, both Midjourney and Stable Diffusion emerged and reached the point where they produce images of equal or better quality than DALL-E - and you didn't have to sit on a long waitlist to gain access! By keeping people locked out of DALL-E, OpenAI effectively handed these competitors free exposure.

Furthermore, DALL-E's pricing model is much worse than any of its competitors'. DALL-E makes you continuously think about how much money you're spending - a truly awful choice for a creative tool! Imagine if Photoshop charged you a cent for every brushstroke. Midjourney has a much better scheme (unlimited generations for only $30/month!), and, of course, Stable Diffusion is free.

This is a step in the right direction, but I feel it's too little, too late. Just compare the rates of development. Midjourney has cranked out a number of different models, including an extremely exciting new one ("--testp"), new upscaling features, improved faces, and much more. They're also super responsive to their community. In the meantime, what did OpenAI do? Outpainting? (And for months, DALL-E had a bug where clicking on any image on the homepage instantly consumed a token. How could such a serious error take so long to fix?) You have this incredible tool that people are so excited about that they're producing hundred-page guides on how to get better results out of it, and somehow none of that actually makes it into the product?

It truly proves the saying "Get Woke, Go Broke". All this pearl-clutching over safety really did them a disservice.

In all fairness, their release of Whisper[0] last week is genuinely impressive. Like CLIP, it has the potential to spawn a lot of further research and work because it's open source. I hope OpenAI learns from this, sidelines the "safety" shills, and focuses on producing more high-quality open source work - both code and models - that moves the field forward.

[0]: https://github.com/openai/whisper