GPT-3's strength is in language generation, so evaluating it on *GLUE (where encoder-type models are simply better) and claiming 99.9% fewer parameters is sensationalism.

Yes, it can answer simple yes/no questions, but can it write dad jokes like GPT-3?

For me, another question is crucial: is it open (i.e., is there an open-source reference implementation), or is it closed like the so-called "OpenAI" products/services?

Can you please explain (I don't understand it) why you think GPT-3 is closed? Yes, they won't share the trained model, but they're sharing the research here [0][1], so you can reproduce it, can't you? As I understand it, that's fair: training the model is a separate thing from doing (and sharing) the research, it is very costly, and it would not happen if they were forced to open that too. I also don't understand why they should be.

[0] https://arxiv.org/abs/2005.14165

[1] https://github.com/openai/gpt-3