This is precisely why they should open source their model so that anyone can download and run it on their own infrastructure, just like Google and others have done: you're free to run those models on your own laptop (some even without a GPU), on premises, or on any cloud provider's infrastructure. They can continue to provide a hosted service for their model, but they should also allow it to be downloaded, just like BERT.
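
For example, "just like BERT" really is that simple in practice. A minimal sketch of what downloadable weights buy you, using the Hugging Face transformers library (my choice of library and checkpoint, purely for illustration, not anything OpenAI ships):

    # Run an open-weights model (BERT) locally; CPU-only is fine.
    # Assumes the `transformers` and `torch` packages are installed; the
    # weights are downloaded once, cached on disk, and need no API key.
    from transformers import AutoModel, AutoTokenizer
    import torch

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("Open weights run anywhere.", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 7, 768])

There's no equivalent for OpenAI's models: nothing to download, only an API endpoint.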

OpenAI CEO Sam Altman's take is that they will only ever allow API access to their models, to avoid misuse.

I don't get HN's insistence on wanting everything open sourced. Some things are expensive to create and dangerous in the wrong hands. Not everything can or should be open sourced.

Can you think of any non-weapons examples where centralization/gatekeeping of a tech meaningfully and causally benefited society or a technology itself?

Actually, thinking about my own question, I'm even inclined to remove the non-weapons qualifier. The most knee-jerk response, nuclear weapons, is perhaps the best example of unexpected benefit. The 'decentralization' of nuclear weapons is undoubtedly why the Cold War stayed the Cold War and didn't become World War 3, and similarly why we haven't seen an open war between nuclear-armed nations. What could have been one power ruling over all instead turned into "war with this country no longer has a win scenario," effectively ending open warfare between nuclear nations.

There's also the inevitability/optics argument. There are already viable open source alternatives [1], and should this tech ultimately prove viable/useful, they will only be the beginning. So there will certainly be "AI" that is open; it just won't come from OpenAI(tm)(c).

[1] - https://github.com/THUDM/GLM-130B