From the article:

"Of course, you need a sufficiently large model to be able to learn from all this data, which is why GPT-3 is 175 billion parameters and probably cost between $1m-10m in compute cost to train.[2]"

So perhaps a better title would be "GPT in 60 Lines of Numpy (and $1m-$10m)"

And it will be even more expensive to retrain on larger amounts of data with a model that has 10 times as many parameters.

Only Big Tech giants like Microsoft and Google can afford to foot the bill and pour millions into training LLMs. Meanwhile we celebrate and hype ChatGPT as LLMs get bigger and significantly more expensive to train, even while they get confused, hallucinate over silly inputs, and confidently generate bullshit.

That can't be a good thing. OpenAI's ClosedAI model needs to be disrupted the way Stable Diffusion challenged DALL-E 2 with an open-source AI model.

I disagree. I run a small tech company with a group that's been experimenting with Stable Diffusion, and we've noticed that an extreme version of the Pareto principle applies here as well: you can get ~90% of the benefits for something like 5% of the cost. On top of that, computing power is continuously getting cheaper.

Based on that group's success, they've recently proposed a mini project inspired by GPT that I am considering funding. The data it's trained on is all publicly available for free, and most of it comes from Common Crawl. I suspect it will yield similar results: you can tailor your own version of GPT and get reasonably good models for a fraction of the price. We're nowhere near the scale of the Big Tech giants, but I've noticed for the better part of 15 years that small companies can derive a great deal of the benefits that larger companies have, for a fraction of the cost, if they play it smart and keep things tight.

Do you think it is possible for the AI to request information to fill in gaps in its model?

For example, the AI doesn't have enough information about a company's process, or a regulation. It chats with an expert to fill in the gaps.

I have no understanding of AI

Yes, check out LangChain [0]. It enables you to wire together LLMs with other knowledge sources or even other LLMs. For example, you can use it to hook GPT-3 up to WolframAlpha. I’m sure you could pretty easily add a way for it to communicate with a human expert, too.

[0]: https://github.com/hwchase17/langchain
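To make the idea concrete, here is a minimal sketch of the pattern LangChain implements, reduced to plain Python with no dependencies: the model can either answer directly or emit a structured request for outside help, and the surrounding loop routes that request to a tool (here, a human expert) and feeds the result back in as context. All the names and the `ASK_EXPERT:` protocol are hypothetical stand-ins, not LangChain's actual API; the stub functions stand in for a real LLM call and a real channel to an expert.

```python
def fake_llm(prompt):
    # Stand-in for a real LLM call (e.g. the OpenAI API). If the model
    # already has expert-provided context, it answers with it; if it
    # recognizes a gap in its knowledge, it asks for help instead.
    if "Context from expert:" in prompt:
        return "ANSWER: " + prompt.split("Context from expert:")[1].strip()
    if "SOC 2" in prompt:
        return "ASK_EXPERT: What is the SOC 2 renewal process?"
    return "ANSWER: " + prompt

def ask_human_expert(question):
    # In a real system this might post the question to Slack or a ticket
    # queue and block until a human replies.
    return "Renewals are handled by the compliance team every 12 months."

def run(prompt):
    out = fake_llm(prompt)
    if out.startswith("ASK_EXPERT:"):
        question = out[len("ASK_EXPERT:"):].strip()
        answer = ask_human_expert(question)
        # Feed the expert's answer back to the model as added context.
        out = fake_llm(prompt + "\nContext from expert: " + answer)
    return out

print(run("Summarize the SOC 2 renewal process"))
```

LangChain generalizes this loop: the "tools" can be WolframAlpha, a vector store, another LLM, or a human-in-the-loop, and the model decides at each step which one to invoke.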