So, just so I understand: this is all based on taking input from the user and injecting it into a template prompt that instructs ChatGPT to answer the question, supplying it all the source material? What happened to building your own models to run offline?

That's like complaining that a musician doesn't build their own piano. Or rather, it's like asking why they don't build their own piano factory. No single developer has the skills or resources to build something like GPT. Even if it were open source, no user would be able to run it locally anyway.

No, you completely miss the point.

It's like complaining that a musician cannot practice their instrument without software that requires them to be always online.

> Even if it was open source no user would be able to run it locally anyway.

You are just stating this; that does not make it true. Several of us are running GPT-3 workloads locally.

> Several of us are running GPT-3 workloads locally.

Unless you work for OpenAI (and your “local” machine somehow has 1TB+ of VRAM, equivalent to roughly 25 A100s), this cannot possibly be true…
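For a rough sanity check of those numbers, here is a back-of-envelope sketch. The parameter count (175B) is GPT-3's published figure; the fp16 assumption (2 bytes per parameter) and the 40 GB A100 variant are my assumptions, and this counts weights only — activations and serving overhead push the real requirement well higher, which is presumably where the ~1TB/25-card figure comes from.

```python
# Back-of-envelope VRAM estimate for GPT-3 weights alone.
# Assumptions: fp16 storage (2 bytes/param), 40 GB A100 cards.
PARAMS = 175e9          # GPT-3 parameter count (published)
BYTES_PER_PARAM = 2     # fp16
A100_VRAM_GB = 40       # 40 GB A100 variant

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # GB just for the weights
min_a100s = -(-weights_gb // A100_VRAM_GB)    # ceiling division

print(f"fp16 weights: {weights_gb:.0f} GB -> at least {min_a100s:.0f} x A100-40GB")
# -> fp16 weights: 350 GB -> at least 9 x A100-40GB
```

Even this lower bound (~350 GB for the weights alone) is far beyond any consumer machine, which is the point being made.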

1.7k forks… https://github.com/openai/gpt-3

Apart from that, several pre-trained corpora have been around for a while.