Is there any reason to believe that the first open source ChatGPT clone won't consume most mindshare, a la Stable Diffusion?
On top of the hardware requirements (tens of thousands of GPUs are needed for a language model as big as GPT-3), there's also a lot of work involved in RLHF models like ChatGPT: you need to pay people to write and review thousands to tens of thousands of responses for training. See 'Methods' here: https://openai.com/blog/chatgpt/
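To make the "review responses" part concrete, here's a minimal sketch of the reward-model step of RLHF as described in that post: labelers rank candidate responses, and those comparisons train a scoring model with a pairwise (Bradley-Terry style) loss, which later guides the RL fine-tuning. Everything below (the toy model, the random token ids standing in for tokenized text) is an illustrative placeholder, not OpenAI's actual setup.

    # Sketch: training a reward model from human preference comparisons.
    import torch
    import torch.nn as nn

    class ToyRewardModel(nn.Module):
        """Scores a tokenized (prompt, response) pair; stands in for a fine-tuned LM head."""
        def __init__(self, vocab_size=1000, dim=64):
            super().__init__()
            self.embed = nn.EmbeddingBag(vocab_size, dim)  # mean-pools token embeddings
            self.score = nn.Linear(dim, 1)                 # scalar reward per sequence

        def forward(self, token_ids):
            return self.score(self.embed(token_ids)).squeeze(-1)

    model = ToyRewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # One human comparison per row: a preferred and a rejected response to the
    # same prompt (random ids here, standing in for real tokenized text).
    chosen = torch.randint(0, 1000, (8, 32))
    rejected = torch.randint(0, 1000, (8, 32))

    # Pairwise loss: push the preferred response's score above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"pairwise loss: {loss.item():.3f}")

The expensive part isn't this training loop, it's producing those (chosen, rejected) pairs at scale with paid human labelers.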
LAION's Open Assistant is trying to address that with gamified crowdsourcing.