"Follow the money" generally works as a proxy [0], and I can't get over the fact that Altman sold 49%, and effectively ceded control to MSFT for $29 B.
If ChatGPT were all that, you'd imagine at least 10x that figure, with OpenAI's founders keeping control, as the baseline.
[0] https://garymarcus.substack.com/p/is-microsoft-about-to-get-...
It is all that. The problem is that there isn't much of a moat. You can get GPT-3-level performance on a high-end laptop, and a GPT-4 equivalent on high-end hardware seems to be coming. A lot of advancement is happening in quantization and pruning, and training can be done by anyone with access to enough compute.
This is just going to become a ubiquitous tool and part of the computing toolbox. I increasingly think the winner-take-all dynamics of social media and search may not apply. There will be many cloud-hosted AIs, and plenty you can run yourself if you feel like spending a few thousand dollars on hardware. That cost will fall as more special-purpose NPU hardware enters the market and as acceleration becomes a standard part of CPUs.
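To give a feel for why quantization shrinks models enough to run on consumer hardware: each 32-bit float weight gets replaced by a small integer code plus a shared scale factor, cutting memory roughly 4x for 8-bit (more for 4-bit). A minimal illustrative sketch, not the method of any particular library:

```python
# Toy symmetric 8-bit quantization: store int8 codes + one float scale
# per weight group instead of full 32-bit floats.

def quantize_int8(weights):
    """Map float weights onto integer codes in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Recover approximate float weights from the codes."""
    return [c * scale for c in codes]

weights = [0.12, -0.5, 0.03, 0.9, -0.07]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)

# Rounding error is bounded by one quantization step (the scale).
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

Real schemes (GPTQ, GGML-style k-quants, etc.) are more sophisticated — per-block scales, outlier handling, 4-bit codes — but the storage trade-off is the same idea.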
Is a GPT-3-like model currently available for chat-style use that works on a laptop? How might I go about getting started with locally running a model of this quality?
Probably the fastest way to get started is to look into [0] - this only requires a beta Chromium browser with WebGPU enabled. For a more integrated setup, I am under the impression [1] is the main tool used.
If you want a look at the quality possible before getting started, [2] is an online service by Hugging Face that hosts one of the best of the current generation of open models (OpenAssistant with 30B LLaMA).
[0]: https://mlc.ai/web-llm/

[1]: https://github.com/oobabooga/text-generation-webui

[2]: https://huggingface.co/chat