But aren’t the weights still not for commercial use?
That's what I thought too; the source code was never really the issue so much as the weights.
What we need is some sort of "Large Language Model at Home" (like SETI@home was) that could crowdsource the training of a model that would be free to use.
Right, so sort of like https://github.com/bigscience-workshop/petals but for the training phase. I suppose different training runs could be proposed via an RFC-type procedure. Then it's not only the open source model maintainers who put in the effort; supporters of the project could also "donate" their hardware resources. Something like the toy sketch below is what I have in mind.
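Just to make the "donated hardware" idea concrete, here's a rough sketch in plain PyTorch that simulates three volunteers each computing gradients on their own local batch, which a coordinator then averages into one update. It's only an illustration on a single machine; a real system (hivemind, which Petals builds on, is the obvious candidate) would do the averaging over the internet with fault tolerance, and all the names here are made up for the example.

    # Toy sketch only: simulates several "volunteer" machines each computing
    # gradients on their own data, then averaging them into one update.
    # Real crowdsourced training does this over the internet; this is just
    # the core idea, with hypothetical names.
    import torch
    import torch.nn as nn

    model = nn.Linear(128, 2)          # stand-in for a real language model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    def volunteer_gradients(model, data, target):
        """One donor's contribution: gradients computed on their local batch."""
        model.zero_grad()
        loss = nn.functional.cross_entropy(model(data), target)
        loss.backward()
        return [p.grad.clone() for p in model.parameters()]

    # Pretend three supporters each donated one training step's worth of compute.
    contributions = [
        volunteer_gradients(model, torch.randn(16, 128), torch.randint(0, 2, (16,)))
        for _ in range(3)
    ]

    # The coordinator (or a decentralized averaging step) merges the gradients
    # and applies a single optimizer update to the shared model.
    for param, *grads in zip(model.parameters(), *contributions):
        param.grad = torch.stack(grads).mean(dim=0)
    optimizer.step()

The RFC part would then just be about which model, dataset, and hyperparameters a given run like this uses, and who's allowed to submit contributions.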