Since LoRAs are additive, is it possible to use them to do distributed retraining of a model, or even to train an entire model bit by bit?

Like a torrent network, but for training. That would be cool. The only question is: how do you merge the changes made by nodes (clients) across the network?
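One naive way to merge would be federated-averaging style: each client sends back its LoRA matrices and you average the resulting low-rank deltas before folding them into the base weights. A minimal sketch, with purely illustrative names and shapes (nothing here is an established protocol):

```python
# Hypothetical sketch: merging LoRA updates from several clients by
# averaging their deltas (FedAvg-style). Shapes/names are illustrative.
import numpy as np

def merge_lora_deltas(client_updates, weights=None):
    """Average per-client LoRA deltas (delta_W = B @ A) for one layer.

    client_updates: list of (A, B) pairs, A of shape (r, d_in),
                    B of shape (d_out, r), one pair per client.
    weights: optional per-client weights (e.g. proportional to how much
             data each client trained on); defaults to uniform.
    """
    if weights is None:
        weights = [1.0 / len(client_updates)] * len(client_updates)
    # Average the full low-rank products rather than A and B separately,
    # since mean(B) @ mean(A) != mean(B @ A) in general.
    return sum(w * (B @ A) for w, (A, B) in zip(weights, client_updates))

# Usage: fold the merged delta into the base weight matrix.
# W_base = W_base + merge_lora_deltas(updates)
```

Whether plain averaging actually works here (clients drifting on different data, conflicting updates, malicious nodes) is exactly the open question.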

Clients could be incentivised to train much as they are with crypto: instead of mining, they do model training and in return they get "coin". It would be like making crypto mining useful.