What does HackerNews think of lora?
Using Low-rank adaptation to quickly fine-tune diffusion models.
Have a look at various Twitter accounts that post AI images, e.g. https://twitter.com/PLAawesome/media
These have had progressively better and better hands over time. You can clearly see that they've been using various model merges (via LoRA merging techniques like https://github.com/cloneofsimo/lora) to combine two different models and get the best of both. Many have produced better hands and contributed them back. NSFW, but this is one I found that has very realistic hands now: https://civitai.com/models/2661
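As a rough illustration of how that merging works (a minimal sketch, assuming generic tensor names and a blend weight `alpha`; this is not the cloneofsimo/lora API), each LoRA ships a pair of low-rank matrices whose product is a weight delta that can be folded into the base checkpoint at a chosen strength:

```python
import torch

def merge_lora(base_weight: torch.Tensor,
               lora_down: torch.Tensor,
               lora_up: torch.Tensor,
               alpha: float) -> torch.Tensor:
    """Fold one LoRA's low-rank delta into a base weight matrix.

    base_weight: (out_features, in_features) from the base model
    lora_down:   (rank, in_features)   -- often called "A"
    lora_up:     (out_features, rank)  -- often called "B"
    alpha:       merge strength; 0 keeps the base unchanged,
                 1 applies the delta at full strength
    """
    delta = lora_up @ lora_down        # reconstruct the full-size delta
    return base_weight + alpha * delta

# Blending two LoRAs (e.g. one for style, one for hands) is just a
# weighted sum of their deltas on the same base weight:
#   merged = merge_lora(merge_lora(W, A1, B1, 0.7), A2, B2, 0.5)
```

Tuning the per-LoRA alphas is essentially how people dial in the "best of both" mix between two models.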
It is moving faster than I can keep up. This is open source collaboration at its best. I am very glad that Stable Diffusion was released publicly. Now if only OpenAI would do the same with their GPT models.
- dreambooth: ~15-20 minutes of fine-tuning, but generally produces high-quality and diverse outputs if trained properly;
- textual inversion: you essentially find a new "word" in the embedding space that describes the object/person; this can give good results, but is generally less effective than dreambooth (see the first sketch after this list);
- LoRA fine-tuning[1]: similar to dreambooth, but you're essentially fine-tuning low-rank weight deltas on top of the frozen base model to achieve the look; faster than dreambooth, with a much smaller output file (see the second sketch after this list).
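For context, here is a minimal sketch of the textual inversion idea (the CLIP-like sizes and the pseudo-token mechanism are assumptions, not a specific library's API): the embedding table stays frozen and a single new vector is trained to act as the new "word":

```python
import torch
import torch.nn as nn

# Frozen text-encoder embedding table (CLIP-like sizes, assumed).
vocab = nn.Embedding(num_embeddings=49408, embedding_dim=768)
vocab.weight.requires_grad_(False)

# The only trainable parameter: one new "word" vector. Training nudges
# it until prompts containing the pseudo-token reproduce the target
# object/person through the frozen diffusion model.
new_word = nn.Parameter(torch.randn(768) * 0.01)

def embed_prompt(token_ids: torch.Tensor, pseudo_token_id: int) -> torch.Tensor:
    """Embed a prompt, splicing the learned vector in at the pseudo-token."""
    mask = (token_ids == pseudo_token_id).unsqueeze(-1)   # (seq_len, 1)
    return torch.where(mask, new_word, vocab(token_ids))  # (seq_len, 768)
```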
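And a minimal sketch of the LoRA idea itself (the shapes, names, and scale factor here are assumptions): the base weight is frozen and only a low-rank delta B @ A is trained, which is why the shareable artifact is tiny compared to a full fine-tune:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank delta."""

    def __init__(self, base: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False      # freeze the original weights
        # Only these two small matrices are trained and shipped.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = scale               # B starts at zero, so the delta
                                         # is initially a no-op

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base(x) + scale * x @ (B @ A)^T, without materializing B @ A
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)
```

Training optimizes only `lora_A` and `lora_B`, so the output is a few megabytes of deltas rather than a multi-gigabyte checkpoint, which is what makes sharing and merging these so fast.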