What does HackerNews think of lora?

Using Low-rank adaptation to quickly fine-tune diffusion models.

Language: Jupyter Notebook

It has been happening already, for quite a while now (at least a month or so).

have a look at various twitter accounts that post AI images: e.g. https://twitter.com/PLAawesome/media

these have had progressively better and better hands over time. You can clearly see that they've been using various model merges (including LoRA merging techniques like https://github.com/cloneofsimo/lora) to combine two different models and get the best of both. Many have trained better hands and contributed the results back. NSFW, but this is one I found that has very realistic hands now: https://civitai.com/models/2661
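The merging the comment describes can be sketched in a few lines: a LoRA adapter stores two small low-rank matrices instead of a full weight delta, and "merging" folds that delta into the base weights with a strength factor. This is a minimal NumPy sketch with hypothetical shapes (the real repo operates on PyTorch state dicts), not the repo's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny layer: a real SD attention weight would be much larger.
d_out, d_in, rank = 8, 8, 2
W_base = rng.normal(size=(d_out, d_in))   # frozen base-model weight

# A trained LoRA adapter is just these two small factors.
A = rng.normal(size=(rank, d_in))
B = rng.normal(size=(d_out, rank))

alpha = 0.7                                # merge strength, tunable per adapter
W_merged = W_base + alpha * (B @ A)        # fold the low-rank delta into the base

# The merged weight keeps the base shape, so it drops into the model unchanged.
assert W_merged.shape == W_base.shape
```

Because the delta is additive, several adapters trained on different concepts (hands, style, a face) can be folded into one checkpoint by summing their scaled deltas, which is what the merged community models are doing.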

It moves faster than I can keep up with. This is open-source collaboration at its best. I am very glad that Stable Diffusion was released publicly. Now if only OpenAI would do the same with their GPT models.

ControlNet is a total game changer for generative image models; it solves so many things that were problematic before (proper depth, pose, sensible text, and much more). This, along with LoRA[0] and other improvements from the SD community, really turns this into a super capable toolchain.

[0] https://github.com/cloneofsimo/lora

Yes, right now you have 3 options:

- dreambooth, ~15-20 minutes finetuning but generally generates high quality and diverse outputs if trained properly,

- textual inversion, you essentially find a new "word" in the embedding space that describes the object/person, this can generate good results, but generally less effective than dreambooth,

- LoRA finetuning[1], similar to dreambooth, but you're essentially finetuning low-rank weight deltas to achieve the look; faster than dreambooth, with a much smaller output file.

1: https://github.com/cloneofsimo/lora
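The "much smaller output" claim from the list above is easy to quantify: instead of saving a full finetuned weight matrix, LoRA saves only the two low-rank factors. A back-of-the-envelope sketch (the dimension and rank here are illustrative assumptions, not values from the repo):

```python
# Parameter count for one weight matrix, full finetune vs. LoRA.
d = 768     # hypothetical attention dimension
rank = 4    # hypothetical LoRA rank

full_params = d * d            # a fully finetuned d x d weight matrix
lora_params = 2 * d * rank     # the two factors: B is d x r, A is r x d

print(full_params)             # 589824
print(lora_params)             # 6144  (~96x fewer parameters to store)
```

The same ratio applies per adapted layer, which is why LoRA checkpoints for Stable Diffusion are megabytes rather than the multi-gigabyte files dreambooth produces.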