What does HackerNews think of ControlNet?

Let us control diffusion models!

Language: Python

Midjourney provides much better images by default. It's really impressive.

Stable Diffusion's advantage is in the huge amount of open source activity around it. Most recently that resulted in ControlNet, which is far more powerful than anything Midjourney can currently do - if you know how to use it.

https://github.com/lllyasviel/ControlNet

ControlNet is a neural network attached to an already-trained model so it can be conditioned on new inputs like canny edges, depth maps, or segmentation maps. ControlNet lets you train this model on the new condition "easily", without catastrophic forgetting and without a huge dataset. In the repo linked by OP, they have trained a ControlNet model on the segmentation maps generated by SAM: https://segment-anything.com/
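The "without catastrophic forgetting" part comes from ControlNet's zero convolutions: the trainable conditioning branch is merged into the frozen model through convolutions initialized to all zeros, so at the start of training the branch contributes nothing and the pretrained model's behavior is untouched. A minimal toy sketch of that idea (plain NumPy matrix multiplies standing in for real conv layers; all names here are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" layer: an ordinary learned weight matrix.
W_frozen = rng.normal(size=(8, 8))

# Zero convolution: weights and bias start at exactly zero,
# and only these get trained on the new condition.
W_zero = np.zeros((8, 8))
b_zero = np.zeros(8)

def frozen_layer(x):
    return x @ W_frozen

def controlnet_layer(x, control):
    # The conditioning branch is added back through the zero conv,
    # so at initialization it outputs zeros and the frozen model's
    # behavior is perfectly preserved (no catastrophic forgetting).
    branch = (x + control) @ W_zero + b_zero
    return frozen_layer(x) + branch

x = rng.normal(size=(4, 8))  # toy feature map
c = rng.normal(size=(4, 8))  # conditioning features, e.g. from a depth map

# At initialization, adding the control branch changes nothing:
assert np.allclose(controlnet_layer(x, c), frozen_layer(x))
```

As training updates `W_zero` and `b_zero` away from zero, the conditioning gradually steers the output, while the frozen weights never move.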

ControlNet maybe? https://github.com/lllyasviel/ControlNet

At huggingface: https://huggingface.co/spaces/hysts/ControlNet

Play around with the different models, you might get better results with some vs others.

I mentioned it in another response as well, but here's the original implementation repo for anyone interested:

https://github.com/lllyasviel/ControlNet

Stable Diffusion + ControlNet is fire! Nothing compares to it. ControlNet gives you tight control over the output. https://github.com/lllyasviel/ControlNet

Note that this article is from four months ago. Their Patreon and Kickstarter have been suspended, but they collected money on their website, and they will first focus on making yet another anime model.

The big news in Stable Diffusion land has been the release of ControlNet: https://github.com/lllyasviel/ControlNet. It hasn't gotten much traction on HN: https://news.ycombinator.com/item?id=34761780. It allows you to maintain the shape of existing images (or sketch new ones) and use Stable Diffusion to fill them out; my example: https://twitter.com/LechMazur/status/1626668677473918981.