What does HackerNews think of stable-diffusion-tensorflow?

Stable Diffusion in TensorFlow / Keras

Language: Python

On an Intel MacBook Pro (2020), CPU-only, the original implementation[1] using PyTorch utilized only one core. A TensorFlow implementation[2] with oneDNN support utilized most of the cores and ran at ~11 sec/iteration. Another implementation[3], based on OpenVINO, ran at ~6.0 sec/iteration.

[1] https://github.com/CompVis/stable-diffusion/

[2] https://github.com/divamgupta/stable-diffusion-tensorflow/

[3] https://github.com/bes-dev/stable_diffusion.openvino/
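The relative throughput implied by the per-iteration times reported above can be checked with quick arithmetic (a sketch using only the two numbers given in the comment; the single-core PyTorch run was not reported numerically):

```python
# Per-iteration times reported in the comment above (seconds per iteration).
tensorflow_onednn = 11.0  # TensorFlow + oneDNN, most cores
openvino = 6.0            # OpenVINO-based implementation

# OpenVINO's speedup over the TensorFlow + oneDNN run on the same machine.
speedup = tensorflow_onednn / openvino
print(f"OpenVINO is ~{speedup:.1f}x faster per iteration")
```

At 50 sampling steps, that difference works out to roughly nine minutes versus five minutes per image on that CPU.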

https://mobile.twitter.com/divamgupta/status/157123450432020...

https://github.com/divamgupta/stable-diffusion-tensorflow

Now they are working together. That may be “telling” to you but I’m not sure why that should cast a negative light on Keras, really.

Tried to get this running on my 2080 Ti (11 GB VRAM) but hit OOM issues. So while performance seems better, I can't actually verify that myself since it doesn't run. Some of the PyTorch forks work on as little as 6 GB of VRAM (or maybe even 4 GB?). Still, it's always good to have implementations that optimize for different factors; this one seems to trade memory usage for raw generation speed.
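For OOM failures like the one described, two standard TensorFlow knobs are sometimes enough to squeeze a model into limited VRAM: on-demand memory growth and mixed precision. This is a general sketch of those settings, not something the repo's README prescribes, and it assumes TensorFlow 2.x:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Allocate GPU memory on demand rather than reserving it all at startup,
# which can avoid spurious OOMs when sharing the GPU with other processes.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

# Run compute in float16 where numerically safe, roughly halving
# activation memory (variables stay float32 for stability).
mixed_precision.set_global_policy("mixed_float16")
```

Whether this is sufficient for an 11 GB card depends on the model and resolution; the PyTorch forks mentioned above use more aggressive tricks (attention slicing, offloading) to reach 4–6 GB.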

Edit: there seems to be a more "full" version of the same work available here, made by one of the authors of the submission article: https://github.com/divamgupta/stable-diffusion-tensorflow