This is _markedly_ faster than the PyTorch versions I've seen (nothing against the library, just categorizing the implementations). It would be nice to see this extended to include the little quality-of-life add-on models (eye fixes, upscaling, etc.), but I suspect the optimizations are transferable.

Either way, getting 3 images at 25 iterations each in under 10 seconds (a quick Colab test, which is where I've taken to comparing these things) is just ridiculously fast by comparison.
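
For anyone who wants to reproduce this, my timing test looks roughly like the sketch below, using the KerasCV StableDiffusion API. The prompt is arbitrary, and the warm-up call matters because the first run also pays for weight download and graph compilation:

```python
import time
import keras_cv

# Build the KerasCV Stable Diffusion model (downloads weights on first use).
model = keras_cv.models.StableDiffusion(img_height=512, img_width=512)

# Warm-up run: compilation and weight download happen here, so it's
# excluded from the timing below.
model.text_to_image("warm-up", batch_size=3, num_steps=25)

start = time.time()
images = model.text_to_image(
    "a photograph of an astronaut riding a horse",
    batch_size=3,   # 3 images in one batch, as described above
    num_steps=25,   # 25 diffusion iterations
)
print(f"{images.shape[0]} images in {time.time() - start:.1f}s")
```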

Which GPU did you test on in Colab? Are you comparing with one of the fp16 PyTorch versions? Their benchmarks show little improvement on a V100.
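
(For what it's worth, fp16 on the Keras side is a one-line global policy switch, so it's worth checking which precision each side ran at. A sketch, assuming TF 2.x and a model that respects the global policy:)

```python
# Enable fp16 compute in Keras/TF for an apples-to-apples comparison
# with the fp16 PyTorch builds. Set this *before* constructing the model.
from tensorflow import keras

keras.mixed_precision.set_global_policy("mixed_float16")
```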

PyTorch is now quite a bit more popular than Keras in research code (except when it comes from Google), so I don't know if these enhancements will get ported. This port was done by people working on Keras, which is kind of telling - there isn't a lot of outside interest.

Should we expect people not working on Keras to have the interest and ability to get it to work on Keras?

If these people have existing Keras code they want to integrate, or they are interested in developing it further in Keras, then it shouldn't require any insider knowledge to create a Keras version of a small but popular open-source project like this. I am quite sure we'd quickly get a PyTorch version made by outsiders if Stable Diffusion had originally been released in Keras/TF.

What is your definition of outsider?

We got a Keras version made by Divam Gupta very quickly after Stable Diffusion was released.

Is he not an outsider?

From what I can tell, this Keras version was just released (the date on the post is Sep. 25), and the first author listed is the creator of Keras. Is this incorrect? I am not familiar with Divam Gupta, and I would consider outsiders to be people not paid by Google.

https://mobile.twitter.com/divamgupta/status/157123450432020...

https://github.com/divamgupta/stable-diffusion-tensorflow

Now they are working together. That may be “telling” to you but I’m not sure why that should cast a negative light on Keras, really.