What does HackerNews think of stable-dreamfusion?

A PyTorch implementation of text-to-3D DreamFusion, powered by Stable Diffusion.

Language: Python

DreamFusion (text-to-3D) seems like it might be useful here eventually: https://dreamfusion3d.github.io/ (once successfully open-sourced; see https://github.com/ashawkey/stable-dreamfusion)

Rigging also looks like it could have a decent AI/DNN solution: https://arxiv.org/pdf/2005.00559.pdf

Magic3D looks like an improvement on DreamFusion [1], so it's sad to see that the code and models are not being made public.

What is public right now is Stable-DreamFusion [2]. It produces surprisingly good results on radially symmetrical organic objects like flowers and pineapples. You can run it on your own GPU or in a Colab notebook.
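For the "run it on your own GPU" route, a minimal sketch based on the repository's README looks like the following (the `--text`, `--workspace`, and `-O` flags are taken from the repo's documented usage; exact flags may change between versions, so check the README first):

```shell
# Clone the repo and install its Python dependencies
git clone https://github.com/ashawkey/stable-dreamfusion.git
cd stable-dreamfusion
pip install -r requirements.txt

# Train a NeRF from a text prompt (requires a CUDA GPU);
# -O enables the recommended speed/memory optimizations
python main.py --text "a pineapple" --workspace trial_pineapple -O

# Render the result after training finishes
python main.py --workspace trial_pineapple -O --test
```

Training takes on the order of an hour on a consumer GPU, which is why the hosted Colab notebook linked from the repo is the easier path for a first try.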

Or, if you just want to type a prompt into a website and see something in 3D, try our demo at https://holovolo.tv [3]

[1] https://dreamfusion3d.github.io/

[2] https://github.com/ashawkey/stable-dreamfusion

[3] https://holovolo.tv

What we really need is a MUD fed through Stable Diffusion -> NeRF that outputs a 3D map for you to play in.

https://github.com/ashawkey/stable-dreamfusion