What does HackerNews think of draw-a-ui?
Draw a mockup and generate HTML for it
"This works by just taking the current canvas SVG, converting it to a PNG, and sending that PNG to gpt-4-vision with instructions to return a single HTML file with Tailwind."
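The SVG-to-PNG step described above can be sketched as follows. This is a hypothetical illustration, not the project's actual code: in the browser, the canvas SVG string would be wrapped in a data URL, rasterized into an image, drawn onto a canvas, and exported as a PNG data URL. Only the pure URL-encoding helper is shown as runnable code; the DOM-dependent steps appear as comments.

```typescript
// Wrap an SVG string in a data URL so it can be loaded into an Image element.
// (Illustrative helper; the repo may do this differently.)
function svgToDataUrl(svg: string): string {
  return "data:image/svg+xml;charset=utf-8," + encodeURIComponent(svg);
}

// Browser-only continuation (sketch):
//   const img = new Image();
//   img.src = svgToDataUrl(svg);
//   await img.decode();
//   const canvas = document.createElement("canvas");
//   canvas.width = img.width;
//   canvas.height = img.height;
//   canvas.getContext("2d")!.drawImage(img, 0, 0);
//   const png = canvas.toDataURL("image/png"); // this PNG is what gets sent to gpt-4-vision
```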
The point of this demo is to experiment with new ways of interacting with an LLM. I'm very tired of typing into text boxes when a quick scribble or "back-of-the-envelope" drawing would communicate my thoughts better.
It worked a lot better than I expected! If you give it a try, let me know how it goes for you! And please feel free to check out the source code: https://github.com/tldraw/draw-a-ui/
Happy to answer any questions about tldraw/this project. It's definitely not putting anyone out of work, but it's a blast to play with. Here's a more complicated example of what you can get it to do: https://twitter.com/tldraw/status/1725083976392437894
I've added a note next to the input with more info here, but basically: the vision API is so new that it's immediately rate-limited on any site like this, and because OpenAI doesn't offer a way for users to authorize a site to use their own API keys (they should!), this was the best we could do. We don't store the API key or send it to our own servers; it goes directly to OpenAI via a fetch request.
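A sketch of what that client-side request might look like. This is a hypothetical illustration, not the repo's actual code (the function name `buildVisionRequest` and the prompt text are assumptions): the user's key goes straight into the Authorization header of a fetch to api.openai.com, using the chat-completions message format with an `image_url` content part.

```typescript
interface VisionRequest {
  model: string;
  max_tokens: number;
  messages: Array<{
    role: string;
    content: Array<
      | { type: "text"; text: string }
      | { type: "image_url"; image_url: { url: string } }
    >;
  }>;
}

// Build a chat-completions body asking the vision model to turn a PNG
// data URL of the drawing into a single HTML file using Tailwind.
function buildVisionRequest(pngDataUrl: string): VisionRequest {
  return {
    model: "gpt-4-vision-preview",
    max_tokens: 4096,
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Return a single HTML file with Tailwind that implements this mockup.",
          },
          { type: "image_url", image_url: { url: pngDataUrl } },
        ],
      },
    ],
  };
}

// Browser-only send (sketch) -- the key never touches any other server:
//   await fetch("https://api.openai.com/v1/chat/completions", {
//     method: "POST",
//     headers: {
//       "Content-Type": "application/json",
//       Authorization: `Bearer ${userApiKey}`, // user-supplied key
//     },
//     body: JSON.stringify(buildVisionRequest(png)),
//   });
```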
Putting an API key into a random text input is obviously a bad idea, and I hope this doesn't normalize that. However, you can read the source code (https://github.com/tldraw/draw-a-ui) and come to your own conclusions, or just run it locally instead.