What does Hacker News think of InvokeAI?
This version of Stable Diffusion features a slick web GUI, an interactive command-line script that combines txt2img and img2img functionality in a "dream bot" style interface, and many other features and enhancements. For more info, see the link below.
https://github.com/invoke-ai/InvokeAI
Edit: Required specs, from the documentation:
You will need one of the following:
An NVIDIA-based graphics card with 4 GB or more of VRAM. 6-8 GB of VRAM is highly recommended for rendering with the Stable Diffusion XL models.
An Apple computer with an M1 chip.
An AMD-based graphics card with 4 GB or more of VRAM (Linux only); 6-8 GB for XL rendering.
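The requirements above boil down to a simple threshold check. As a sketch, here is a hypothetical helper (not part of InvokeAI) that classifies a card's VRAM against the documented minimums:

```python
def meets_vram_requirement(vram_gb: float, sdxl: bool = False) -> str:
    """Classify a GPU's VRAM against the documented minimums:
    4 GB minimum overall, 6-8 GB recommended for SDXL rendering."""
    if vram_gb < 4:
        return "insufficient"
    if sdxl and vram_gb < 6:
        return "below recommended for SDXL"
    return "ok"

print(meets_vram_requirement(4))             # -> ok
print(meets_vram_requirement(4, sdxl=True))  # -> below recommended for SDXL
print(meets_vram_requirement(8, sdxl=True))  # -> ok
```

Note the Apple M1 path is separate: unified memory means there is no discrete VRAM figure to check.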
https://huggingface.co/ (finding models and downloading them)
https://github.com/ggerganov/llama.cpp (llama)
https://github.com/cmp-nct/ggllm.cpp (falcon)
For interactive work (art/chat/research/playing around), things like:
https://github.com/oobabooga/text-generation-webui/blob/main... (llama) (Also, they just added a decent chat server built into the llama.cpp project itself)
https://github.com/invoke-ai/InvokeAI (stable-diffusion)
Plus a bunch of hacked together scripts.
Some example models (I'm linking to quantized versions that someone else has made, but the above repos include the tooling to create them from the published fp16 models):
https://huggingface.co/TheBloke/llama-65B-GGML
https://huggingface.co/TheBloke/falcon-40b-instruct-GPTQ
https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
etc. Hugging Face hosts quite a number, although some base models require filling out a form before you can download them for tuning/training.
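To see why the quantized versions matter, here is a rough back-of-envelope sketch, assuming GGML's q4_0 layout of about 4.5 bits per weight (4-bit values plus a per-block fp16 scale). The function is made up for illustration, and the figures ignore the KV cache and runtime overhead:

```python
def weight_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate size of a model's weights in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# A 65B-parameter model, as published (fp16) vs quantized (q4_0):
fp16 = weight_size_gb(65e9, 16)   # roughly 121 GiB
q4_0 = weight_size_gb(65e9, 4.5)  # roughly 34 GiB
print(f"65B fp16: ~{fp16:.0f} GiB")
print(f"65B q4_0: ~{q4_0:.0f} GiB")
```

That is the difference between needing a multi-GPU server and fitting on a single high-memory workstation.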
Our Repo: https://github.com/invoke-ai/InvokeAI
You will need one of the following:
An NVIDIA-based graphics card with 4 GB or more of VRAM.
An Apple computer with an M1 chip.
Installation Instructions: https://invoke-ai.github.io/InvokeAI/installation/
Download the model from Hugging Face, add it through our Model Manager UI, and then start prompting.
Discord: https://discord.gg/invokeai-the-stable-diffusion-toolkit-102...
Also, a quick plug: we're actively looking for people who want to contribute to the project! Hope you enjoy using the tool.
InvokeAI 2.2 is now available to everyone. It's an open-source toolkit for Stable Diffusion. This update brings exciting features like UI Outpainting, Embedding Management, and more. See the highlighted updates below, or the full release notes for everything included in the release.
- The Unified Canvas: The Web UI now features a full-featured infinite canvas capable of outpainting, inpainting, img2img, and txt2img, so you can streamline and extend your creative workflow. The canvas was rewritten to greatly improve performance and to support features like Paint Brushing, Unlimited History, Real-Time Progress displays, and more.
- Embedding Management: Easily pull from the top embeddings on Hugging Face directly within Invoke, using the embed token to generate the exact style you want. With the ability to use multiple embeds simultaneously, you can easily import and explore different styles within the same session!
- Viewer: The Web UI now also features a Viewer that lets you inspect your invocations in greater detail. No more opening the images in your external file explorer, even with large upscaled images!
- 1 Click Installer Launch: With our official 1-click installation launch, using our tool has never been easier. Our OS-specific bundles (Mac M1/M2, Windows, and Linux) will get everything set up for you. Our source installer is available now, and our binary installer will be available in the next day or two. Click and get going. It's now much simpler to get started.
- DPM++ Sampler Support (Experimental): DPM++ support has been added! Please note that these are experimental, and are subject to change in the future as we continue to enhance our backend system.
—
Up Next
We are continually exploring a large set of ideas to make InvokeAI a better application with every release. Work is starting on a modular backend architecture that will let us support queuing and atomic execution, add new features more easily, and more. We'll also officially support SD 2.0 soon.
If you are a developer who is currently using InvokeAI as your backend, we welcome you to join in on the conversation and provide feedback so we can build the best system possible.
—
Whether you're a dev looking to build on or contribute to the project, a professional looking for pro-grade tools to incorporate into your workflow, or just looking for a great open-source SD experience, we're looking forward to you joining the community.
You can get the latest version on GitHub! https://github.com/invoke-ai/InvokeAI
Great support for M1, basically since the beginning. The install is painless.
Release video for InvokeAI 2.2: https://www.youtube.com/watch?v=hIYBfDtKaus
There are multiple Stable Diffusion installs you can set up on your own [1] and run whatever wild prompts you want.
It is very easy to get started; an average technical person with an OK connection could probably get it up and running in 15 minutes with something like https://github.com/invoke-ai/InvokeAI
This is an example of the original file: https://github.com/magnusviri/stable-diffusion/blob/79ac0f34...
Which seems to have been renamed, and cleaned up a bit here: https://github.com/magnusviri/stable-diffusion/blob/main/doc...
However, per the note on the magnusviri repo, the following repo should be used for a stable version of this SD toolkit: https://github.com/invoke-ai/InvokeAI
with instructions here https://github.com/invoke-ai/InvokeAI/blob/main/docs/install...