I'm curious whether this will give better results than LLaMA 7B. LLaMA 7B felt like a toy that, while cool to be able to run locally, did not feel useful in any way when contrasted with the state of GPT. Here's hoping for better, and/or the release of larger-parameter models with low performance requirements soon :)
EDIT: my first question times out when run online; seems like Hugging Face is getting hugged to death.
Even if it doesn't initially, the fact that it's being released so permissively is massive. Stable Diffusion was made far more powerful by being hackable at all levels, and I can't imagine we won't see the same here.
I imagine things like ControlNet-style constraints that restrict output to parsable types, LoRA-style adaptations that allow mixable "attitudes", that sort of thing. (A rough sketch of the first idea is below.)
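For the curious, here's one way "restricting output to parsable types" can look with today's tooling: masking logits during generation so the model can only emit tokens from an allowed set. This is a minimal sketch, not any specific project's approach; the model name and the digit filter are just placeholders.

```python
# Illustrative sketch: force numeric-only output by masking logits
# so only digit tokens (plus EOS) can be sampled.
import torch
from transformers import (
    AutoModelForCausalLM, AutoTokenizer,
    LogitsProcessor, LogitsProcessorList,
)

class AllowedTokensProcessor(LogitsProcessor):
    """Push every token outside the allowed set to -inf before sampling."""
    def __init__(self, allowed_token_ids):
        self.allowed = torch.tensor(sorted(set(allowed_token_ids)))

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed] = 0.0
        return scores + mask

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Collect every vocab entry that decodes to a digit, and allow EOS too.
digit_ids = [i for i in range(len(tokenizer))
             if tokenizer.decode([i]).strip().isdigit()]
digit_ids.append(tokenizer.eos_token_id)

inputs = tokenizer("The answer is:", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=5,
    logits_processor=LogitsProcessorList([AllowedTokensProcessor(digit_ids)]),
)
print(tokenizer.decode(out[0]))
```

The same trick generalizes: swap the digit filter for a grammar or JSON-schema walker and you get structured, parser-safe output from an otherwise unconstrained model.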
Very different underlying architecture from diffusion models, ofc. But the dynamic of open source is the same: a million monkeys with a million xterms, and so forth.
I'm really hoping for the ability to load in different sets of trained material as embeddings/textual inversions, like in Stable Diffusion. Imagine scanning in some of your favorite philosophy and design books and throwing them in with a small weighting, as a little flavor for your answers. The crossovers between LLMs and Stable Diffusion-type models (like LoRAs) are such a fascinating space to explore.
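To make the "small weighting" idea concrete, here's a minimal sketch of the LoRA mechanism itself: a frozen base weight plus a tiny low-rank delta, blended in with an adjustable strength, much like applying a textual inversion at low weight in Stable Diffusion. All names and the weighting knob are illustrative, not an existing library's API.

```python
# Minimal LoRA sketch: base(x) + weighting * scale * (B @ A) x,
# where A and B are small low-rank matrices and the base stays frozen.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # never touch the base model
        # The low-rank update is the product of two small matrices.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x, weighting: float = 1.0):
        # weighting < 1.0 mixes the adapter in lightly --
        # the "little flavor" knob.
        delta = x @ self.lora_a.T @ self.lora_b.T
        return self.base(x) + weighting * self.scale * delta

layer = LoRALinear(nn.Linear(512, 512), rank=8)
x = torch.randn(1, 512)
y = layer(x, weighting=0.3)  # apply the adapter at 30% strength
```

Because the adapter is just an additive delta, several of them could in principle be summed with different weights, which is exactly what makes the mixable-attitudes idea plausible.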