What does HackerNews think of Open-Assistant?
OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.
- Open-source fine-tuned assistants like LAION Open-Assistant [1]
- inference optimizations like VoltaML, FlexGen, and distributed inference [2]
- training optimizations like Hivemind [2]
1 https://github.com/LAION-AI/Open-Assistant
2 https://github.com/underlines/awesome-marketing-datascience/...
There's a lot of money to be made in hooking these "brains" up to apps. I hope people get on it, since it will be quite convenient.
See:
Act-1: https://news.ycombinator.com/item?id=32842860
Open-Assistant: https://github.com/LAION-AI/Open-Assistant
Toolformer: https://arxiv.org/abs/2302.04761
Understanding HTML with LLMs: https://arxiv.org/abs/2210.03945
I agree, there is a very serious risk of newer AI advances being closed by default, and I think it's not being taken seriously enough.
Even "open" players like Hugging Face are still for-profit. There really need to be Linux- or FSF-like players that are motivated by freedom.
[0] https://github.com/LAION-AI/Open-Assistant
[1] https://github.com/FMInference/FlexGen
There is also EleutherAI (https://www.eleuther.ai/about/) with GPT-NeoX (https://github.com/EleutherAI/gpt-neox).
Technically I think we're getting close to a good offline system. Home Assistant solves the practical aspect of controlling systems and triggered routines - it really works well, at least on par with vendor software. Speech recognition models are usable and getting better. The interesting "glue" is parsing the commands into something reasonable and I think open source LLMs are likely to be the answer. See Open Assistant [0] for example. It doesn't solve the knowledge base issue (e.g. people seem to use Alexa either as a very fancy egg timer, or for querying facts), but it would probably allow for very natural interaction with the device and the model could just translate a query into a command that triggers an action.
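The "glue" pattern described above — an LLM translating a natural-language query into a structured command that triggers an action — can be sketched roughly like this. Everything here is illustrative: the JSON action schema, the `dispatch` helper, and the rule-based `llm_parse` stand-in for a real local LLM are assumptions, not Open Assistant's or Home Assistant's actual API.

```python
import json

def llm_parse(utterance: str) -> str:
    """Stand-in for a local LLM prompted to emit a JSON action.
    A real system would send the utterance plus an action schema
    to an open-source model and parse its completion."""
    text = utterance.lower()
    if "light" in text and ("on" in text or "off" in text):
        state = "on" if "on" in text else "off"
        room = "living_room" if "living room" in text else "unknown"
        return json.dumps({"service": "light.turn_" + state,
                           "entity_id": f"light.{room}"})
    return json.dumps({"service": "none"})

def dispatch(action_json: str) -> str:
    """Hypothetical glue that would forward the parsed action to a
    home-automation API (e.g. an HTTP call to the controller)."""
    action = json.loads(action_json)
    if action["service"] == "none":
        return "Sorry, I didn't understand that."
    return f"Calling {action['service']} on {action['entity_id']}"

print(dispatch(llm_parse("Turn on the living room lights")))
# prints: Calling light.turn_on on light.living_room
```

The point is the separation of concerns: the model only has to map free-form speech into a constrained action format, and the existing automation stack does the rest.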
My impression is that Amazon's approach (I have no knowledge of what goes on inside) has been to build fairly complicated subsystems to handle very specific queries. Each "Skill" has to be separately developed and if you ask something outside a fairly rigid grammar, the device has no idea.
________
Related video by one of the contributors on how to help:
- https://youtube.com/watch?v=64Izfm24FKA
Source Code:
- https://github.com/LAION-AI/Open-Assistant
Roadmap:
- https://docs.google.com/presentation/d/1n7IrAOVOqwdYgiYrXc8S...
How you can help / contribute:
- https://github.com/LAION-AI/Open-Assistant#how-can-you-help
As others have pointed out, running a truly large language model like GPT-3 isn't (yet) feasible on your own hardware - you need a LOT of powerful GPUs racked up in order to run inference.
https://github.com/bigscience-workshop/petals is a really interesting project here: it works a bit like bittorrent, allowing you to join a larger network of people who share time on their GPUs, enabling execution of models that can't fit on a single member's hardware.
cool stuff, thanks
So I just joined them; so far it's quite an active community working on pushing out the initial 0.1 version.