What does HackerNews think of Open-Assistant?

OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

Language: Python

#1 in R
#15 in Python
LLaMA hasn't been fine-tuned with RLHF, so it requires additional prompting. Check out the open-assistant [0] project for an open-source ChatGPT equivalent (WIP).

[0]: https://github.com/LAION-AI/Open-Assistant
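
To illustrate what "additional prompting" means for a base model that hasn't been RLHF-tuned, here is a minimal sketch using the Hugging Face transformers API; the checkpoint name and instruction template are assumptions for illustration, not anything the project ships.

```python
# Minimal sketch of the "additional prompting" a base, non-RLHF model needs:
# the instruction/response scaffolding has to be written into the prompt by
# hand. The checkpoint name and template below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/llama-style-base"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = (
    "Below is an instruction. Write a response that completes the request.\n\n"
    "### Instruction:\nExplain RLHF in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```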

There is a project along the lines you mention, but it's still under development:

https://github.com/LAION-AI/Open-Assistant

The folks making Open Assistant [1] (an open-source ChatGPT clone) gathered enough data to start initial training, so hopefully there will be something to play with soon.

[1] https://github.com/LAION-AI/Open-Assistant

I was looking for open, self-hosted or crowd-hosted alternatives to fine-tuned LLMs like ChatGPT and found LAION Open Assistant. I then found resources to further optimize inference as well as training (a minimal self-hosting sketch follows the links below):

- Open-source fine-tuned assistants like LAION Open-Assistant [1]

- Inference optimizations like VoltaML, FlexGen, and distributed inference [2]

- Training optimizations like Hivemind [2]

1 https://github.com/LAION-AI/Open-Assistant

2 https://github.com/underlines/awesome-marketing-datascience/...
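
As a concrete idea of what self-hosting such a fine-tuned assistant can look like, here is a minimal sketch using transformers with 8-bit quantization as one example of the inference optimizations mentioned above; the checkpoint name and chat markup are placeholder assumptions.

```python
# Minimal self-hosting sketch: load a fine-tuned assistant checkpoint in
# 8-bit to fit a consumer GPU (requires bitsandbytes + accelerate).
# The model name and chat markup below are placeholder assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-org/open-assistant-sft"  # hypothetical checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,   # quantized weights roughly halve memory vs fp16
    device_map="auto",   # let accelerate place layers automatically
)

prompt = "<|prompter|>What is LAION?<|assistant|>"  # assumed chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```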

Hooking ChatGPT (and similar architectures) into applications is going to be the next big thing. It just takes time to hook up various systems. Eventually a vision module will be integrated and it will probably be able to "figure out" most systems.

There's a lot of money to be made in hooking these "brains" up to apps. I hope people get on it, since it will be quite convenient. (A sketch of the basic pattern follows the links below.)

See:

Act-1: https://news.ycombinator.com/item?id=32842860

Open-Assistant: https://github.com/LAION-AI/Open-Assistant

Toolformer: https://arxiv.org/abs/2302.04761

Understanding HTML with LLMs: https://arxiv.org/abs/2210.03945
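
The pattern these projects share can be reduced to a small loop: the model emits a structured action, the application executes it, and the observation is fed back into the context. A minimal sketch of that loop follows; the JSON action format, tool names, and stubbed model call are made up for illustration and don't reflect any particular project's API.

```python
# Minimal sketch of a tool-use loop in the spirit of Act-1/Toolformer:
# the model proposes a structured action, the app executes it, and the
# observation is appended to the context. All names here are illustrative.
import json

def call_model(context: str) -> str:
    """Placeholder for an LLM call; assumed to return a JSON action."""
    return '{"tool": "calculator", "input": "2 + 2"}'

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_step(context: str) -> str:
    action = json.loads(call_model(context))
    result = TOOLS[action["tool"]](action["input"])
    # Feed the observation back so the model can use it in its next turn.
    return context + f"\nObservation: {result}"

print(run_step("User: what is 2 + 2?"))
```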

LAION is working on an open, ChatGPT-like model [0], and there is a project discussed here recently about running LLMs on single GPUs [1].

I agree, there is a very serious risk of newer AI advances being closed by default, and I think it's not being taken seriously enough.

Even "open" players are still for-profit, say Huggingface. There really needs to be Linux or FSF like players that are motivated by freedom.

[0] https://github.com/LAION-AI/Open-Assistant [1] https://github.com/FMInference/FlexGen
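
FlexGen has its own engine and flags, but the single-GPU idea it relies on (keeping only part of the model on the GPU and offloading the rest to CPU RAM or disk) can be sketched with the generic transformers/accelerate offloading path. This is an analogy, not FlexGen's actual API, and the model name is just an example checkpoint.

```python
# Sketch of the single-GPU idea behind projects like FlexGen [1]: split the
# model across GPU, CPU RAM, and disk so a checkpoint larger than VRAM can
# still run. Uses the generic transformers/accelerate path, not FlexGen itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-6.7b"  # example model larger than many consumer GPUs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",           # place layers across GPU and CPU automatically
    offload_folder="offload",    # spill the remainder to disk if needed
)

inputs = tokenizer("Open models let you", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```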

I'm stupid in this area and have only a very broad understanding of "AI". But isn't this already happening in this project? https://github.com/LAION-AI/Open-Assistant

Open Assistant (started by some of the people that started Stable Diffusion I think?) is very early, but looks very promising.

https://open-assistant.io/

https://github.com/LAION-AI/Open-Assistant

I think this is a sad way to go. Hardware is super expensive to develop and the key IP here is in the software. There's probably some fancy microphone tech, but ultimately you could (should?) run Mycroft on a Raspberry Pi with an edge accelerator device.

Technically, I think we're getting close to a good offline system. Home Assistant solves the practical aspect of controlling systems and triggering routines - it really works well, at least on par with vendor software. Speech recognition models are usable and getting better. The interesting "glue" is parsing the commands into something reasonable, and I think open-source LLMs are likely to be the answer; see Open Assistant [0] for example (a sketch of this glue step follows the link below). It doesn't solve the knowledge-base issue (e.g. people seem to use Alexa either as a very fancy egg timer or for querying facts), but it would probably allow for very natural interaction with the device, and the model could just translate a query into a command that triggers an action.

My impression is that Amazon's approach (I have no knowledge of what goes on inside) has been to build fairly complicated subsystems to handle very specific queries. Each "Skill" has to be separately developed and if you ask something outside a fairly rigid grammar, the device has no idea.

[0] https://github.com/LAION-AI/Open-Assistant
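
A hedged sketch of that "glue" step: the LLM is asked to emit a small JSON intent, and a dispatcher maps it onto a device action. The schema, entity names, and the stubbed LLM call are all made up; a real setup would go through Home Assistant's API rather than printing.

```python
# Sketch of using an LLM as the "glue" between speech and home automation:
# the model is asked to emit a small JSON intent, which the controller maps
# onto a concrete action. Schema, entity names, and the stub are made up.
import json

PROMPT_TEMPLATE = (
    "Convert the request into JSON with keys 'action' and 'target'.\n"
    "Request: {request}\nJSON:"
)

def call_llm(prompt: str) -> str:
    """Placeholder for a local open-source LLM call."""
    return '{"action": "turn_on", "target": "light.kitchen"}'

def handle_request(request: str) -> None:
    intent = json.loads(call_llm(PROMPT_TEMPLATE.format(request=request)))
    # In a real setup this would call Home Assistant's REST/WebSocket API;
    # here we only show the dispatch step.
    print(f"calling service {intent['action']} on {intent['target']}")

handle_request("could you switch the kitchen light on?")
```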

They made T5 and released the weights. While T5 "only" goes up to 11B params, it's an efficient model for a wide variety of tasks and the base for a lot of other models/projects. Of note, Imagen [1] and Open-Assistant [2] use T5 and Flan-T5.

[1] https://imagen.research.google/

[2] https://github.com/LAION-AI/Open-Assistant
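
For a sense of how T5/Flan-T5 gets reused downstream, here is a minimal sketch with the transformers API; google/flan-t5-base is one of the published checkpoint sizes, and the task prompt is just an example.

```python
# Minimal sketch of running Flan-T5 on an instruction-style task.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "google/flan-t5-base"  # published checkpoint; larger sizes exist
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer(
    "Translate to German: The assistant retrieves information dynamically.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```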

TLDR: OpenAssistant is a chat-based assistant that understands tasks, can interact with third-party systems, and retrieve information dynamically to do so.

________

Related video by one of the contributors on how to help:

- https://youtube.com/watch?v=64Izfm24FKA

Source Code:

- https://github.com/LAION-AI/Open-Assistant

Roadmap:

- https://docs.google.com/presentation/d/1n7IrAOVOqwdYgiYrXc8S...

How you can help / contribute:

- https://github.com/LAION-AI/Open-Assistant#how-can-you-help

Open-Assistant - https://github.com/LAION-AI/Open-Assistant - is an interesting open source project I found this morning.

As others have pointed out, running a truly large language model like GPT-3 isn't (yet) feasible on your own hardware - you need a LOT of powerful GPUs racked up in order to run inference.

https://github.com/bigscience-workshop/petals is a really interesting project here: it works a bit like BitTorrent, allowing you to join a larger network of people who share time on their GPUs, enabling execution of models that can't fit on a single member's hardware.
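
For flavor, a minimal sketch of the Petals client roughly following the project's early README; the class name (DistributedBloomForCausalLM) and checkpoint (bigscience/bloom-petals) are recalled from those docs and may have changed since, so treat this as an assumption about the interface rather than current usage.

```python
# Sketch of distributed inference with Petals: the model's blocks are served
# by volunteers across the network, and generate() runs through them remotely.
# Class and checkpoint names follow the project's early README and may differ now.
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

model_name = "bigscience/bloom-petals"
tokenizer = BloomTokenizerFast.from_pretrained(model_name)
model = DistributedBloomForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A large model on shared GPUs:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0]))
```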

There's LAION working on an open-source [1] version of ChatGPT.

[1] https://github.com/LAION-AI/Open-Assistant

When ChatGPT first came out, my first thought was to replicate it myself. But I have too many missing skills and lack the time for the backend, frontend, and deployment. Then I found that LAION had started an initiative for Open-Assistant: https://github.com/LAION-AI/Open-Assistant

So I just joined them; so far it's quite an active community working on pushing out the initial 0.1 version.