What does HackerNews think of autogen?

Enable Next-Gen Large Language Model Applications. Join our Discord: https://discord.gg/pAbnFJrkgZ

Language: Jupyter Notebook

When I saw this, I figured it was a clear step back from simply using plugins in the main ChatGPT view. It's basically plugins, but with extra prompting, and you can only use one at a time.

But if you look at projects like Autogen ( https://github.com/microsoft/autogen ), you see one master agent coordinating other agents with narrower scopes. But you have to write the prompts for those agents yourself.
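For a concrete sense of what that looks like, here is a minimal sketch of that pattern using pyautogen's AssistantAgent, UserProxyAgent, GroupChat, and GroupChatManager classes; the model name, API key, agent prompts, and task are placeholders, and exact parameters may differ between versions:

    import autogen

    # Placeholder model and API key -- substitute your own.
    llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}

    # Each worker agent gets a narrow, hand-written system prompt.
    planner = autogen.AssistantAgent(
        name="planner",
        system_message="You break the user's request into small, concrete steps.",
        llm_config=llm_config,
    )
    coder = autogen.AssistantAgent(
        name="coder",
        system_message="You write Python code for exactly one step at a time.",
        llm_config=llm_config,
    )

    # The user proxy executes any code the other agents produce.
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",
        code_execution_config={"work_dir": "scratch", "use_docker": False},
    )

    # The group chat manager plays the "master agent" role and decides who speaks next.
    groupchat = autogen.GroupChat(
        agents=[user_proxy, planner, coder], messages=[], max_round=12
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    user_proxy.initiate_chat(manager, message="Fetch the latest CPI data and plot it.")

The point being: the coordination machinery is there, but the system prompts that give each agent its narrow role are still yours to write.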

This GPTs setup will crowd-source a ton of data on creating agents that each serve a single task. OpenAI can then train a model that's very good at creating other agents. Altman nods to this immediately after the GPTs feature is shown, repeating that OpenAI does not train on API usage.

Prediction: next year's dev day, which Altman hints will make today's conference look "quaint" by comparison, will basically be an app built around the autogen concept, but one you can spin up with very simple prompts to complete very complex tasks. Probably on top of a mix of GPT-4 and GPT-5.

Preprint from August: https://arxiv.org/abs/2308.08155

Docs: https://microsoft.github.io/autogen/docs/Research/

Repo: https://github.com/microsoft/autogen

Installs via pip: pip install pyautogen
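After installing, a two-agent hello world looks roughly like this; again a sketch with a placeholder model and key, and the config details may vary by version:

    import autogen

    llm_config = {"config_list": [{"model": "gpt-4", "api_key": "sk-..."}]}  # placeholder key

    assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
    user_proxy = autogen.UserProxyAgent(
        name="user_proxy",
        human_input_mode="NEVER",  # fully automated; use "ALWAYS" to approve each step
        code_execution_config={"work_dir": "scratch", "use_docker": False},
    )

    # The assistant writes code, the user proxy runs it and feeds results back.
    user_proxy.initiate_chat(assistant, message="Plot NVDA's closing price for the last month.")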

Looks like another agent framework; I've tried several and been disappointed… but I'll still put this one on my to-do list, because you never know. Open Interpreter proved to be a pleasant surprise recently.