What does HackerNews think of Auto-GPT?

An experimental open-source attempt to make GPT-4 fully autonomous.

Language: Python

#102 in Python
> but LLMs can’t do anything, really

Do you think https://github.com/Significant-Gravitas/Auto-GPT et al will become more performant as models improve?

For something with more capabilities, check out https://github.com/Significant-Gravitas/Auto-GPT

As a test, I recently used it to find a source of data and scrape it, and it works. It took only 30 minutes or so.

I could have done it myself in under 5 minutes, but spending 20 seconds writing the prompt is still faster than spending those 5 minutes doing it by hand.
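For context, the task being delegated here is a few lines of ordinary scraping code. Below is a minimal sketch of what the agent ends up writing and running, assuming a hypothetical page with an HTML table; the commenter doesn't name the data source, so the URL and the selector are placeholders of mine.

```python
import csv

import requests
from bs4 import BeautifulSoup

# Hypothetical target: the commenter doesn't say which site they scraped,
# so this URL and the "table tr" selector below are placeholders.
URL = "https://example.com/data-table"

def scrape_to_csv(url: str, out_path: str = "data.csv") -> None:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.select("table tr"):  # every row of the (assumed) table
        cells = [td.get_text(strip=True) for td in tr.find_all(["th", "td"])]
        if cells:
            rows.append(cells)
    with open(out_path, "w", newline="") as f:
        csv.writer(f).writerows(rows)

if __name__ == "__main__":
    scrape_to_csv(URL)
```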

>They're not autonomous. They won't be autonomous.

https://github.com/Significant-Gravitas/Auto-GPT

>This program, driven by GPT-4, chains together LLM "thoughts", to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI.

>As an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
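The loop the README describes is simple in outline: the model is given a goal and a set of commands, each reply contains a "thought" plus a command to run, and the command's output is fed back in as the next prompt. Here is a minimal sketch of that shape; `call_llm`, `run_command`, and the JSON reply format are placeholders of mine, not the project's actual prompt protocol or command set.

```python
import json

# Stand-in for whatever chat-completion API you use (e.g. GPT-4):
# takes a message history, returns the model's reply as text.
def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("wire this up to your LLM API of choice")

# A tiny command set. The real project exposes far more (web search,
# file I/O, code execution, ...) and sandboxes them more carefully.
def run_command(name: str, args: dict) -> str:
    if name == "read_file":
        with open(args["path"]) as f:
            return f.read()
    if name == "write_file":
        with open(args["path"], "w") as f:
            f.write(args["text"])
        return "ok"
    return f"unknown command: {name}"

SYSTEM = (
    'You are an autonomous agent working toward the GOAL. Reply ONLY with '
    'JSON: {"thought": str, "command": str, "args": object}. '
    'Use the command "finish" with args {"result": str} when done.'
)

def agent_loop(goal: str, max_steps: int = 10) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": f"GOAL: {goal}"}]
    for _ in range(max_steps):
        reply = call_llm(messages)                       # one "thought" step
        messages.append({"role": "assistant", "content": reply})
        step = json.loads(reply)
        if step["command"] == "finish":
            return step["args"]["result"]
        result = run_command(step["command"], step.get("args", {}))
        # Feed the command's output back so the next thought can build on it.
        messages.append({"role": "user", "content": f"RESULT: {result}"})
    return "step budget exhausted"
```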

>Who prompted the future LLM, gave it access to a root shell and an A100 GPU, allowed it to copy over some Python script that runs in a loop, let it download 2 terabytes of corpus, and let it train a new version of itself for weeks, if not months, to improve itself?

Presumably someone running a misconfigured future version of Auto-GPT?

https://github.com/Significant-Gravitas/Auto-GPT

I don't think that doing the math by hand to simulate a machine intelligence makes it not intelligent, any more than doing a simulation of all the electrochemical signals in a human brain by hand would make a human not intelligent. Aside from the infeasible amount of time it would take, it's the same thing.

As for autonomy, LLMs don't have autonomy by themselves. But they can be pretty easily combined with other systems, connected to the outside world, in a way that seems pretty darned autonomous to me. (https://github.com/Significant-Gravitas/Auto-GPT). And that's basically a duct-tape-and-string version. Given how new these LLMs are, it's likely we're barely scratching the surface.
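The "duct-tape-and-string" part is fairly literal: connecting the LLM to the outside world mostly means a dictionary mapping command names to ordinary Python functions, which an agent loop then dispatches to. A minimal sketch with hypothetical command names (browse, shell); the project's real command registry is larger and more carefully sandboxed.

```python
import subprocess

import requests

# Each "tool" is just a plain function; the model picks one by name and
# supplies the arguments in its reply.
def browse(url: str) -> str:
    """Fetch a page so the model can read something from the outside world."""
    return requests.get(url, timeout=30).text[:4000]  # truncate to fit context

def shell(cmd: str) -> str:
    """Run a shell command and hand the output back to the model."""
    proc = subprocess.run(cmd, shell=True, capture_output=True,
                          text=True, timeout=60)
    return proc.stdout + proc.stderr

TOOLS = {"browse": browse, "shell": shell}

def dispatch(command: str, args: dict) -> str:
    if command not in TOOLS:
        return f"unknown command: {command}"
    return TOOLS[command](**args)
```

Handing the model a shell tool like this is exactly the "command-line access" the comment below jokes about.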

So...AutoGPT? Now with command-line access! Have fun :)

https://github.com/Significant-Gravitas/Auto-GPT/