What does HackerNews think of gpt4all?

gpt4all: an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue

Language: C++

I run one. I built an iMessage-like frontend for it using plain JS and a Python websocket backend. I use it mostly out of curiosity and for playing with different prompts. I only have 16GB of RAM to dedicate to it, so I use a 13B-parameter model, which is enough for fun and chitchat, but I don't find it good enough to replace ChatGPT. In the future I'd like to upgrade to Llama 2 and observe the difference.
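For reference, the backend in a setup like that can be little more than a prompt-in, reply-out handler behind a websocket. This is a minimal sketch; the `generate_reply` stub and the JSON message shape are assumptions standing in for the actual model call and frontend protocol:

```python
import asyncio
import json

def generate_reply(prompt: str) -> str:
    # Stand-in for the real local-model call (e.g. a gpt4all
    # binding); here it just echoes so the sketch is runnable.
    return f"echo: {prompt}"

async def handle_message(raw: str) -> str:
    # Assumed protocol: frontend sends {"prompt": ...},
    # backend replies with {"reply": ...}.
    msg = json.loads(raw)
    reply = generate_reply(msg["prompt"])
    return json.dumps({"reply": reply})

if __name__ == "__main__":
    print(asyncio.run(handle_message('{"prompt": "hi"}')))
```

In a real deployment, `handle_message` would be registered as the per-connection handler of a websocket server library, with the model loaded once at startup rather than per message.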

https://github.com/nomic-ai/gpt4all

Agreed.

Gpt4all[1] offers a similar 'simple setup' via downloadable application executables, but it's arguably more like open core, because the gpt4all makers (Nomic?) want to sell you the vector-database add-on stuff on top.

[1] https://github.com/nomic-ai/gpt4all

I like this one because it feels more private / is not being pushed by a company that can do a rug pull. This can still do a rug pull, but it would be harder to do.

The most recent gpt4all (https://github.com/nomic-ai/gpt4all) includes a local server compatible with the OpenAI API -- this could be a useful start!
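Since the server speaks the OpenAI chat-completions format, a client only needs to build the standard request payload. Here's a sketch of that payload in Python; the localhost URL and model name are illustrative assumptions, not something the comment specifies:

```python
import json

# Assumed local endpoint for an OpenAI-compatible server.
BASE_URL = "http://localhost:4891/v1"  # port is an assumption

def build_chat_request(prompt: str, model: str = "gpt4all-model") -> str:
    # Standard OpenAI chat-completions request body.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return json.dumps(payload)

if __name__ == "__main__":
    print(build_chat_request("Hello"))
```

POSTing that body to `BASE_URL + "/chat/completions"` with any HTTP client is all an existing OpenAI integration needs, which is what makes a compatible local server such a convenient drop-in.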
You might want to try some of the Discord channels connected to the repos, e.g. GPT4All (https://github.com/nomic-ai/gpt4all); scroll down for the Discord link.
> I wonder if AI will ever scale down to personal hardware

Happened last week. Download here.[1]

https://github.com/nomic-ai/gpt4all

Yeah, that's possible now. You don't actually need a GPU to run inference against some of the smaller language models - I've heard reports of them running on a Raspberry Pi.

The trick for the moment is to skip Python though. llama.cpp and its many variants are the ones I've heard work best.

I suggest starting with LLaMA 7B or Alpaca. More notes here: https://simonwillison.net/tags/homebrewllms/

This one is the easiest to get working I think: https://github.com/nomic-ai/gpt4all

cool.

now integrate your toy with https://github.com/nomic-ai/gpt4all

then we can have fully offline chat and databases

Not to be confused with gpt4all (https://github.com/nomic-ai/gpt4all), which is a "free" GPT LLM.

It seems this gpt4free was basically hijacking third-party services that use GPT-4, bypassing the official OpenAI APIs in order to avoid paying for inference. Of course, that means the hijacked third parties are the ones footing the bill...

I'm not surprised they have been issued a takedown notice.

It's made by the same people (https://github.com/nomic-ai/gpt4all) who initially distributed a LLaMA-based model fine-tuned for conversations over torrent. I think you're being overly critical. The GPT4All naming is fine in my opinion; it's literally "GPT for all", though I can see why people would dislike it.
Depends on whether you trust their base project, https://github.com/nomic-ai/gpt4all. gpt4all is actively distributed over torrent; I actually have a ratio of over 2.0.
This is great, but similar to GPT4All, it will likely be deemed unusable for any commercial or otherwise "legitimate" use cases, since it's trained on OpenAI completions from sharegpt.com.

https://github.com/nomic-ai/gpt4all

https://github.com/nomic-ai/gpt4all

This is one of many, many open LLM models that are approaching GPT level now. They're not on the same level yet, but given the rate of progress in this field, it's safe to say that OpenAI's moat will not be there forever.

Also, their level is already sufficient for a LOT of the tasks that people currently use OpenAI's APIs for.