I didn't see any info on how this is different from installing/running llamacpp or koboldcpp. New offerings are awesome of course, but what is it adding?

The main difference is that with those you set everything up yourself manually: downloading the model, tuning the parameters for best performance, and running an API server plus a UI front-end, which is out of reach for most non-technical people. With LlamaGPT it's just one command, `docker compose up -d`, or a one-click install for umbrelOS home server users.
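
For context, here's roughly what that one command bundles together: an API server that serves a locally downloaded model, and a web UI pointed at it. This is only a minimal sketch to illustrate the idea; the image names, ports, and environment variables below are hypothetical, not LlamaGPT's actual compose file.

```yaml
# docker-compose.yml -- illustrative sketch only; image names, ports, and
# environment variables are hypothetical, not taken from the LlamaGPT repo.
services:
  api:
    image: example/llama-api-server:latest   # hypothetical image wrapping a llama.cpp-style HTTP server
    volumes:
      - ./models:/models                      # mount a locally downloaded model file
    environment:
      - MODEL=/models/llama-2-7b-chat.gguf    # hypothetical variable selecting which model to load
    ports:
      - "3001:8000"

  ui:
    image: example/llama-chat-ui:latest       # hypothetical chat front-end
    environment:
      - API_BASE_URL=http://api:8000          # hypothetical: point the UI at the api service
    ports:
      - "3000:3000"
    depends_on:
      - api
```

With a file like that in place, `docker compose up -d` starts both containers in the background, which is the whole appeal: the model download, server flags, and UI wiring are decided for you.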

Agreed.

GPT4All[1] offers a similarly simple setup via downloadable application executables, but it's arguably more open core, since its maker (Nomic) wants to sell you vector-database add-ons on top.

[1] https://github.com/nomic-ai/gpt4all

I like this one because it feels more private / isn't being pushed by a company that could pull the rug out. A rug pull is still possible here, but it would be harder to do.