I've been thinking about this. It's just fascinating to me to have a small device that you can converse with and that knows almost everything. Perfect for preppers / survivalists: store it in a Faraday cage along with a solar generator.

> knows almost everything

It really doesn't. It doesn't even know what it knows and what it doesn't know. Without a way to check whether what it told you is true, you may well end up in more trouble than you were in before.

How about a local wikipedia dump, with precalculated embeddings? Then you can perform a similarity search first and feed the results to the LLM.

It’s less likely to hallucinate this way.
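The retrieval idea above can be sketched in a few lines. This is a minimal toy, not a real setup: the `embed` function below is a stand-in for an actual embedding model (e.g. a sentence-transformer), and the three-passage "dump" stands in for precomputed Wikipedia embeddings. The shape of the pipeline is the same: embed the corpus once offline, embed the query at answer time, take the top-scoring passages, and hand only those to the LLM as context.

```python
import numpy as np

# Toy stand-in for a real embedding model; here we just count
# word overlaps against a tiny fixed vocabulary.
VOCAB = ["water", "purify", "boil", "solar", "battery", "antenna"]

def embed(text):
    words = text.lower().split()
    vec = np.array([float(words.count(w)) for w in VOCAB])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Stand-in for the local Wikipedia dump: passages embedded once, offline.
passages = [
    "Boil water for one minute to purify it.",
    "A solar panel charges the battery during the day.",
    "A dipole antenna is cut to half the wavelength.",
]
index = np.stack([embed(p) for p in passages])

def retrieve(query, k=2):
    # Dot product of unit vectors = cosine similarity.
    scores = index @ embed(query)
    top = np.argsort(scores)[::-1][:k]
    return [passages[i] for i in top]

context = retrieve("how do I purify water by boiling")
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Because the LLM is told to answer from the retrieved text rather than from its weights, you can also show the matching passages to the user for verification, which addresses the "check whether what it told you is true" problem above.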

> a local wikipedia dump

There exists (at least) a project to train and query an LLM on local documents: privateGPT - https://github.com/imartinez/privateGPT

It should provide links to the source with the relevant content, so you can check the exact text:

> You'll need to wait 20-30 seconds (depending on your machine) while the LLM model consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents

You will have noticed, from that first sentence, that the wait times mean it may not be practical, especially on an Orange Pi.