What does HackerNews think of privateGPT?
Interact privately with your documents using the power of GPT, 100% privately, no data leaks
This resulted in a Python package I call OnPrem.LLM.
The documentation includes examples of how to use it for information extraction, text generation, retrieval-augmented generation (i.e., chatting with documents on your computer), and text-to-code generation: https://amaiya.github.io/onprem/
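For readers unfamiliar with retrieval-augmented generation, here is a minimal sketch of the general flow such tools follow: split your documents into chunks, retrieve the chunks most relevant to the question, and stuff them into the prompt as context. The function names and the keyword-overlap scoring below are illustrative assumptions, not OnPrem.LLM's actual API (real tools use embedding-based similarity, not word overlap).

```python
# Toy RAG pipeline: retrieve relevant chunks, then build a context-stuffed
# prompt for the LLM. All names here are hypothetical, for illustration only.

def score(query, chunk):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query, chunks, k=2):
    """Return the k chunks scoring highest against the query."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query, chunks, k=2):
    """Assemble the prompt the LLM would actually consume."""
    context = "\n".join(retrieve(query, chunks, k))
    return f"Use the context to answer.\nContext:\n{context}\nQuestion: {query}"

docs = [
    "Invoices are due within 30 days of receipt.",
    "The cafeteria serves lunch from noon to two.",
    "Late invoices accrue a two percent penalty.",
]
prompt = build_prompt("When are invoices due?", docs)
```

Because the retrieved chunks are known, the tool can also print them back as the "sources" for the answer, which is what the quoted README passage below describes.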
Enjoy!
There exists (at least) one project for training and querying an LLM on local documents: privateGPT - https://github.com/imartinez/privateGPT
It should provide links to the sources with the relevant content, so you can check the exact text:
> You'll need to wait 20-30 seconds (depending on your machine) while the LLM model consumes the prompt and prepares the answer. Once done, it will print the answer and the 4 sources it used as context from your documents
You will have noticed, in that first sentence, that it may not be practical, especially on an Orange Pi.
First, the name is similar to a better-known repo: https://github.com/imartinez/privateGPT - only the capitalization of the P differs.
Even while giving credit, it links to the creator rather than to the original repo, as if to obscure the origin.
How long do y'all think it'll take for production-ready projects like this to run effectively on something like an M2 chip?