How accurate is an LLM for this task? I was thinking of using one to analyze free-form PDF text to find a specific element, but I was worried about hallucinations.

Extractive tasks are where LLMs shine, and where you get the least hallucination, as long as you fine-tune your model.

By fine-tuning the model to extract a specific desired output from the text you give it, it learns that the output always comes from the input, so you get fewer random outputs than you would by just prompting an instruction-tuned model (which was fine-tuned to pull answers from its weights instead of copying them from the input).
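A minimal sketch of what such a fine-tuning dataset can look like. The field names ("prompt"/"completion"), the invoice-number task, and the exact prompt wording are all illustrative assumptions here; match whatever format your training framework actually expects. The key idea is just that the completion is always a literal substring of the input:

```python
import json

# Hypothetical example builder for extractive fine-tuning data.
# "prompt"/"completion" keys and the invoice task are illustrative only.
def make_example(page_text: str, target_value: str) -> dict:
    # The label must be a literal substring of the input text --
    # that's what teaches the model to copy rather than invent.
    assert target_value in page_text, "label must come from the input text"
    return {
        "prompt": (
            "Extract the invoice number from the text below.\n\n"
            f"{page_text}\n\nInvoice number:"
        ),
        "completion": f" {target_value}",
    }

# One JSONL line per labelled page:
example = make_example("Invoice INV-0042 issued 2023-07-01 to Acme Corp.", "INV-0042")
print(json.dumps(example))
```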

I'm pretty ignorant about which self-hosted LLM is best for such a task, or how to fine-tune it. Do you know of any resources on how to set that up?

It seems like Llama 2 is the biggest name on HN when it comes to self-hosting, but I have no idea how it actually performs.

You could just try it out if you have the hardware at home.

Grab KoboldCPP and a GGML model from TheBloke that fits your RAM/VRAM and try it.

Make sure you follow the prompt structure for the model that you will see on TheBloke's download page for the model (very important).

KoboldCPP: https://github.com/LostRuins/koboldcpp

TheBloke: https://huggingface.co/TheBloke

I would start with a 7B or 13B model quantized to 4 bits just to get the hang of it. Some generic or storytelling model.

Just make sure you follow the prompt structure that the model card lists.
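To make the "prompt structure" point concrete: each model family expects a different template, and Llama-2-chat's is roughly the one sketched below (as I understand it; always double-check against the exact template printed on the model card you downloaded, since Alpaca, Vicuna, etc. all differ):

```python
# Sketch of the Llama-2-chat prompt template. Verify against the
# model card for your specific download -- other families differ.
def llama2_chat_prompt(system: str, user: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt(
    "You extract invoice numbers from documents. Answer with the number only.",
    "Invoice INV-0042 issued 2023-07-01 to Acme Corp.",
)
```

If you skip the template, the model often rambles or ignores the instruction entirely, which is why the model cards stress it.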

KoboldCPP is very easy to use. You just drag the model file onto the executable, wait until it loads, and go to the web interface.
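Once it's running, KoboldCPP also exposes a KoboldAI-compatible HTTP API, so you could script the PDF extraction instead of pasting text into the web UI. A rough sketch, assuming the default port (5001) and the `/api/v1/generate` endpoint; check the KoboldCPP README for the exact parameters it accepts:

```python
import json
import urllib.request

# Assumed default KoboldCPP address/port -- adjust if you changed it.
API_URL = "http://localhost:5001/api/v1/generate"

def build_payload(prompt: str, max_length: int = 80) -> dict:
    # Low temperature keeps extraction deterministic-ish.
    return {"prompt": prompt, "max_length": max_length, "temperature": 0.2}

def generate(prompt: str) -> str:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]
```

Then `generate(...)` with your formatted prompt returns the model's completion as a string.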