I don't see how the loading works for the end user's custom dataset. In fact, I find the layers of abstraction between getting the fine-tuning dataset and the actual training very opaque. I can't even tell where the dataset is coming from; it doesn't appear to be an example local to this repository.
I think a lot of people want something like: "drop .txt files of example data to train on into /folder/ and run python finetune.py /folder/".
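To make the ask concrete, here's a minimal sketch of that entry point. Everything here is hypothetical (the repo doesn't ship a finetune.py as far as I can tell): it just globs .txt files from the folder given on the command line, treats each file as one training example, and would hand the list off to whatever trainer the repo actually provides.

```python
# Hypothetical finetune.py: "drop .txt files in a folder and go" workflow.
import sys
from pathlib import Path

def load_examples(folder):
    """Read every .txt file in the folder; each file is one training example."""
    return [p.read_text(encoding="utf-8") for p in sorted(Path(folder).glob("*.txt"))]

if __name__ == "__main__" and len(sys.argv) > 1:
    examples = load_examples(sys.argv[1])
    print(f"Loaded {len(examples)} examples")
    # train(model, examples)  # hypothetical hand-off to the repo's trainer
```

No config files, no dataset classes to subclass — the folder *is* the dataset.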
This is actually what I was hoping for: a web UI where you can load a model, load some data, and hit train.
You can do this in the Stable Diffusion web UI to fine-tune models with your own dataset.
OobaBooga supports this kind of load-and-go LoRA training: https://github.com/oobabooga/text-generation-webui