What does HackerNews think of peft?

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Language: Python

#124 in Python
Is it?

Literally every example I've seen so far is completely unversioned, and mere weeks after being written it simply doesn't work as a direct consequence.

E.g.: https://github.com/oobabooga/text-generation-webui/blob/ee68...

Take this line:

    pip3 install torch torchvision torchaudio
Which version of torch is this? The latest.

    FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
Which version of CUDA is this? An incompatible one, apparently. Game over.
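
For contrast, a reproducible version of that install would pin the packages and point at the wheel index matching the image's CUDA version. Something like this (the exact versions here are illustrative, not from the repo):

    pip3 install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 \
        --index-url https://download.pytorch.org/whl/cu118
Now the torch build and the CUDA 11.8 base image can't silently drift apart.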

Check out "requirements.txt":

    accelerate==0.18.0
    colorama
    datasets
    flexgen==0.1.7
    gradio==3.25.0
    markdown
    numpy
    pandas
    Pillow>=9.5.0
    pyyaml
    requests
    rwkv==0.7.3
    safetensors==0.3.0
    sentencepiece
    tqdm
Wow. Less than half of those have any version specified. The rest? "Meh, I don't care, whatever."
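
The boring fix has existed for years: compile the loose list into a fully pinned lockfile and install only from that. A sketch using pip-tools (assuming the loose list above is renamed requirements.in):

    pip install pip-tools
    pip-compile requirements.in   # resolves and pins every package, transitive deps included
    pip-sync requirements.txt     # installs exactly the pinned set, nothing more
That way the build installs the same bytes next month as it does today.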

Then this beauty:

    git+https://github.com/huggingface/peft
I love reaching out to the Internet in the middle of a build pipeline to pull the latest commit of a random repo, because that's so nice and safe, scalable, and cacheable in an artefact repository!
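
And pip even lets you pin a Git dependency to a ref, so there's no excuse; either of these forms (the tag and hash below are placeholders, pick a real one) is reproducible:

    git+https://github.com/huggingface/peft@v0.2.0
    git+https://github.com/huggingface/peft@<commit-sha>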

The NPM ecosystem gets regularly excoriated for the exact same mistakes, which by now are so well known, so often warned against, so often exploited, so regularly broken that it's getting boring.

It's like SQL injection. If you're still doing it in 2023, if your site is still getting hacked because of it, then you absolutely deserve to be labelled immature and even childish.

I haven't done this with sentence transformers, but I imagine it's possible since they can be loaded as regular transformers.

Check out https://github.com/huggingface/peft -- they've packaged it up nicely -- and read up on LoRA (https://arxiv.org/pdf/2106.09685.pdf). That should get you started.
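
If it helps, the whole thing is only a few lines once you have an ordinary transformers model in hand. A minimal sketch (the checkpoint name and LoRA hyperparameters here are illustrative):

    from transformers import AutoModelForSequenceClassification
    from peft import LoraConfig, TaskType, get_peft_model

    # Any ordinary transformers checkpoint works; this one is just an example.
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # LoRA freezes the base weights and trains small low-rank adapter matrices.
    lora = LoraConfig(
        task_type=TaskType.SEQ_CLS,
        r=8,             # rank of the adapter matrices
        lora_alpha=16,   # scaling applied to the adapter output
        lora_dropout=0.1,
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of the weights
From there it trains like any other transformers model.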

> With no real knowledge of LLMs, and having only recently started to understand what LLM terms mean, such as 'model, inference, LLM model, instruction set, fine tuning', what else do you think is required to make a tool like yours?

This was me a few weeks ago. I got interested in all this when FlexGen (https://github.com/FMInference/FlexGen) was announced, which made it possible to run inference with the OPT models on consumer hardware. I'm an avid user of Stable Diffusion, and I wanted to see if I could have an SD equivalent of ChatGPT.

Not understanding the details of hyperparameters or terminology, I basically asked ChatGPT to explain to me what these things are:

    Explain to someone who is a software engineer with limited knowledge of ML terms or linear algebra, what is "feed forward" and "self-attention" in the context of ML and large language models. Provide examples when possible.
I did the same with all the other terms I didn't understand, like "ADAM optimizer", "gradient", etc. I relied on it very heavily and cross-referenced the answers.
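
In case it saves someone a prompt: stripped of batching and multiple heads, self-attention boils down to a handful of matrix products. A toy NumPy sketch (not any library's actual implementation):

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projections."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])       # similarity of every token pair
        scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
        return weights @ V  # each output row is a weighted mix of the value vectors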

Looking at other people's code and just tinkering with things on my own really helped.

Through the FlexGen Discord I discovered https://github.com/oobabooga/text-generation-webui, where I spent days just playing around with models. This got me into the Hugging Face ecosystem -- their transformers library is an easy way to get started. I joined a few other Discords, like LLaMA Unofficial, RWKV, EleutherAI, Together, Hivemind and Petals.
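
If you want the two-minute version of getting started with transformers, the pipeline API is roughly this (the model choice is illustrative; any causal LM from the Hub works):

    from transformers import pipeline

    # gpt2 is just small and quick to download; swap in any text-generation model.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("Parameter-efficient fine-tuning is", max_new_tokens=30)[0]["generated_text"])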

I bookmarked a bunch of resources, but they're pretty scattered. Here are some:

- https://github.com/zphang/minimal-llama/#peft-fine-tuning-wi...

- https://github.com/togethercomputer/OpenChatKit

- https://www.cstroik.com/index.php/2023/02/18/finetuning-an-a...

- https://github.com/huggingface/peft

- https://github.com/kingoflolz/mesh-transformer-jax/blob/mast...

- https://github.com/oobabooga/text-generation-webui

- https://github.com/hizkifw/WebChatRWKVstic

- https://github.com/ggerganov/whisper.cpp

- https://github.com/qwopqwop200/GPTQ-for-LLaMa

- https://github.com/oobabooga/text-generation-webui/issues/14...

- https://github.com/bigscience-workshop/petals

- https://github.com/alpa-projects/alpa