How does this compare to LangChain?
I don't know how its performance compares, but its architecture is completely different: LangChain is a "normal" software library, but Gorilla is itself an LLM:
> Gorilla is a LLM that can provide appropriate API calls. It is trained on three massive machine learning hub datasets: Torch Hub, TensorFlow Hub and HuggingFace. We are rapidly adding new domains, including Kubernetes, GCP, AWS, OpenAPI, and more. Zero-shot Gorilla outperforms GPT-4, Chat-GPT and Claude. Gorilla is extremely reliable, and significantly reduces hallucination errors.
My reading of that abstract is that it's an LLM that outputs API calls instead of natural language (or maybe it still outputs natural language, but it can use API calls during inference? I didn't read very far), whereas LangChain is simply a software library. In theory, you could probably get Gorilla to output LangChain "API" (function) calls...
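To make that architectural difference concrete, here is a rough Python sketch of the "model emits an API call as text" pattern. Everything here is hypothetical: `fake_gorilla` is a canned stand-in for the real finetuned model (not Gorilla's actual interface), and the returned Torch Hub call is just an illustrative example of the kind of string such a model might produce.

```python
# Hypothetical sketch of the "LLM that outputs API calls" pattern.
# A real deployment would query the finetuned model; here we fake it
# with a canned response so the flow is visible end to end.

def fake_gorilla(prompt: str) -> str:
    # Stand-in for the model: given a natural-language task, it
    # returns source text for an API call rather than prose.
    return 'torch.hub.load("pytorch/vision", "resnet18", pretrained=True)'

# The host program receives plain text and decides whether/how to
# parse, validate, and execute it -- the model itself calls nothing.
call_text = fake_gorilla("Load an image classification model")
print(call_text)
```

The key contrast: with a library like LangChain, your program calls its functions directly; with a Gorilla-style model, the model's *output* is the function call, and executing it is the host program's job.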
"We release Gorilla, a finetuned LLaMA-based model that surpasses the performance of GPT-4 on writing API calls"
Sounds like it's another LLaMA variant, specifically fine-tuned for API calls.
Good point. This was the original release; we now also have Apache-2.0-licensed models finetuned on MPT-7B and Falcon-7B!