Looks awesome. Any plans to allow it to use local LLMs (like LLaMA) instead of the OpenAI APIs?

Great question, and something I’ve thought about.

The main issue right now is that I’m relying on the OpenAI API’s function-calling capabilities to enable the use of functions in workflows.
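
To make that concrete, here's roughly what it looks like with the openai Python package (the 0.x-style API); the get_weather function is just a made-up example:

    import json
    import openai

    # Describe the function in JSON Schema; the model decides when to call it.
    functions = [{
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": "Weather in Berlin?"}],
        functions=functions,
    )

    message = response["choices"][0]["message"]
    if message.get("function_call"):
        # The model returns a name plus JSON-encoded arguments;
        # actually executing the function is up to the caller.
        args = json.loads(message["function_call"]["arguments"])
        print(message["function_call"]["name"], args)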

To use other LLMs, we'd need to build some additional wrapping around them to support that kind of function calling. (As far as I know, anyway; if any open-source LLMs already have function calling built in, let me know.)
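
For example, one hypothetical way to fake it: prompt the model to reply with JSON matching a schema, then parse the reply. Here `generate` stands in for whatever completion call your local model exposes:

    import json

    def call_with_functions(generate, user_message, functions):
        # Ask the model to answer with a JSON function call (or plain prose).
        prompt = (
            "You may call one of these functions by replying with JSON "
            'of the form {"name": ..., "arguments": {...}} and nothing '
            "else.\n"
            f"Functions: {json.dumps(functions)}\n"
            f"User: {user_message}"
        )
        reply = generate(prompt)
        try:
            call = json.loads(reply)
            return call["name"], call["arguments"]
        except (json.JSONDecodeError, KeyError):
            return None, reply  # model answered in prose instead

The parsing step is exactly the weak point: nothing forces a local model to stick to the format, whereas OpenAI's models are fine-tuned for it.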

LangChain (which I'm not using because I personally find it overcomplicated and too obfuscating) does support function calling, but it relies on prompting patterns like ReAct (if I remember correctly), which aren't as reliable as OpenAI's native implementation.

go-skynet/LocalAI[0] or lm-sys/FastChat[1] could probably help here, since both can emulate the OpenAI API on top of local models.

0: https://github.com/go-skynet/LocalAI

1: https://github.com/lm-sys/FastChat/
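
If they speak the same wire protocol, swapping them in should be as simple as repointing the client. A sketch, assuming a LocalAI-style server listening on localhost:8080:

    import openai

    # Point the stock openai client at a local OpenAI-compatible server.
    openai.api_base = "http://localhost:8080/v1"
    openai.api_key = "unused-but-required"

    response = openai.ChatCompletion.create(
        model="your-local-model",  # whatever name the local server exposes
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response["choices"][0]["message"]["content"])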

Edit: not sure if either of these supports function calling, though.