What does HackerNews think of guidance?

A guidance language for controlling large language models.

Language: Jupyter Notebook

AutoGen (https://github.com/microsoft/autogen) is orthogonal: it's designed for agents to converse with each other.

The original comparison to LangChain from Microsoft was Guidance (https://github.com/guidance-ai/guidance) which appears to have shifted development a bit. I haven't had much experience with it but from the examples it still seems like needless overhead.

Thanks for your comment.

I did not know about "Betteridge's law of headlines", quite interesting. Thanks for sharing :)

You raise some interesting points.

1) Safety: It is true that LVMs and LLMs have unknown biases and could potentially create unsafe content. However, this is not unique to them; for example, Google had the same problem with a supervised learning model (https://www.theverge.com/2018/1/12/16882408/google-racist-go....). It all depends on the original data. I believe we need systems on top of our models to ensure safety. It is also possible to restrict the output domain of our models (https://github.com/guidance-ai/guidance): instead of allowing our LVMs to output arbitrary text, we could restrict them to answering only "red", "green", "blue", etc. when giving the color of a car.
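To make that concrete, here is a minimal sketch of that kind of restriction using guidance's select helper (the model path is just a placeholder):

    from guidance import models, select

    # placeholder path; any guidance-supported backend works the same way
    lm = models.LlamaCpp("path/to/model.gguf")

    # the model can only answer with one of the listed colors
    lm += "The color of the car is: " + select(["red", "green", "blue", "black", "white"], name="color")
    print(lm["color"])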

2) Cost: You are right that LVMs are quite expensive to run right now. As you said, they are a great way to get to market faster, but they cannot run on low-cost hardware for the moment. However, they could help with training those smaller models. Indeed, we see in the NLP domain that a lot of smaller models are trained on data created with GPT models. You can still distill the knowledge of your LVMs into a custom smaller model that can run on embedded devices. The advantage is that you can use your LVMs to generate data when it is scarce, and use them as a fallback when your smaller on-device model is uncertain of the answer.
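As a rough sketch of the fallback pattern I have in mind (the model objects and predict API here are illustrative, not a real library):

    def classify(image, small_model, large_model, threshold=0.8):
        """Use the cheap on-device model first; fall back to the large model when unsure."""
        label, confidence = small_model.predict(image)   # hypothetical small-model API
        if confidence >= threshold:
            return label
        return large_model.predict(image)                # expensive LVM as a fallback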

3) Labelling data: I don't think labelling data is necessarily cheap. First, you have to collect the data; depending on the frequency of your events, that could take months of monitoring if you want to build a large-scale dataset. Second, not all labelling is cheap: I worked at a semiconductor company where labelled data was scarce because it required expert knowledge and could only be produced by experienced employees. Indeed, not all labelling can be done externally.

However, the two approaches are indeed complementary, and I think the systems that work best will rely on both.

Thanks again for the thought-provoking discussion. I hope this answers some of the concerns you raised.

I've had a bit of trouble getting function calling to work for cases that aren't just extracting some data from the input. The output format is correct, but it's harder to get the correct data when the task isn't a simple extraction.

Hopefully OpenAI and others will offer something like https://github.com/guidance-ai/guidance at some point to guarantee overall output structure.

Failed validations will retry, but from what I've seen, JSONSchema + generated JSON examples are decently reliable in practice for gpt-3.5-turbo and extremely reliable on gpt-4.
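For what it's worth, the validate-and-retry loop is simple enough to sketch; call_llm below stands in for whatever client call you actually use:

    import json
    import jsonschema

    SCHEMA = {
        "type": "object",
        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
        "required": ["name", "age"],
    }

    def generate_validated(call_llm, prompt, max_retries=3):
        """Ask the model for JSON matching SCHEMA; retry when validation fails."""
        for _ in range(max_retries):
            raw = call_llm(prompt)  # placeholder for the actual API call
            try:
                data = json.loads(raw)
                jsonschema.validate(data, SCHEMA)
                return data
            except (json.JSONDecodeError, jsonschema.ValidationError):
                continue
        raise RuntimeError("no valid JSON after retries")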

This IS Microsoft Guidance, they seem to have spun off a separate GitHub organization for it.

https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.

Other than fine-tuning and RAG, Guidance allows you to constrain the output of an LLM within a grammar, for example to guarantee JSON output 100% of the time.

Here's one library to do this https://github.com/guidance-ai/guidance
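For example, here is a rough sketch of pinning down the JSON structure with guidance: the literal braces and keys are fixed text, and only the values are generated (the model path is a placeholder):

    from guidance import models, gen

    lm = models.LlamaCpp("path/to/model.gguf")  # placeholder; any supported backend

    lm += "Describe the user as JSON.\n"
    lm += '{"name": "' + gen(stop='"', name="name") + '", "age": ' + gen(regex=r"[0-9]+", name="age") + "}"
    print(lm["name"], lm["age"])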

True. This is an open area of research. Tools like guidance (or other implementations of constrained decoding with LLMs [1, 2]) will likely help address this problem.

[1] A guidance language for controlling large language models. https://github.com/guidance-ai/guidance

[2] Knowledge Infused Decoding https://arxiv.org/abs/2204.03084

I'm not 100% sure they're talking about this specifically but logit control/manipulation is often used to conform to a specific schema.

https://github.com/guidance-ai/guidance

I'm going to butcher this explanation: after you've generated your logits but before you sample from them, you check which ones conform to your schema. If you want the only two options to be "true" or "false", then you take any of the logits that would produce invalid answers and lower their probabilities manually.

Another example: structures like JSON can be validated as they are generated, so when your sample so far is "{'name':'Carl'", you lower the probability of "{" since that would invalidate the JSON. In fact, the only valid continuations you'd likely have left would be ",", " ", and "}".
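A rough sketch of that masking with a HuggingFace LogitsProcessor (this assumes " true" and " false" each map to a single GPT-2 token; adjust for your tokenizer):

    import torch
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              LogitsProcessor, LogitsProcessorList)

    class AllowListProcessor(LogitsProcessor):
        """Push every logit except the allowed token ids down to -inf."""
        def __init__(self, allowed_ids):
            self.allowed_ids = allowed_ids

        def __call__(self, input_ids, scores):
            mask = torch.full_like(scores, float("-inf"))
            mask[:, self.allowed_ids] = 0.0
            return scores + mask

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # assumption: " true" and " false" are single tokens in GPT-2's vocabulary
    allowed = [tokenizer.encode(" true")[0], tokenizer.encode(" false")[0]]

    inputs = tokenizer("Is the sky blue? Answer:", return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=1,
        logits_processor=LogitsProcessorList([AllowListProcessor(allowed)]),
    )
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:]))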

You should probably look into Guidance [1] (previously Microsoft Guidance, but it looks like it's been separated from their main organization), which is a language for controlling the output of LLMs (so you can, among many other things, output JSON in a deterministic way).

[1]: https://github.com/guidance-ai/guidance

OpenAI has this capability built in with functions [0], I believe! Building my own project [1], I have implemented functions in combination with guidance [2] and haven't had a hiccup yet. I have a JSON parser function there, just in case, but it seems to be working reliably.

Here’s a bit more of a description of using the functions API for JSON returns: https://yonom.substack.com/p/native-json-output-from-gpt-4

[0] https://openai.com/blog/function-calling-and-other-api-updat...

[1] https://resgen.app

[2] https://github.com/guidance-ai/guidance
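Not my exact setup, but a rough sketch of what such a function-calling request looks like with the current openai client (the function schema here is made up):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    tools = [{
        "type": "function",
        "function": {
            "name": "save_user",                      # made-up function name
            "description": "Record a user's details",
            "parameters": {
                "type": "object",
                "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
                "required": ["name", "age"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Carl just turned 34, please save him."}],
        tools=tools,
        # forcing the tool call means the reply arrives as JSON arguments
        tool_choice={"type": "function", "function": {"name": "save_user"}},
    )
    print(resp.choices[0].message.tool_calls[0].function.arguments)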

LMQL (and guidance, https://github.com/guidance-ai/guidance) are much less efficient: they loop over the entire vocabulary at each step, whereas we only do it once, at initialization.
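A toy illustration of the difference (made-up helper names, not either library's actual code):

    def build_state_index(states, vocab, is_allowed):
        # One-time pass: for each automaton state, record which token ids are valid.
        # is_allowed(state, token) -> bool is the expensive grammar/regex check.
        return {s: [i for i, tok in enumerate(vocab) if is_allowed(s, tok)] for s in states}

    # At decoding time, each step is just a lookup (no vocabulary scan):
    #   allowed_ids = index[current_state]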