What does HackerNews think of guidance?

A guidance language for controlling large language models.

Language: Jupyter Notebook

This looks nice!

I'm frequently putting together complex chains of prompts and have found Microsoft guidance to be quite nice. Have you looked at it?

It feels like the abstraction I'd like the most is some way to combine few-shot prompting, voting, and chain-of-thought.

I have to create a prompt chain like this fairly often: write a description for this thing, given these previous inputs and outputs as examples (few-shot). Do it N times with high randomness, maybe with expert personas, then review the solutions one by one with pros and cons (chain of thought), and then use all of that to create a better description (final answer).

Right now, I always have to write out and connect all the steps, but it's fairly rote, and I think other prompting chains have a similar repetitiveness to them.

https://github.com/microsoft/guidance
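As a rough sketch of the chain described above (the model name, prompts, and `chat()` helper here are all illustrative assumptions, not part of any library):

```python
# Hypothetical sketch of the few-shot -> vote -> review -> synthesize chain.
from openai import OpenAI

client = OpenAI()

def chat(prompt: str, n: int = 1, temperature: float = 0.0) -> list[str]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat model works here
        messages=[{"role": "user", "content": prompt}],
        n=n,
        temperature=temperature,
    )
    return [choice.message.content for choice in resp.choices]

# few-shot: prior inputs and outputs as examples
examples = [("red bicycle", "A sturdy commuter bike in matte red."),
            ("desk lamp", "An adjustable lamp with a warm LED bulb.")]
few_shot = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)

# sample N candidates with high randomness (the voting pool)
candidates = chat(f"{few_shot}\nInput: canvas backpack\nOutput:",
                  n=5, temperature=1.2)

# chain of thought: review each candidate with pros and cons
numbered = "\n".join(f"{k + 1}. {c}" for k, c in enumerate(candidates))
review = chat("Review each description below, listing pros and cons:\n"
              + numbered)[0]

# final answer: synthesize a better description from the reviews
final = chat(f"Candidates:\n{numbered}\n\nReviews:\n{review}\n\n"
             "Write one improved description.")[0]
```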

Here's one thing I don't get.

Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...

...when you can guarantee valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.

This is what Guidance (also from Microsoft) already does: https://github.com/microsoft/guidance

But OpenAI apparently does not expose the full scores of all tokens; it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your JSON is correct every time. It's faster to generate, too!
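A toy version of that idea with a local model, for illustration only (the regex, prompt, and model choice are assumptions; guidance implements this properly and far more efficiently):

```python
import regex  # third-party 'regex' package; supports partial matching
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# the required output format, expressed as a regex
fmt = regex.compile(r'\{"name": "[A-Za-z ]+", "age": [0-9]+\}')

def constrained_greedy(prompt: str, max_new_tokens: int = 40) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    out = ""
    for _ in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        # instead of taking the top token outright, take the top token
        # that still leaves us on a valid prefix of the format
        for t in torch.argsort(logits, descending=True):
            cand = out + tok.decode([int(t)])
            if fmt.fullmatch(cand, partial=True):
                out = cand
                ids = torch.cat([ids, t.view(1, 1)], dim=1)
                break
        else:
            break  # no token can extend a valid prefix; give up
        if fmt.fullmatch(out):
            return out  # format complete: guaranteed-valid JSON
    return out
```

This toy loop rescans the vocabulary each step; a real implementation masks the whole logit vector at once and skips forced template tokens entirely, which is where the generation speedup comes from.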

I'm very surprised that they're not using `guidance` [0] here.

Not only would it let them ensure that required fields are completed (avoiding the need for validation [1]), it would probably save them GPU time in the end.

There must be a reason and I'm dying to know what it is! :)

Side note: I was in the process of building this very thing and good ol' Microsoft just swung in and ate my lunch... :/

[0] https://github.com/microsoft/guidance

[1] https://github.com/microsoft/TypeChat/blob/main/src/typechat...
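For reference, here's roughly what that looks like in guidance's original handlebars-style syntax (modeled on the character example in its README; the model choice is an assumption). Everything outside the {{...}} tags is forced output, so the structure can't come back malformed and no validation pass is needed:

```python
import guidance

guidance.llm = guidance.llms.Transformers("gpt2")  # any local model

# the literal braces, quotes, and keys are emitted as-is; the model only
# generates the field values, constrained by stop strings and patterns
program = guidance("""{
    "name": "{{gen 'name' stop='"'}}",
    "age": {{gen 'age' pattern='[0-9]+' stop=','}},
    "armor": "{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}"
}""")

result = program()
print(result["name"], result["age"], result["armor"])
```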

It's not super clear how this differs from another recently released library from Microsoft: Guidance (https://github.com/microsoft/guidance).

They both seem to aim to solve the problem of getting typed, valid responses back from LLMs.

I'm not familiar with how TypeChat works, but Guidance [1] is another similar project that can actually integrate into the token sampling to enforce formats.

[1]: https://github.com/microsoft/guidance

Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local models via plugins?

I've been meaning to ask you, have you seen/used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in the llm CLI, but there's a lot about Guidance that seems incredibly useful for local inference [token healing and acceleration especially].

[0] https://github.com/microsoft/guidance

LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation; then the onus is on the n parties sharing their resources to ensure they all used the same templates, generated the same way, with the only diff being the models the prompts were run on.

So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.

Also, LangChain has a lot of integrations that pop up as soon as a new API for anything LLM-related appears, so that helps with new-user onboarding as well.
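For instance, under guidance's handlebars-era API (the per-call `llm` override here is my assumption), a single shared template can be replayed against different backends, so the only variable between runs is the model:

```python
import guidance

# one shared template: the prompt-generation methodology stays identical
template = guidance("""Summarize in one sentence: {{document}}
Summary: {{gen 'summary' temperature=0}}""")

for llm in (guidance.llms.OpenAI("text-davinci-003"),
            guidance.llms.Transformers("gpt2")):
    result = template(document="LLM tooling is proliferating rapidly.", llm=llm)
    print(result["summary"])
```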

Microsoft guidance is legit and useful. It's a bunch of prompting features piled on top of handlebars syntax. (And it has its own caching: set temp to 0 and it caches; no need for LLM-specific caching libs :))

https://github.com/microsoft/guidance
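A minimal sketch of the caching behavior described above, assuming the handlebars-era API:

```python
import guidance

guidance.llm = guidance.llms.OpenAI("text-davinci-003")

qa = guidance("""Q: {{question}}
A: {{gen 'answer' temperature=0}}""")

first = qa(question="What is token healing?")   # calls the API
second = qa(question="What is token healing?")  # temp 0 + identical inputs:
                                                # served from guidance's cache
```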

Wouldn't this be possible with a solution like Guidance, where you have a pre-structured JSON format ready to go and all you need is the text: https://github.com/microsoft/guidance
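It would be, at least with the handlebars-era API. A minimal sketch, where the JSON skeleton is fixed in the template and only the text values are ever sampled:

```python
import guidance

guidance.llm = guidance.llms.OpenAI("text-davinci-003")

note = guidance("""{"title": "{{gen 'title' stop='"'}}",
 "text": "{{gen 'text' stop='"'}}"}""")

print(note()["text"])  # the surrounding JSON structure was never up for grabs
```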