What does HackerNews think of NeMo-Guardrails?

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Language: Python

When you're building applications on top of LLMs, there are a number of central problems you're trying to solve, and this is one of them. Solutions are numerous and widely variable, everything from basic regex parsing to fine-tuned validator models to new programming/modeling languages. Here are some examples (a minimal sketch of the regex approach follows the list):

  - https://github.com/microsoft/guidance
  - https://github.com/NVIDIA/NeMo-Guardrails/
  - https://github.com/r2d4/rellm
  - https://shreyar.github.io/guardrails/
  - https://lmql.ai/
  - https://github.com/jbrukh/gpt-jargon
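
To make the simplest end of that spectrum concrete, here is a minimal sketch of a regex-based output guard. The pattern list and the `moderate` helper are made up for illustration; they don't come from any of the libraries above.

```python
import re

# Hypothetical block-list of patterns we never want to pass through to the user.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US-SSN-shaped numbers
    re.compile(r"(?i)\bas an ai language model\b"),  # boilerplate worth catching
]

def moderate(llm_output: str) -> str:
    """Return the LLM output unchanged, or a canned refusal if it trips a pattern."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(llm_output):
            return "Sorry, I can't share that."
    return llm_output

print(moderate("My SSN is 123-45-6789"))  # -> Sorry, I can't share that.
```
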
Yes, this is feasible.

Look into https://github.com/NVIDIA/NeMo-Guardrails; specifically for your question, there are "topical rails" to ensure the conversation stays on a set of topics you've greenlighted.

It also takes care of jailbreaks and allows custom conversation flow templates.
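
If you want to see what a topical rail looks like in practice, here is a minimal sketch using NeMo Guardrails' Colang config. The message names, example utterances, and the OpenAI model choice in the YAML are my own assumptions, not anything prescribed by the toolkit:

```python
from nemoguardrails import LLMRails, RailsConfig

# Illustrative Colang rail: steer any politics question into a refusal flow.
colang = """
define user ask about politics
  "What do you think about the election?"
  "Who should I vote for?"

define bot refuse off topic
  "Sorry, I can only help with questions about our product."

define flow politics rail
  user ask about politics
  bot refuse off topic
"""

# Minimal model config; the engine/model here is just an example.
yaml = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml)
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user", "content": "Who should I vote for?"}]))
```

Anything the user says that matches the `ask about politics` examples gets routed into that flow instead of going straight to the underlying model.
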

Yeah, there are so many platforms out there; it's incredibly tough to figure out which ones are heading in the right direction.

- https://shreyar.github.io/guardrails/
- https://github.com/NVIDIA/NeMo-Guardrails
- https://www.askmarvin.ai/

Thanks, I hadn't seen those. I did find https://github.com/NVIDIA/NeMo-Guardrails earlier but haven't looked into it yet.

I'm not sure it solves the problem of restricting the information the model uses, though. For example, in a proof of concept for a customer, I provided information from a vector database as context, but GPT would still answer questions whose answers were not in that context. It based those answers on content that had already been crawled from the customer's website into the model's training data. That is concerning because the website might get updated, but you can't update the model yourself (among other reasons).
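
For what it's worth, the mitigation I'd try next is instructing the model to answer strictly from the retrieved context and to refuse otherwise; that reduces, but doesn't eliminate, answers drawn from the model's training data. A rough sketch (the prompt wording, the model name, and the `answer_from_context` helper are made up for illustration, not part of NeMo Guardrails):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_from_context(question: str, context_chunks: list[str]) -> str:
    """Ask the model to answer only from the retrieved chunks, refusing otherwise."""
    context = "\n\n".join(context_chunks)
    system = (
        "Answer ONLY using the context below. "
        "If the answer is not in the context, reply exactly: 'I don't know.'\n\n"
        f"Context:\n{context}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```
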