What does HackerNews think of langchain?
⚡ Building applications with LLMs through composability ⚡
Edward de Bono, the person who coined the term lateral thinking, lists random juxtaposition as one of the tools for spurring creativity. [1], [2]
In his Six Thinking Hats [2], he describes six different modes of thinking, each coded by a colored hat.
- Blue hat: Organizing
- White hat: Information, facts, stats
- Green hat: Creative ideas
- Yellow hat: Benefits and positives
- Black hat: Caution, risks, negatives
- Red hat: Emotions
He asks us (as a team) to look at a problem (he calls it a Focus, e.g., "Focus: AGI in military use") wearing one hat at a time. So, when we all wear the white hat, we bring data, citations, relevant prior work, etc. We don't expend energy evaluating this data at that moment (that comes later, when we wear a different hat, namely the black hat).
His theory is that we can think better with the Six Thinking Hats method.
So, applying this analogy to LLMs, hallucinations of LLMs can be thought of as the LLM wearing a green hat.
A theorem prover or fact checker can be added to act as the black hat. (LLMs themselves are capable of doing this critical review, e.g., "list 5 points for and against fossil fuels"; see the sketch after the references.)
Extending this analogy further, we have tools like LangChain [3] that focus on the organizing bit (blue hat), and ChatGPT plugins that provide up-to-date information, run computations, or use third-party services (white hat).
The green and yellow hats are already supported out of the box by LLMs.
The red hat is a sentiment analyzer (a classic machine learning task) that LLMs already subsume.
So, it is just a matter of time before this gets refined enough that we don't have to worry about hallucinations getting in the way.
[1]: https://www.amazon.com/Serious-Creativity-Thoughts-Reinventi...
[2]: https://www.amazon.com/Six-Thinking-Hats-Edward-Bono/dp/0316...
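The green-hat/black-hat split above is easy to prototype as two LLM calls: one prompt that brainstorms freely and a second that critiques the draft. Below is a minimal sketch of that idea; the `chat()` helper is a hypothetical placeholder for whatever LLM client you actually use (OpenAI, a LangChain chat model, a local model, etc.), not any particular library's API.

```python
# Minimal sketch of the green-hat/black-hat split: one call drafts freely
# (hallucinations tolerated), a second call critiques the draft.
# `chat()` is a hypothetical placeholder, not a specific library's API.

def chat(prompt: str) -> str:
    """Stand-in for a single-turn LLM call; returns a canned string so the sketch runs."""
    return f"[model output for a {len(prompt)}-character prompt]"

def green_hat(focus: str) -> str:
    # Creative, unconstrained generation.
    return chat(f"Brainstorm ideas about: {focus}. Do not self-censor.")

def black_hat(focus: str, draft: str) -> str:
    # Critical review pass: flag risks and claims that need verification.
    return chat(
        f"Focus: {focus}\n\nDraft:\n{draft}\n\n"
        "List every claim above that is risky, unsupported, or likely false."
    )

if __name__ == "__main__":
    focus = "AGI in military use"
    draft = green_hat(focus)
    print(black_hat(focus, draft))
```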
> The mental model we build and update is not just based on a linear stream, but many parallel and even contradictory sensory inputs
So just like multimodal language models, for instance GPT-4?
> as experiences in a world of which we are part of.
> The simple fact that we don't just complete streams, but do so with goals, both immediate and long term, and fit our actions into these goals
Unfalsifiable! GPT-4 can talk about its experiences all day long. What's more, GPT-4 can act agentic if prompted correctly. [2] How do you qualify a "real goal"?
[1] https://www.neelnanda.io/mechanistic-interpretability/othell...
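"Agentic if prompted correctly" really just means wrapping the model in a loop: ask it for its next action, execute that action, feed the observation back in, repeat. A toy sketch of that loop, where `llm()` is a hypothetical stand-in (returning a canned reply so the example runs) and the CALC/ANSWER reply format is an assumption for illustration:

```python
# Toy sketch of "agentic if prompted correctly": a plain loop that asks the
# model for its next action, executes it, and feeds the observation back in.
# `llm()` and the CALC/ANSWER reply format are assumptions for illustration.

def llm(transcript: str) -> str:
    """Stand-in for a chat-completion call; canned reply so the sketch runs."""
    return "ANSWER: 42"

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = (
        "Reply with 'CALC: <python expression>' to use a calculator,\n"
        "or 'ANSWER: <text>' when the goal is met.\n"
        f"Goal: {goal}\n"
    )
    for _ in range(max_steps):
        reply = llm(transcript)
        if reply.startswith("ANSWER:"):
            return reply[len("ANSWER:"):].strip()
        if reply.startswith("CALC:"):
            expr = reply[len("CALC:"):].strip()
            observation = str(eval(expr, {"__builtins__": {}}))  # demo only, not safe
            transcript += f"{reply}\nObservation: {observation}\n"
    return "gave up"

print(run_agent("What is 6 * 7?"))
```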
See also the ARC paper where the model was capable of recruiting and convincing a TaskRabbit worker to solve captchas.
I think many people make the mistake of seeing raw LLMs as some sort of singular entity when, in fact, they're more like a simulation of a text-based "world" (with multimodal models adding images and other data). The LLM itself isn't an agent and doesn't "will" anything, but it can simulate entities that definitely behave as if they do. Fine-tuning and RLHF can somewhat force it into a consistent role, but it's not perfect, as evidenced by the multitude of ChatGPT and Bing jailbreaks.
https://github.com/jerryjliu/llama_index
and/or LangChain ("Building applications with LLMs through composability").
PS: I have not tried either myself.
For answering "queries", it appears that it iterates over the documents in the store (i.e., NOT using it like an index), feeding each document as part of the context into the LLM.
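The pattern being described is roughly an iterate-and-refine loop: each document in the store becomes context for another LLM call, so answering one query costs one model call per document rather than an index lookup. A sketch of that pattern under those assumptions; the `llm()` helper and the document list are hypothetical placeholders, not llama_index's actual API:

```python
# Sketch of the iterate-and-refine pattern described above: every document is
# pushed through the LLM in turn and the running answer is refined each time.
# Names here are hypothetical placeholders, not llama_index's actual API.
from typing import Iterable

def llm(prompt: str) -> str:
    """Stand-in for an LLM call; canned reply so the sketch runs."""
    return "[refined answer]"

def answer_query(query: str, documents: Iterable[str]) -> str:
    answer = "No answer yet."
    for doc in documents:
        # One LLM call per document -- O(n) calls per query, which is why
        # this is "NOT using it like an index".
        answer = llm(
            f"Question: {query}\n"
            f"Existing answer: {answer}\n"
            f"New context:\n{doc}\n"
            "Refine the existing answer using the new context."
        )
    return answer

docs = [
    "LangChain is a framework for composing LLM calls.",
    "LlamaIndex focuses on connecting LLMs to external data.",
]
print(answer_query("What is LangChain?", docs))
```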