What does HackerNews think of tree-of-thought-llm?

Official Implementation of "Tree of Thoughts: Deliberate Problem Solving with Large Language Models"

Language: Python

This is Shunyu, author of Tree of Thoughts (arxiv.org/abs/2305.10601).

The official code to replicate the paper's results is https://github.com/ysymyth/tree-of-thought-llm

Not https://github.com/kyegomez/tree-of-thoughts, which, according to many who have told me, is not a correct or good implementation of ToT and damages its reputation.

I explained the situation here: https://twitter.com/ShunyuYao12/status/1663946702754021383

I'd appreciate your help by unstarring his repo and starring mine, as GitHub and Google searches currently go to his repo by default, and it has been very misleading for many users.

I'm a little confused about what the relation is (if any) between the OP link and the repo from that paper: https://github.com/ysymyth/tree-of-thought-llm

Is it basically a reimplementation using Guidance instead of OpenAI's API directly?

Here is a summary of the key points in the PDF:

Large language models are increasingly being used for general problem solving, but they are still limited by their token-level, left-to-right generation process.

The authors propose a new Tree of Thoughts (ToT) framework to address this. ToT frames problem solving as search through a tree of "thoughts", where each thought is an intermediate step towards the solution.

This allows the language model to do the following (sketched in code after the list):

- Generate and explore multiple potential thoughts at each step

- Evaluate the progress of different thoughts using self-evaluation prompts

- Perform lookahead and backtracking to make global decisions
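
To make the control flow concrete, here is a minimal sketch of the BFS variant of that search loop. This is my own paraphrase, not the official code; propose and score are hypothetical stand-ins for the paper's LLM-prompted thought generation and self-evaluation calls:

    def tot_bfs(root, propose, score, steps, keep=5):
        """Tree-of-Thoughts BFS sketch: expand every frontier state,
        self-evaluate the candidates, and prune to the best few."""
        frontier = [root]
        for _ in range(steps):
            # Generate several candidate next thoughts from each frontier state.
            candidates = [nxt for state in frontier for nxt in propose(state)]
            # Keep only the most promising partial solutions. Branches that stop
            # looking promising fall off the frontier, a rough analogue of the
            # paper's lookahead/backtracking behaviour (the DFS variant
            # backtracks explicitly).
            frontier = sorted(candidates, key=score, reverse=True)[:keep]
        return max(frontier, key=score, default=None)

In the paper each task supplies its own versions of these two prompts; for Game of 24, roughly, the propose prompt suggests possible next arithmetic steps and the value prompt rates a partial state as sure/likely/impossible to reach 24.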

The authors propose three new tasks to test ToT: Game of 24, Creative Writing, and Mini Crosswords.

ToT significantly outperforms standard input-output and chain-of-thought prompting baselines on all 3 tasks. This shows the benefits of ToT's ability to explore, evaluate and search through different reasoning paths.

The key benefits of ToT are:

- Generality: It generalizes existing prompting methods

- Modularity: The components can be varied independently

- Adaptability: It can accommodate different problem properties and resource constraints

- Convenience: It only requires a pre-trained language model

The ToT framework shows how deliberate search through a tree of "thoughts" can help large language models solve problems that require planning and search, beyond their standard left-to-right generation process.

Code is to be released (but isn't yet) at: https://github.com/ysymyth/tree-of-thought-llm

// via: https://kagi.com/summarizer/index.html?url=https%3A%2F%2Farx...

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts:

https://github.com/ysymyth/tree-of-thought-llm