Original Paper: https://arxiv.org/abs/2305.10601

By: Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, Karthik Narasimhan

Abstract:

Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts: this https URL


Summary Notes


Enhancing Language Models with the Tree of Thoughts Framework

The quest to develop artificial intelligence that not only understands language but can also solve complex problems is at the forefront of AI research. Language models like GPT-4 have shown impressive text generation abilities, but their problem-solving skills are still evolving.

The Tree of Thoughts (ToT) framework marks a significant step toward improving these abilities by moving beyond single-pass, left-to-right decoding to systematic search over intermediate reasoning steps. This post explores the ToT framework and its potential for problem-solving in AI, particularly for AI engineers in enterprise settings.

Understanding the Tree of Thoughts Framework

The ToT framework is designed for tasks that require advanced reasoning and decision-making. It generalizes the "Chain of Thought" method: instead of committing to a single reasoning chain, the model explores multiple candidate reasoning paths and evaluates its own intermediate steps to decide which paths to pursue.

With a tree-like structure where each node represents a "thought" or a coherent text unit, the framework aids in breaking down complex problems into manageable steps. It employs exploration techniques such as breadth-first search (BFS) or depth-first search (DFS), proving effective in various tasks from mathematical problems to creative writing.
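The search loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `propose` and `evaluate` are hypothetical stand-ins for the paper's LM-driven thought generator and thought evaluator, and the toy problem (building a digit string whose sum reaches a target) merely stands in for a real task like Game of 24.

```python
from typing import Callable, List

def tot_bfs(root: str,
            propose: Callable[[str], List[str]],
            evaluate: Callable[[str], float],
            breadth: int = 5,
            depth: int = 3) -> List[str]:
    """Breadth-first Tree-of-Thoughts search: each node is a partial
    "thought"; at every level, expand all frontier thoughts, score the
    children, and keep only the top `breadth` candidates."""
    frontier = [root]
    for _ in range(depth):
        candidates = [child for t in frontier for child in propose(t)]
        if not candidates:
            break
        candidates.sort(key=evaluate, reverse=True)
        frontier = candidates[:breadth]  # prune to the best thoughts
    return frontier

# Toy stand-ins for the LM calls: thoughts are digit strings, and a
# thought scores higher the closer its digit sum is to 9.
propose = lambda t: [t + d for d in "123"]       # three children per node
evaluate = lambda t: -abs(9 - sum(map(int, t)))  # 0 is a perfect score

best = tot_bfs("", propose, evaluate, breadth=2, depth=4)
print(best[0])  # prints "3321", whose digits sum to exactly 9
```

In the paper's setup, `propose` would prompt the LM to generate candidate next thoughts and `evaluate` would prompt it to rate each candidate (e.g., "sure/maybe/impossible" for Game of 24); a DFS variant instead recurses into the best child and backtracks when a thought is judged a dead end.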

Key Features and Achievements