Evolving Deeper LLM Thinking


Abstract

We explore scaling LLM inference-time compute for massive exploration and iterative refinement. We propose Mind Evolution -- an evolutionary thinking process that leverages an LLM to generate, cross over, refine, and select solutions in a genetic algorithm framework. We show that such an approach can be strikingly effective in solving complex problems. On natural language planning, it solves over 99% of the TravelPlanner and Natural Plan tasks within fixed maximum compute budgets, significantly surpassing all baselines and previous state-of-the-art results. Like most evolutionary methods, Mind Evolution is easy to scale through parallelization, and we demonstrate that performance improves consistently with increased inference-time compute.
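The generate/cross-over/refine/select loop described above can be sketched as a standard genetic algorithm in which an LLM would play the role of each operator. The sketch below is an illustration only, not the paper's implementation: the `llm_*` functions are hypothetical stand-ins that here manipulate toy solutions (lists of digits, fitness = their sum) so the loop is runnable end to end.

```python
import random

def llm_generate():
    # Stand-in for LLM-proposed candidate solutions.
    return [random.randint(0, 9) for _ in range(5)]

def llm_crossover(a, b):
    # Stand-in for an LLM combining two parent solutions.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def llm_refine(sol):
    # Stand-in for LLM-driven iterative refinement: nudge the weakest element up.
    i = min(range(len(sol)), key=lambda j: sol[j])
    out = list(sol)
    out[i] = min(out[i] + 1, 9)
    return out

def fitness(sol):
    return sum(sol)

def mind_evolution(pop_size=8, generations=20, seed=0):
    random.seed(seed)
    population = [llm_generate() for _ in range(pop_size)]
    for _ in range(generations):
        # Select: keep the fitter half of the population as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Cross over and refine to produce the next half.
        children = [
            llm_refine(llm_crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        population = parents + children
    return max(population, key=fitness)

best = mind_evolution()
print(best, fitness(best))
```

Because each candidate in a generation can be evaluated and refined independently, a loop like this parallelizes naturally, which is what makes scaling inference-time compute straightforward.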

Authors

Kuang-Huei Lee*†, Ian Fischer*, Yueh-Hua Wu, Dave Marwood, Shumeet Baluja, Dale Schuurmans, Xinyun Chen (*First author contribution, †senior author contribution)


Venue

arXiv