LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning

University of California, Los Angeles
2024 EMNLP Findings

We propose LLM-A* 🚀, a novel path planning algorithm that combines the strengths of large language models (LLMs) and A* search. By leveraging the global reasoning capabilities of LLMs to guide the search process, LLM-A* significantly reduces the number of visited states and improves efficiency.


Abstract

Path planning is a fundamental scientific problem in robotics and autonomous navigation, requiring the derivation of efficient routes from starting to destination points while avoiding obstacles. Traditional algorithms like A* and its variants are capable of ensuring path validity but suffer from significant computational and memory inefficiencies as the state space grows. Conversely, large language models (LLMs) excel in broader environmental analysis through contextual understanding, providing global insights into environments. However, they fall short in detailed spatial and temporal reasoning, often leading to invalid or inefficient routes. In this work, we propose LLM-A*, a new LLM-based route planning method that synergistically combines the precise pathfinding capabilities of A* with the global reasoning capability of LLMs. This hybrid approach aims to enhance pathfinding efficiency in terms of time and space complexity while maintaining the integrity of path validity, especially in large-scale scenarios. By integrating the strengths of both methodologies, LLM-A* addresses the computational and memory limitations of conventional algorithms without compromising the validity required for effective pathfinding.

LLM-A* Algorithm Pseudocode


A Comparison Between LLM-A* and A* in Computation and Memory Efficiency During Pathfinding


LLM-A* leverages target states generated by large language models (LLMs) as waypoints to guide the search process, significantly reducing the number of visited states. This leads to fewer operations and lower storage usage compared to A*. In the example above, LLM-A* identifies the optimal path with only 140 operations, less than one fifth of the 859 operations required by A*, along with a corresponding reduction in storage. LLM-A* dynamically adjusts heuristic values derived from LLM-generated waypoints, in addition to the standard heuristics from A*. This dynamic adjustment allows LLM-A* to steer the search toward areas the LLM deems more favorable at different stages of the search. During the search, each time the target state changes, heuristic values for all previously reached states are recalculated. This recalculation helps LLM-A* guide the search efficiently, continually improving pathfinding efficiency.
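The sketch below illustrates this waypoint-guided search on a 2D grid. It is a minimal, illustrative reconstruction rather than the authors' reference implementation: the waypoint list (`waypoints`), the exact heuristic combination, the switching rule, and the choice to re-evaluate only frontier states are assumptions based on the description above, and the LLM call that produces the waypoints is omitted.

```python
import heapq

def euclidean(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def reconstruct(came_from, state):
    path = [state]
    while state in came_from:
        state = came_from[state]
        path.append(state)
    return path[::-1]

def llm_a_star(start, goal, waypoints, neighbors):
    """Waypoint-guided A* sketch: the heuristic is biased toward the current
    LLM-suggested waypoint; once that waypoint is reached, the search
    re-targets the next one and re-evaluates the frontier's heuristic values."""
    targets = list(waypoints) + [goal]  # visit LLM waypoints in order, then the goal
    t = 0                               # index of the current target state

    def h(state):
        # distance to the current target plus distance from that target to the goal
        return euclidean(state, targets[t]) + euclidean(targets[t], goal)

    g = {start: 0.0}
    came_from = {}
    frontier = [(h(start), start)]       # min-heap ordered by f = g + h
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            return reconstruct(came_from, current)
        if t < len(targets) - 1 and current == targets[t]:
            t += 1                       # the target state changes ...
            frontier = [(g[s] + h(s), s) for _, s in frontier]
            heapq.heapify(frontier)      # ... so heuristics of frontier states are recomputed
        for nxt, cost in neighbors(current):
            new_g = g[current] + cost
            if nxt not in g or new_g < g[nxt]:
                g[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(frontier, (new_g + h(nxt), nxt))
    return None                          # no valid path found
```

A simple `neighbors` function for a 4-connected grid, e.g. `lambda s: [((s[0] + dx, s[1] + dy), 1) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)) if (s[0] + dx, s[1] + dy) not in obstacles]`, completes the example.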

Performance of Path Planning

The A* algorithm serves as the baseline; an index value of 100 indicates performance equivalent to A* (a short sketch after the table shows how such an index is computed).
The methods are evaluated on 1,000 samples on maps at their original size of 50 × 30.

| Method | Operation Ratio | Storage Ratio | Relative Path Length |
| --- | --- | --- | --- |
| A* Algorithm | 100 | 100 | 100 |
| LLM-A* Algorithm w/ GPT-3.5 (Standard) | 57.39 | 74.96 | 102.44 |
| LLM-A* Algorithm w/ GPT-3.5 (CoT) | 69.50 | 83.65 | 102.54 |
| LLM-A* Algorithm w/ GPT-3.5 (RePE) | 85.47 | 96.53 | 102.41 |
| LLM-A* Algorithm w/ LLAMA3 (Standard) | 44.59 | 64.02 | 102.47 |
| LLM-A* Algorithm w/ LLAMA3 (CoT) | 47.60 | 66.27 | 102.46 |
| LLM-A* Algorithm w/ LLAMA3 (RePE) | 64.08 | 80.19 | 102.54 |
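As a concrete reading of these indices, the snippet below shows how a raw measurement is indexed against the A* baseline. It uses the single-map operation counts (859 for A*, 140 for LLM-A*) from the example earlier on this page; the table entries aggregate such measurements over the 1,000 evaluation samples, and the variable names here are illustrative only.

```python
def relative_index(value, astar_value):
    """Express a measurement as a percentage of the A* baseline (A* = 100)."""
    return 100.0 * value / astar_value

# single-map operation counts from the example above
astar_ops, llm_astar_ops = 859, 140
print(relative_index(llm_astar_ops, astar_ops))  # ~16.3, i.e. under one fifth of A*
```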

Scalability

The comparative analysis examines the computational and memory efficiency of A* and LLM-A* (incorporating LLAMA3 with few-shot prompting) across scaled environments ranging from 1× to 10× enlargement. A* exhibits exponential growth in both (a) operations and (b) storage as the environment scale increases linearly; in contrast, LLM-A* achieves near-linear scalability.
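The enlargement procedure is not spelled out on this page, so the helper below is only one plausible way to scale a grid environment by an integer factor k so that both planners can be rerun and their operation and storage counts recorded; `scale_environment` is a hypothetical name, not part of the released code.

```python
def scale_environment(obstacles, start, goal, k):
    """Hypothetical helper: enlarge a grid world by an integer factor k by
    expanding every obstacle cell into a k-by-k block and scaling the
    start and goal coordinates accordingly."""
    scaled_obstacles = {(x * k + dx, y * k + dy)
                        for (x, y) in obstacles
                        for dx in range(k)
                        for dy in range(k)}
    scaled_start = (start[0] * k, start[1] * k)
    scaled_goal = (goal[0] * k, goal[1] * k)
    return scaled_obstacles, scaled_start, scaled_goal
```

Running both planners on `scale_environment(obstacles, start, goal, k)` for k = 1, ..., 10 and recording the number of expanded states and the peak frontier size yields the kind of operation and storage curves summarized above.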


Paper

BibTeX

@article{meng2024llm,
  title={LLM-A*: Large Language Model Enhanced Incremental Heuristic Search on Path Planning},
  author={Meng, Silin and Wang, Yiwei and Yang, Cheng-Fu and Peng, Nanyun and Chang, Kai-Wei},
  journal={arXiv preprint arXiv:2407.02511},
  year={2024}
}