Large Language Models As Evolution Strategies

education, production, architectures, prompt-engineering
Large language models can implement evolutionary optimization algorithms without explicit task specification.
Authors

Robert Tjarko Lange, Yingtao Tian, Yujin Tang

Published

February 28, 2024

Summary:

  • The paper investigates whether large language models (LLMs) can implement evolutionary optimization algorithms.
  • A novel prompting strategy enables LLMs to propose improvements to the search distribution’s mean statistic for black-box optimization (a minimal sketch of this loop follows the summary).
  • Empirical results show that the resulting LLM-based evolution strategy, EvoLLM, outperforms baseline algorithms on synthetic BBOB functions and small neuroevolution tasks.

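For concreteness, here is a minimal sketch of the kind of ask/evaluate/tell loop the summary describes, where the LLM is asked to propose the next mean from a text rendering of past evaluations. The objective (`sphere`), the prompt wording, and the `query_llm` placeholder (which simply echoes the best point seen so far instead of calling a model) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def sphere(x):
    """Toy BBOB-style objective: minimize the squared norm."""
    return float(np.sum(x ** 2))

def build_prompt(history, dim):
    """Encode (solution, fitness) pairs, worst-to-best, as plain text."""
    ordered = sorted(history, key=lambda pair: -pair[1])   # best candidate last
    lines = [f"{np.round(x, 2).tolist()} -> fitness {f:.3f}" for x, f in ordered]
    lines.append(f"Propose a new {dim}-D mean that lowers the fitness:")
    return "\n".join(lines)

def query_llm(prompt, history):
    """Placeholder for the LLM call; here it simply returns the best point seen."""
    best_x, _ = min(history, key=lambda pair: pair[1])
    return np.array(best_x, dtype=float)

def evollm_step(history, sigma, dim, popsize, rng):
    mean = query_llm(build_prompt(history, dim), history)   # LLM proposes a new mean
    candidates = mean + sigma * rng.standard_normal((popsize, dim))
    history.extend((x, sphere(x)) for x in candidates)      # evaluate and record
    return history

# Usage: seed with random candidates, then iterate the LLM-driven mean update.
rng = np.random.default_rng(0)
dim, sigma = 2, 0.3
history = [(x, sphere(x)) for x in rng.standard_normal((4, dim))]
for _ in range(10):
    history = evollm_step(history, sigma, dim, popsize=4, rng=rng)
print("best fitness:", min(f for _, f in history))
```
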
Major Findings:

  1. LLMs can act as ‘plug-in’ in-context recombination operators for evolutionary optimization algorithms.
  2. EvoLLM robustly outperforms baseline algorithms such as random search and Gaussian Hill Climbing on synthetic BBOB functions and small neuroevolution tasks (a simple hill-climbing baseline is sketched after this list).
  3. The performance of EvoLLM is influenced by the model size, prompt strategy, and context construction.

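As a point of reference for the second finding, the following is a minimal Gaussian Hill Climbing baseline of the kind EvoLLM is compared against; the objective, step size, and population size are illustrative choices, not the paper's exact experimental settings.

```python
import numpy as np

def gaussian_hill_climbing(objective, dim, sigma=0.3, popsize=4,
                           iterations=50, seed=0):
    """Keep the best point found so far; sample new candidates around it."""
    rng = np.random.default_rng(seed)
    best_x = rng.standard_normal(dim)
    best_f = objective(best_x)
    for _ in range(iterations):
        candidates = best_x + sigma * rng.standard_normal((popsize, dim))
        fitnesses = np.array([objective(x) for x in candidates])
        if fitnesses.min() < best_f:            # greedy: accept only improvements
            best_f = float(fitnesses.min())
            best_x = candidates[fitnesses.argmin()]
    return best_x, best_f

# Example on a sphere objective (a stand-in for a BBOB test function).
x, f = gaussian_hill_climbing(lambda v: float(np.sum(v ** 2)), dim=2)
print("best fitness:", f)
```
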
Analysis and Critique:

  • The study provides valuable insights into the potential of LLMs for implementing evolutionary optimization algorithms.
  • The findings suggest that LLMs can be leveraged for autonomous optimization, but further research is needed to understand the impact of pretraining and fine-tuning protocols on EvoLLM’s performance.
  • The study highlights the importance of careful solution representation and context construction for LLM-based optimization (see the context-construction sketch after this list).
  • The potential ethical considerations of using LLMs for autonomous optimization are acknowledged, emphasizing the need to carefully monitor the degree of agency granted to such systems.

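To make the point about solution representation and context construction concrete, here is a hedged sketch of one plausible way to build the LLM's context: keep only the best evaluations, sort them worst-to-best, and round coordinates to a coarse precision. The specific text format, budget `k`, and precision are assumptions for illustration; EvoLLM's actual prompt format may differ.

```python
import numpy as np

def build_context(history, k=5, decimals=2):
    """history: list of (solution ndarray, fitness float) pairs; lower fitness is better."""
    top_k = sorted(history, key=lambda pair: pair[1])[:k]      # keep the k best evaluations
    worst_to_best = sorted(top_k, key=lambda pair: -pair[1])   # present the best point last
    lines = []
    for x, f in worst_to_best:
        coords = ", ".join(f"{v:.{decimals}f}" for v in x)
        lines.append(f"solution: [{coords}]  fitness: {f:.{decimals}f}")
    return "\n".join(lines)

# Example: eight random 2-D points evaluated on a sphere objective.
rng = np.random.default_rng(0)
history = [(x, float(np.sum(x ** 2))) for x in rng.standard_normal((8, 2))]
print(build_context(history))
```
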
Appendix

Model gpt-3.5-turbo-1106
Date Generated 2024-02-29
Abstract https://arxiv.org/abs/2402.18381v1
HTML https://browse.arxiv.org/html/2402.18381v1
Truncated False
Word Count 7331