arXiv:2509.08269v4 Announce Type: replace
Abstract: Large Language Models (LLMs) possess substantial reasoning capabilities and are increasingly applied to optimization tasks, particularly in synergy with evolutionary computation. However, while recent surveys have explored specific aspects of this domain, they lack an integrative perspective that connects problem modeling with solving workflows. To address this gap, we present a systematic review of recent developments and organize them within a structured framework. First, we classify existing research into two primary stages: LLMs for optimization modeling and LLMs for optimization solving. Second, we divide the latter into three paradigms based on the role of the LLM: stand-alone optimizers, low-level components embedded within algorithms, and high-level managers for algorithm selection and generation. Third, for each category, we analyze representative methods, distill technical challenges, and examine their interplay with traditional approaches. Finally, we review interdisciplinary applications across the natural sciences, engineering, and machine learning. Based on this analysis, we highlight key limitations and point toward future directions for developing self-evolving agentic ecosystems. An up-to-date collection of related literature is maintained at https://github.com/ishmael233/LLM4OPT.
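To make the "stand-alone optimizer" paradigm mentioned in the abstract concrete, here is a minimal sketch of an OPRO-style loop in which a language model is prompted with the best solutions found so far and asked to propose a new candidate. The function `query_llm` is a hypothetical stand-in and is mocked here so the example runs without external dependencies; in practice it would be a call to an actual model whose reply is parsed into a candidate solution.

```python
# Sketch (not from the paper): an LLM acting as a stand-alone optimizer.
# `query_llm` is a mocked placeholder for a real model call.
import random

def objective(x):
    # Toy objective: minimize (x - 3)^2 over real-valued x.
    return (x - 3.0) ** 2

def build_prompt(history):
    # Summarize the best candidates so far; a real prompt would ask the
    # model to propose a new solution that improves on them.
    top = sorted(history, key=lambda p: p[1])[:5]
    lines = [f"x={x:.3f} -> loss={f:.3f}" for x, f in top]
    return "Previous solutions (lower loss is better):\n" + "\n".join(lines)

def query_llm(prompt, history):
    # Mock LLM: perturb the current best solution. In practice this would
    # be a chat-completion call whose textual reply is parsed to a number.
    best_x = min(history, key=lambda p: p[1])[0]
    return best_x + random.gauss(0.0, 0.5)

def optimize(steps=30):
    x0 = random.uniform(-10, 10)
    history = [(x0, objective(x0))]
    for _ in range(steps):
        prompt = build_prompt(history)
        candidate = query_llm(prompt, history)
        history.append((candidate, objective(candidate)))
    return min(history, key=lambda p: p[1])

if __name__ == "__main__":
    best_x, best_loss = optimize()
    print(f"best x = {best_x:.3f}, loss = {best_loss:.3f}")
```

The low-level and high-level paradigms surveyed in the paper differ mainly in where this call sits: as an operator inside an evolutionary algorithm, or as a manager that selects or generates the algorithm itself.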