Stop Thinking, Just Do!

Sungsoo Kim's Blog

Agentic Reasoning for Large Language Models

5 February 2026


Agentic Reasoning for Large Language Models (Jan 2026)

Abstract

Summary:

This survey systematizes the paradigm shift of Large Language Models (LLMs) from static text generators to autonomous agents capable of planning, acting, and learning. The authors organize agentic reasoning into three layers: Foundational Reasoning (planning, tool use, search), Self-Evolving Reasoning (feedback, memory, adaptation), and Collective Multi-Agent Reasoning (collaboration and coordination). The paper distinguishes between in-context reasoning (inference-time orchestration) and post-training reasoning (optimization via RL/SFT). It extensively reviews applications across domains such as scientific discovery, robotics, coding, and healthcare, while identifying key future challenges including long-horizon interaction, world modeling, and governance.
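The in-context (inference-time) side of this taxonomy — a model that plans, invokes a tool, observes the result, and decides when to stop — can be sketched as a minimal plan–act–observe loop. This is a hypothetical illustration, not the survey's implementation: the `llm_stub` policy, the `calculator` tool, and the tuple-based protocol are all invented here for clarity; a real system would replace the stub with an actual model call.

```python
# Minimal sketch of an in-context agentic loop: the model plans, picks a
# tool, observes the result, and repeats until it emits a final answer.
# The LLM is stubbed out; in practice this would be a real model call.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a basic arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def llm_stub(history):
    """Stand-in policy: request one tool call, then finalize."""
    if not any(step[0] == "observe" for step in history):
        return ("act", "calculator", "6 * 7")
    observation = history[-1][1]
    return ("final", f"The answer is {observation}")

def run_agent(task: str, llm=llm_stub, max_steps: int = 5):
    """Orchestrate plan -> act -> observe until the policy finalizes."""
    history = [("task", task)]
    for _ in range(max_steps):
        decision = llm(history)
        if decision[0] == "final":
            return decision[1], history
        _, tool_name, tool_input = decision
        result = TOOLS[tool_name](tool_input)
        history.append(("observe", result))
    return "gave up", history

answer, trace = run_agent("What is 6 * 7?")
print(answer)  # → The answer is 42
```

The post-training variant discussed in the survey would instead optimize the policy itself (via RL or SFT) over such trajectories, rather than relying purely on inference-time orchestration.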

Key Topics:

  • Agentic Reasoning
  • Foundational Capabilities
  • Self-evolving Reasoning
  • Multi-agent Systems
  • In-context vs Post-training Reasoning
  • Agentic Memory
  • Tool Use and Planning
  • Embodied Agents
  • Scientific Discovery Agents