Agentic Reasoning for Large Language Models (Jan 2026)
Abstract
- Title: Agentic Reasoning for Large Language Models (Jan 2026)
- Link: http://arxiv.org/abs/2601.12538v1
- Date: January 2026
Summary:
This survey systematizes the paradigm shift of Large Language Models (LLMs) from static text generators to autonomous agents capable of planning, acting, and learning. The authors organize agentic reasoning into three layers: Foundational Reasoning (planning, tool use, search), Self-Evolving Reasoning (feedback, memory, adaptation), and Collective Multi-Agent Reasoning (collaboration and coordination). The paper distinguishes between in-context reasoning (inference-time orchestration) and post-training reasoning (optimization via reinforcement learning or supervised fine-tuning). It extensively reviews applications across domains such as scientific discovery, robotics, coding, and healthcare, and identifies key future challenges, including long-horizon interaction, world modeling, and governance.
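The foundational loop the survey describes (plan, act via tools, observe, store feedback in memory) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's method: the `Tool`, `Agent`, `plan`, and `run` names are hypothetical, and a stubbed hard-coded plan stands in for a real LLM call.

```python
# Minimal sketch of an in-context agentic loop: plan -> act (tool call) -> observe -> remember.
# All names (Tool, Agent, plan, run) are illustrative assumptions, not from the survey.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class Tool:
    name: str
    fn: Callable[[str], str]  # a tool maps a textual argument to an observation


@dataclass
class Agent:
    tools: Dict[str, Tool]
    # Episodic memory of (action, observation) pairs -- the raw material
    # that a self-evolving agent would later adapt from.
    memory: List[Tuple[str, str]] = field(default_factory=list)

    def plan(self, goal: str) -> List[Tuple[str, str]]:
        # A real agent would prompt an LLM here; we hard-code a two-step plan.
        return [("search", goal), ("summarize", goal)]

    def run(self, goal: str) -> str:
        for tool_name, arg in self.plan(goal):
            observation = self.tools[tool_name].fn(arg)
            self.memory.append((tool_name, observation))  # store feedback
        return self.memory[-1][1]  # final observation is the answer


# Stub tools standing in for real search and summarization backends.
tools = {
    "search": Tool("search", lambda q: f"results for '{q}'"),
    "summarize": Tool("summarize", lambda q: f"summary of '{q}'"),
}

agent = Agent(tools=tools)
answer = agent.run("agentic reasoning")
print(answer)  # -> summary of 'agentic reasoning'
```

The separation between `plan` (inference-time orchestration) and `memory` (persisted feedback) mirrors the survey's distinction between in-context reasoning and the experience that post-training methods would optimize over.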
Key Topics:
- Agentic Reasoning
- Foundational Capabilities
- Self-Evolving Reasoning
- Multi-agent Systems
- In-context vs Post-training Reasoning
- Agentic Memory
- Tool Use and Planning
- Embodied Agents
- Scientific Discovery Agents