Position: Agent Should Invoke External Tools ONLY When Epistemically Necessary
Researchers propose that AI agents should invoke external tools only when epistemically necessary—when internal reasoning cannot reliably complete a task. The Theory of Agent framework treats tool use as a decision under uncertainty rather than a simple action optimization problem, arguing that unnecessary delegation wastes resources and prevents development of internal reasoning capabilities.
This position paper addresses a fundamental design question in AI agent architecture that has direct implications for how autonomous systems will operate in production environments. As language models increasingly function as decision-making agents with access to external tools, APIs, and databases, the distinction between justified and unjustified tool invocation becomes critical for both efficiency and capability development. The authors challenge the prevailing paradigm where agents treat tools as equivalent to any other action, instead proposing an epistemic necessity standard: agents should only delegate when internal reasoning demonstrably cannot achieve reliable task completion.
The Theory of Agent framework reframes common failure patterns—overthinking and overacting—as manifestations of miscalibrated uncertainty management rather than fundamental reasoning defects. This shift has important consequences for agent training and evaluation. When agents are incentivized purely on task success, they learn to over-delegate to external tools as a risk mitigation strategy, similar to how humans might outsource decisions under pressure. The paper argues this approach creates a capability ceiling: agents that always defer to external tools never develop robust internal reasoning, making them brittle and dependent on perfect tool availability.
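The epistemic necessity standard can be illustrated as a simple decision rule under uncertainty. The sketch below is a minimal illustration, not the paper's actual formalism: the function name `should_invoke_tool`, the probability estimates, and the `reliability_target` threshold are all hypothetical choices made for this example.

```python
def should_invoke_tool(p_internal: float,
                       p_tool: float,
                       tool_cost: float,
                       reliability_target: float = 0.9) -> bool:
    """Treat tool use as a decision under uncertainty.

    p_internal: estimated probability that internal reasoning alone succeeds.
    p_tool: estimated probability of success if the external tool is invoked.
    tool_cost: cost of the call (latency, compute) on the same utility scale.

    Epistemic necessity: delegate only when internal reasoning cannot meet
    the reliability target AND the tool's expected gain covers its cost.
    """
    if p_internal >= reliability_target:
        return False  # internal reasoning is already reliable enough
    return p_tool - tool_cost > p_internal  # delegation must pay for itself


# A confident agent answers internally; an uncertain one delegates,
# but only when the expected improvement outweighs the tool's cost.
print(should_invoke_tool(0.95, 0.99, 0.02))  # False: no epistemic need
print(should_invoke_tool(0.40, 0.95, 0.05))  # True: 0.90 expected > 0.40
print(should_invoke_tool(0.85, 0.88, 0.10))  # False: gain doesn't cover cost
```

Note how the rule differs from plain action optimization: even when the tool is marginally better, the agent answers internally unless its own reliability genuinely falls short.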
For the AI development community, this work establishes a normative criterion that complements existing decision-theoretic models and addresses the practical economics of agent deployment. Systems that minimize unnecessary external calls reduce latency, computational cost, and failure points while developing more resilient internal capabilities. The framework suggests evaluation metrics should penalize gratuitous tool use, encouraging agents to build stronger internal models. This approach aligns with broader goals of creating increasingly intelligent systems rather than merely correct ones.
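One way such an evaluation metric could penalize gratuitous tool use is sketched below. This is an illustrative scoring function, not one proposed in the paper; the names `agent_score`, `tool_calls_needed`, and the `penalty` weight are assumptions for the example.

```python
def agent_score(task_succeeded: bool,
                tool_calls_made: int,
                tool_calls_needed: int,
                penalty: float = 0.05) -> float:
    """Reward task success, but deduct for each tool call beyond
    what the task actually required (gratuitous delegation)."""
    gratuitous = max(0, tool_calls_made - tool_calls_needed)
    return float(task_succeeded) - penalty * gratuitous


# Two agents solve the same task; the one that over-delegates scores lower.
print(agent_score(True, tool_calls_made=2, tool_calls_needed=2))  # 1.0
print(agent_score(True, tool_calls_made=6, tool_calls_needed=2))  # 0.8
```

Under pure task-success scoring both agents tie at 1.0; adding the penalty term makes the metric distinguish efficient internal reasoning from risk-averse over-delegation.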
- Agents should invoke external tools only when internal reasoning cannot reliably complete tasks without external interaction.
- Unnecessary tool delegation impedes development of internal reasoning capabilities and creates brittle, dependent systems.
- Common agent failures like overthinking and overacting stem from miscalibrated decisions under uncertainty, not reasoning deficits.
- Training and evaluation metrics should penalize gratuitous tool use to encourage robust internal capability development.
- The Theory of Agent framework provides a principled standard for tool use beyond simple task-success optimization.