AIBearish · arXiv CS AI · 7h ago · 7/10
🧠
The Reasoning Trap: How Enhancing LLM Reasoning Amplifies Tool Hallucination
Researchers demonstrate that enhancing LLM reasoning capabilities through reinforcement learning paradoxically increases tool hallucination, where models invoke non-existent tools or call inappropriate ones. The study reveals a fundamental trade-off: stronger reasoning correlates with higher tool-hallucination rates, suggesting that current AI agent development approaches may inherently trade reliability for capability.
🏢 OpenAI