
To Think or Not To Think, That is The Question for Large Reasoning Models in Theory of Mind Tasks

arXiv – CS AI | Nanxu Gong, Haotian Li, Sixun Dong, Jianxun Lian, Yanjie Fu, Xing Xie
🤖 AI Summary

A study of nine advanced Large Language Models finds that Large Reasoning Models (LRMs) do not consistently outperform non-reasoning models on Theory of Mind tasks, which assess social cognition abilities. Longer reasoning often hurt performance, and models relied on shortcuts rather than genuine deduction, suggesting that advances in formal reasoning do not transfer to social reasoning tasks.

Key Takeaways
  • Large Reasoning Models do not consistently outperform non-reasoning models on Theory of Mind benchmarks and sometimes perform worse.
  • Accuracy drops significantly as responses grow longer, with larger reasoning budgets actually hurting performance in social cognition tasks.
  • Models show reliance on option matching shortcuts rather than genuine deductive reasoning when solving Theory of Mind problems.
  • Moderate and adaptive reasoning approaches can benefit performance when reasoning length is properly constrained.
  • Advances in formal reasoning capabilities for math and coding do not fully transfer to social reasoning tasks like Theory of Mind.
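The second takeaway, accuracy dropping as responses grow longer, is the kind of trend one could probe by bucketing a model's answers by response length and computing per-bucket accuracy. A minimal sketch follows; the field names, bucket size, and toy data are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: bucket evaluation results by response length to
# surface a "longer reasoning hurts accuracy" trend. Data is synthetic.
from collections import defaultdict

def accuracy_by_length(results, bucket_size=200):
    """results: list of dicts with 'response_tokens' (int) and 'correct' (bool)."""
    buckets = defaultdict(lambda: [0, 0])  # bucket index -> [n_correct, n_total]
    for r in results:
        b = r["response_tokens"] // bucket_size
        buckets[b][0] += r["correct"]  # bool counts as 0/1
        buckets[b][1] += 1
    return {
        f"{b * bucket_size}-{(b + 1) * bucket_size - 1} tokens": n_correct / n_total
        for b, (n_correct, n_total) in sorted(buckets.items())
    }

# Toy data mirroring the reported pattern: short answers score higher.
toy = (
    [{"response_tokens": 150, "correct": True}] * 8
    + [{"response_tokens": 150, "correct": False}] * 2
    + [{"response_tokens": 650, "correct": True}] * 4
    + [{"response_tokens": 650, "correct": False}] * 6
)
print(accuracy_by_length(toy))
# → {'0-199 tokens': 0.8, '600-799 tokens': 0.4}
```

With real benchmark logs, the same grouping applied per model would show whether larger reasoning budgets correlate with lower Theory of Mind accuracy, as the study reports.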