
#theory-of-mind News & Analysis

11 articles tagged with #theory-of-mind. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Bullish · arXiv – CS AI · 4d ago · 7/10
🧠

Bayesian Social Deduction with Graph-Informed Language Models

Researchers introduce a hybrid framework combining probabilistic models with large language models to improve social reasoning in AI agents, achieving a 67% win rate against human players in the game Avalon, a breakthrough in AI's ability to infer beliefs and intentions from incomplete information.
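
For intuition, here is a minimal sketch of the kind of graph-informed Bayesian update the paper describes, with an LLM standing in as the likelihood model. The role set, the `llm_likelihood` stub, and its toy probability table are illustrative assumptions, not the authors' interface:

```python
# Minimal sketch: a Bayesian belief update over hidden roles in a social
# deduction game, with an LLM standing in as the likelihood model.
# ROLES and llm_likelihood are illustrative assumptions, not the paper's API.

ROLES = ["loyal", "traitor"]

def llm_likelihood(action: str, role: str) -> float:
    """Stub: in the paper's setting an LLM would score P(action | role).
    Here a fixed toy table makes the sketch runnable."""
    table = {("accuse", "loyal"): 0.6, ("accuse", "traitor"): 0.3,
             ("defend", "loyal"): 0.4, ("defend", "traitor"): 0.7}
    return table[(action, role)]

def update_belief(prior: dict, action: str) -> dict:
    """One Bayes step: posterior(role) ∝ P(action | role) * prior(role)."""
    unnorm = {r: llm_likelihood(action, r) * p for r, p in prior.items()}
    z = sum(unnorm.values())
    return {r: v / z for r, v in unnorm.items()}

belief = {r: 1 / len(ROLES) for r in ROLES}  # uniform prior over roles
for observed in ["accuse", "defend", "accuse"]:
    belief = update_belief(belief, observed)
print(belief)  # posterior over the player's hidden role
```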

AI · Bullish · arXiv – CS AI · Apr 7 · 7/10
🧠

Readable Minds: Emergent Theory-of-Mind-Like Behavior in LLM Poker Agents

Research published on arXiv demonstrates that large language models playing poker can develop sophisticated Theory of Mind capabilities when equipped with persistent memory, progressing to advanced levels of opponent modeling and strategic deception. The study found that memory is necessary and sufficient for this emergent behavior, while domain expertise enhances but does not gate ToM development.
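
A rough sketch of the "persistent memory" ingredient: per-opponent statistics carried across hands and summarized into text an LLM agent could consume. The fields and summary format below are assumptions for illustration, not the study's design:

```python
# Toy per-opponent memory carried across hands and rendered as prompt text.
# All fields and the summarise() format are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class OpponentMemory:
    hands_seen: int = 0
    showdown_bluffs: int = 0     # times a big bet was revealed as a bluff
    aggressive_actions: int = 0  # raises / re-raises observed

    def record_hand(self, raised: bool, bluffed: bool) -> None:
        self.hands_seen += 1
        self.aggressive_actions += int(raised)
        self.showdown_bluffs += int(bluffed)

    def summarise(self) -> str:
        """Compact text an LLM agent could receive in its prompt."""
        if self.hands_seen == 0:
            return "No history with this opponent."
        return (f"Over {self.hands_seen} hands: "
                f"aggression {self.aggressive_actions / self.hands_seen:.0%}, "
                f"showdown bluff rate {self.showdown_bluffs / self.hands_seen:.0%}.")

memory = OpponentMemory()
for raised, bluffed in [(True, True), (True, False), (False, False)]:
    memory.record_hand(raised, bluffed)
print(memory.summarise())
```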

🧠 GPT-4
AI · Bearish · arXiv – CS AI · Feb 27 · 7/10
🧠

GPT-4o Lacks Core Features of Theory of Mind

New research reveals that GPT-4o and other large language models lack true Theory of Mind capabilities, despite appearing socially proficient. While LLMs can approximate human judgments in simple social tasks, they fail on logically equivalent variants of those tasks and show inconsistent mental-state reasoning.

AI · Bullish · arXiv – CS AI · 3d ago · 6/10
🧠

CoSToM: Causal-oriented Steering for Intrinsic Theory-of-Mind Alignment in Large Language Models

Researchers introduce CoSToM, a framework that uses causal tracing and activation steering to improve Theory of Mind alignment in large language models. The work addresses a critical gap between LLMs' internal knowledge and external behavior, demonstrating that targeted interventions in specific neural layers can enhance social reasoning capabilities and dialogue quality.
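
For readers unfamiliar with activation steering, here is a minimal sketch of the general technique CoSToM builds on: adding a fixed steering vector to one layer's hidden states at inference time. The toy model, layer choice, and random vector are placeholders; the paper's causal-tracing procedure for locating the right layer and direction is not shown:

```python
# Minimal activation-steering sketch: shift one layer's activations along a
# fixed direction via a forward hook. The toy model and random vector are
# placeholders; in practice the layer/direction come from causal tracing.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))
steer = torch.randn(16) * 0.1  # stand-in for a ToM-aligned direction

def steering_hook(module, inputs, output):
    # Returning a tensor from a forward hook replaces the layer's output.
    return output + steer

handle = model[0].register_forward_hook(steering_hook)
x = torch.randn(1, 16)
steered = model(x)
handle.remove()
unsteered = model(x)
print((steered - unsteered).abs().mean())  # nonzero: the intervention took effect
```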

AI · Neutral · arXiv – CS AI · 3d ago · 6/10
🧠

Playing Along: Learning a Double-Agent Defender for Belief Steering via Theory of Mind

Researchers introduce ToM-SB, a novel challenge where AI defenders must use theory-of-mind reasoning to deceive attackers trying to extract sensitive information. Through reinforcement learning, trained models outperform frontier LLMs like GPT-4 and Gemini-Pro, revealing an emergent bidirectional relationship between belief modeling and deception capabilities.

🧠 GPT-5
AI · Neutral · arXiv – CS AI · Mar 17 · 6/10
🧠

Dynamic Theory of Mind as a Temporal Memory Problem: Evidence from Large Language Models

Research reveals that Large Language Models struggle with dynamic Theory of Mind tasks, particularly tracking how others' beliefs change over time. While LLMs can infer current beliefs effectively, they fail to maintain and retrieve prior belief states after updates occur, showing patterns consistent with human cognitive biases.
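
The temporal-memory framing can be made concrete with a toy sketch: answering a dynamic false-belief question requires retrieving a prior, now-stale belief state rather than the latest world state. The event-log format below is an illustrative assumption:

```python
# Sketch of the temporal-memory framing: an agent's belief is the last
# state they *witnessed*, which can go stale after later updates.
# The event-log format is an illustrative assumption.

events = [
    (0, "Sally", "ball", "basket"),  # Sally sees the ball in the basket
    (1, "Anne",  "ball", "box"),     # Anne moves it while Sally is away
]

def believed_location(agent: str, obj: str, at_time: int) -> str:
    """An agent believes the last location they personally witnessed."""
    seen = [loc for t, who, o, loc in events
            if o == obj and t <= at_time and who == agent]
    return seen[-1] if seen else "unknown"

# Sally never witnessed the move, so her belief stays stale:
print(believed_location("Sally", "ball", at_time=1))  # basket (false belief)
print(believed_location("Anne", "ball", at_time=1))   # box
```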

AI · Bullish · arXiv – CS AI · Mar 3 · 7/10
🧠

MetaMind: General and Cognitive World Models in Multi-Agent Systems by Meta-Theory of Mind

Researchers introduce MetaMind, a cognitive world model for multi-agent systems that enables agents to understand and predict other agents' behaviors without centralized supervision or communication. The system uses a meta-theory-of-mind framework that allows agents to reason about the goals and beliefs of others through self-reflective learning and analogical reasoning.

AI · Bullish · arXiv – CS AI · Mar 3 · 6/10
🧠

The Observer-Situation Lattice: A Unified Formal Basis for Perspective-Aware Cognition

Researchers introduce the Observer-Situation Lattice (OSL), a unified mathematical framework for autonomous agents to reason about multiple perspectives in complex environments. The system addresses limitations in current AI approaches by providing a single coherent structure for belief management and Theory of Mind reasoning.
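
Absent the paper's formal definitions, here is a loose guess at the flavor of such a structure: belief sets indexed by (observer, situation) pairs and ordered by information inclusion. Everything below is illustrative, not OSL's actual construction:

```python
# Loose sketch of a perspective-indexed belief structure: contexts keyed by
# (observer, situation) pairs, partially ordered by information inclusion.
# A guess at the flavor of the formalism, not the paper's definitions.

contexts = {
    ("alice", "room_A"):   frozenset({"door_open"}),
    ("alice", "anywhere"): frozenset(),
    ("bob",   "room_A"):   frozenset({"door_open", "key_on_table"}),
}

def refines(a, b) -> bool:
    """Context a refines context b if a carries at least b's information."""
    return contexts[b] <= contexts[a]

print(refines(("bob", "room_A"), ("alice", "room_A")))     # True: superset
print(refines(("alice", "anywhere"), ("alice", "room_A"))) # False: less info
```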

AI · Neutral · arXiv – CS AI · Mar 3 · 7/10
🧠

Evaluating Theory of Mind and Internal Beliefs in LLM-Based Multi-Agent Systems

Researchers introduce a novel multi-agent AI architecture that integrates Theory of Mind, internal beliefs, and symbolic solvers to improve collaborative decision-making in LLM-based systems. The study evaluates this architecture across different language models in resource allocation scenarios, revealing complex interactions between LLM capabilities and cognitive mechanisms.
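
One way to picture the hybrid architecture: an LLM-side estimate of other agents' needs or beliefs feeds a deterministic symbolic step that makes the actual allocation. The function names and the toy proportional "solver" below are placeholders, not the paper's design:

```python
# Sketch of the hybrid idea: a belief/needs estimate (in the paper, from an
# LLM) feeds a symbolic allocation step. Names and the proportional "solver"
# are illustrative placeholders.

def estimate_needs(agent: str) -> int:
    """Stand-in for an LLM inferring another agent's needs or beliefs."""
    return {"a1": 3, "a2": 1, "a3": 2}[agent]

def allocate(budget: int, agents: list[str]) -> dict:
    """Symbolic step: deterministic proportional split over the estimates."""
    needs = {a: estimate_needs(a) for a in agents}
    total = sum(needs.values())
    return {a: round(budget * n / total) for a, n in needs.items()}

print(allocate(6, ["a1", "a2", "a3"]))  # {'a1': 3, 'a2': 1, 'a3': 2}
```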

AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠

To Think or Not To Think, That is The Question for Large Reasoning Models in Theory of Mind Tasks

A research study of nine advanced Large Language Models reveals that Large Reasoning Models (LRMs) do not consistently outperform non-reasoning models on Theory of Mind tasks, which assess social cognition abilities. The study found that longer reasoning often hurts performance and models rely on shortcuts rather than genuine deduction, suggesting formal reasoning advances don't transfer to social reasoning tasks.