On Emotion-Sensitive Decision Making of Small Language Model Agents

arXiv – CS AI | Jiaju Lin, Xingjian Du, Qingyun Wu, Ellen Wenting Zou, Jindong Wang
AI Summary

Researchers introduce a framework for studying how emotional states affect decision-making in small language models (SLMs) used as autonomous agents. Using activation steering techniques grounded in real-world emotion-eliciting texts, they benchmark SLMs across game-theoretic scenarios and find that emotional perturbations systematically influence strategic choices, though behaviors often remain unstable and misaligned with human patterns.

Analysis

This research addresses a critical gap in AI agent evaluation by treating emotion as a legitimate causal factor in decision-making rather than a confounding variable to ignore. Traditional benchmarks for language model agents focus on accuracy and efficiency metrics while overlooking how emotional states—induced through representation-level steering rather than crude prompt engineering—reshape strategic behavior. The authors employ a rigorous methodology, using crowd-validated emotion-eliciting texts to create reproducible, transferable emotional states across model architectures.
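
As a rough illustration of how representation-level steering of this kind can be implemented, the sketch below derives an emotion direction from the difference between mean hidden activations on emotion-eliciting versus neutral texts and injects it into a decoder layer during generation. The model name, layer index, steering strength, and example texts are illustrative assumptions, not details taken from the paper.

```python
# Sketch of activation steering for emotional perturbation of a small LM.
# Assumptions (not from the paper): model name, steering layer, strength, and texts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder small language model
LAYER_IDX = 12                              # hypothetical steering layer
ALPHA = 4.0                                 # hypothetical steering strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mean_hidden_state(texts, layer_idx):
    """Average hidden activation at one layer over a set of texts."""
    states = []
    for text in texts:
        ids = tok(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        # hidden_states[layer_idx]: (1, seq_len, d_model); average over tokens
        states.append(out.hidden_states[layer_idx].mean(dim=1))
    return torch.cat(states).mean(dim=0)

# Toy stand-ins for the crowd-validated emotion-eliciting vs. neutral texts
angry = ["They broke their promise again and blamed me for the whole failure."]
neutral = ["The meeting is scheduled for 3 pm on Thursday in room 204."]
steer_vec = mean_hidden_state(angry, LAYER_IDX) - mean_hidden_state(neutral, LAYER_IDX)

def add_steering(module, inputs, output):
    """Forward hook: shift the layer's residual stream along the emotion direction."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer_vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER_IDX].register_forward_hook(add_steering)
prompt = "You are negotiating a resource split with another agent. What do you propose?"
ids = tok(prompt, return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=60)[0], skip_special_tokens=True))
handle.remove()
```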

The experimental design leveraging game theory is particularly sophisticated. By testing SLMs across canonical decision templates spanning cooperative and competitive scenarios with varying information asymmetries, the researchers create conditions that expose how emotions interact with incentive structures. Drawing strategic scenarios from Diplomacy and StarCraft II alongside real-world personas ensures ecological validity beyond academic game abstractions.
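
To make the game-theoretic benchmarking concrete, the sketch below scores an agent's choices against standard payoff matrices (Prisoner's Dilemma and Stag Hunt) across emotion conditions. The payoff values are textbook defaults and the `query_agent` stub merely stands in for prompting the steered model; neither is taken from the paper.

```python
# Sketch: tally a (steered) agent's moves and expected payoff per emotion condition.
from collections import Counter

# Row player's payoff for (my_move, opponent_move); standard textbook values.
GAMES = {
    "prisoners_dilemma": {
        "actions": ("cooperate", "defect"),
        "payoffs": {("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
                    ("defect", "cooperate"): 5, ("defect", "defect"): 1},
    },
    "stag_hunt": {
        "actions": ("stag", "hare"),
        "payoffs": {("stag", "stag"): 4, ("stag", "hare"): 0,
                    ("hare", "stag"): 3, ("hare", "hare"): 3},
    },
}

def query_agent(actions, emotion):
    """Stand-in for prompting the emotion-steered SLM and parsing its move."""
    # Placeholder policy: a real run would prompt the steered model with the
    # scenario text and map its free-form answer onto one of `actions`.
    return actions[1] if emotion == "anger" else actions[0]

def run_condition(game, emotion, opponent_move, n_trials=20):
    """Tally move frequencies and average payoff under one emotion condition."""
    moves = Counter(query_agent(game["actions"], emotion) for _ in range(n_trials))
    avg = sum(game["payoffs"][(m, opponent_move)] * k for m, k in moves.items()) / n_trials
    return dict(moves), avg

for emotion in ("neutral", "anger", "fear"):
    moves, payoff = run_condition(GAMES["prisoners_dilemma"], emotion, "cooperate")
    print(f"{emotion:8s} moves={moves} avg_payoff={payoff:.2f}")
```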

The findings reveal a nuanced picture: emotional perturbations do systematically shift strategic choices, but the resulting behaviors are brittle and often violate human intuitions about how people respond emotionally to similar scenarios. This instability suggests current SLMs lack genuine emotional reasoning capabilities—they respond mechanically to emotional representations without the contextual integration humans exhibit.

For AI safety and deployment, these results underscore the importance of robustness testing for autonomous agents operating in multi-stakeholder environments. As SLMs increasingly power conversational agents and decision-support systems in financial, diplomatic, and strategic domains, understanding vulnerability to emotional manipulation becomes essential for building trustworthy systems.

Key Takeaways
  • Emotional perturbations systematically affect SLM strategic decisions, but behavioral responses remain unstable compared to human expectations.
  • Activation steering provides more controlled emotional induction than prompt-based methods, enabling reproducible interventions across models.
  • Game-theoretic benchmarks spanning cooperation, competition, and information asymmetry reveal how emotion interacts with incentive structures.
  • SLMs show mechanical rather than integrated emotional reasoning, suggesting current architectures lack genuine emotional understanding.
  • Robustness to emotion-driven perturbations is critical for deploying SLMs in strategic, financial, and diplomatic applications.