
Social Meaning in Large Language Models: Structure, Magnitude, and Pragmatic Prompting

arXiv – CS AI | Roland Mühlenbernd
🤖 AI Summary

The study finds that large language models can reproduce the qualitative structure of human social reasoning but struggle to calibrate the quantitative magnitude of their judgments. Pragmatic prompting strategies that ask the model to consider the speaker's knowledge and motives improve this calibration, though fine-grained quantitative accuracy remains only partially resolved.

Key Takeaways
  • LLMs reliably capture the qualitative patterns of human social inferences but vary significantly in magnitude calibration.
  • Two new metrics (ESR and CDS) were introduced to distinguish structural fidelity from magnitude calibration in AI systems.
  • Prompting models to consider speaker knowledge and motives most consistently reduces magnitude deviation in social reasoning.
  • Alternative-awareness prompting tends to amplify exaggeration rather than improve calibration.
  • Combining multiple pragmatic prompting components is the only intervention that improves all calibration metrics across models.
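The structure-versus-magnitude distinction in the takeaways above can be illustrated with generic stand-in measures. The paper's ESR and CDS metrics are not defined in this summary, so the sketch below uses two common proxies as assumptions: Spearman rank correlation for structural fidelity (do model ratings order scenarios the same way humans do?) and mean absolute deviation for magnitude calibration (are the rating values the right size?).

```python
# Hypothetical sketch, NOT the paper's ESR/CDS definitions:
# separating structural fidelity from magnitude calibration when
# comparing model ratings against human ratings on the same items.

def rank(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Rank correlation: 1.0 means the ordering is fully preserved."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def mean_abs_dev(xs, ys):
    """Average absolute gap between model and human ratings."""
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Toy data: a model that preserves the human ordering but exaggerates
# every rating -- the failure mode the summary describes.
human = [2.0, 3.5, 5.0, 6.0, 7.5]
model = [3.0, 5.5, 7.0, 8.5, 9.5]

print(spearman(human, model))      # ordering fully preserved (1.0)
print(mean_abs_dev(human, model))  # but magnitudes systematically off
```

On this toy data the rank correlation is perfect while the mean deviation is large, which is exactly the dissociation the two metrics are meant to expose: a model can "get the structure right" and still be badly calibrated.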