🧠 AI | 🔴 Bearish | Importance: 6/10

Overstating Attitudes, Ignoring Networks: LLM Biases in Simulating Misinformation Susceptibility

arXiv – CS AI | Eun Cheol Choi, Lindsay E. Young, Emilio Ferrara

🤖 AI Summary

Researchers found that large language models fail to accurately simulate human susceptibility to misinformation, consistently overstating how strongly attitudes drive belief and sharing while ignoring social network effects. The study reveals systematic biases in how LLMs represent misinformation concepts, suggesting they are better suited to identifying where AI diverges from human judgment than to replacing human survey responses.

Analysis

This research exposes a critical limitation in using large language models as proxies for human decision-making in social science research. The study tested whether LLMs prompted with demographic, attitudinal, behavioral, and network data could reproduce human patterns of misinformation susceptibility across three surveys. While the models captured broad distributional patterns with modest correlation to actual responses, they systematically distorted the relationship between belief and sharing behavior.
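
As a rough illustration of this setup, the sketch below shows how a persona-conditioned survey prompt might be assembled from the kinds of features the study describes. The attribute names, prompt wording, and `query_llm` helper are assumptions for illustration, not the authors' actual protocol.

```python
# Hypothetical sketch: the attribute names, prompt wording, and query_llm
# helper are illustrative assumptions, not the paper's actual protocol.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Respondent:
    age: int
    education: str
    trust_in_media: int       # attitudinal feature (1-5 Likert)
    shares_news_weekly: int   # behavioral feature
    discussion_partners: int  # personal-network feature

def build_prompt(r: Respondent, headline: str) -> str:
    """Render one respondent's profile into a persona prompt."""
    return (
        f"You are a {r.age}-year-old with {r.education} education. "
        f"Your trust in mainstream media is {r.trust_in_media}/5, "
        f"you share news about {r.shares_news_weekly} times a week, and "
        f"you regularly discuss news with {r.discussion_partners} people.\n"
        f"Headline: {headline}\n"
        "On a 1-5 scale, how accurate is this headline, and would you "
        "share it? Answer as 'accuracy,share' (e.g. '2,no')."
    )

def simulate(respondents: list[Respondent], headline: str,
             query_llm: Callable[[str], str]) -> list[str]:
    """query_llm is any callable that sends a prompt to an LLM and returns text."""
    return [query_llm(build_prompt(r, headline)) for r in respondents]
```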

The findings highlight a fundamental mismatch between how LLMs and humans process information about misinformation. Models fit to LLM-generated responses placed disproportionate emphasis on attitudes and behavioral features while largely dismissing personal network characteristics, a reversal of the pattern in the human data. This bias stems from how misinformation-related concepts are represented in LLM training data, reflecting broader patterns in public discourse that emphasize individual susceptibility over social influence.

For researchers and developers building AI-powered social science tools, this research signals the need for caution when substituting model simulations for empirical human data. Organizations relying on LLMs for survey research, opinion modeling, or misinformation risk assessment may draw incorrect conclusions about causal mechanisms. The work establishes that LLMs can serve diagnostic purposes—identifying where AI predictions diverge from reality—but cannot reliably replace human judgment in high-stakes social science applications.
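
A minimal sketch of that diagnostic use follows, assuming paired arrays of human and LLM-simulated belief scores plus a shared feature matrix; the function name and the standardized least-squares comparison are illustrative choices, not the paper's method.

```python
# Illustrative divergence diagnostic; array shapes and feature names are
# assumptions, not data from the study. Requires numpy and scipy.
import numpy as np
from scipy.stats import pearsonr

def divergence_report(human: np.ndarray, simulated: np.ndarray,
                      features: np.ndarray, names: list[str]) -> None:
    """Compare how human vs. LLM-simulated responses load on each feature."""
    r, _ = pearsonr(human, simulated)
    print(f"human vs. simulated correlation: r = {r:.2f}")
    # Fit the same standardized linear model to both response vectors and
    # compare which features each leans on (e.g. attitudes vs. network size).
    X = (features - features.mean(axis=0)) / features.std(axis=0)
    for label, y in (("human", human), ("simulated", simulated)):
        coef, *_ = np.linalg.lstsq(X, y - y.mean(), rcond=None)
        weights = ", ".join(f"{n}={c:+.2f}" for n, c in zip(names, coef))
        print(f"{label:>9} weights: {weights}")
```

Large gaps between the two weight vectors, such as a near-zero network coefficient on the simulated side, would flag exactly the kind of AI-human divergence the authors describe.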

Future development should focus either on retraining LLMs with corrected representations of social network effects or on combining model outputs with empirical validation, rather than treating simulation results as standalone evidence.

Key Takeaways
  • LLMs overstate attitudinal factors and underweight social network effects compared to human survey responses on misinformation.
  • Model-generated responses show only modest correlation with actual human behavior despite capturing broad distributional patterns.
  • Systematic biases in LLM training data distort how misinformation susceptibility is represented and predicted.
  • LLM survey simulations function better as diagnostic tools for identifying AI-human divergence than as survey replacements.
  • Researchers using AI-generated responses in social science research risk drawing incorrect causal conclusions about belief formation.
Read Original → via arXiv – CS AI