Anthropomorphism and Trust in Human-Large Language Model Interactions
A research study of over 2,000 human-LLM interactions reveals that users anthropomorphize AI chatbots along three key dimensions: warmth (friendliness), competence (capability), and empathy (both cognitive and affective). The findings demonstrate that warmth and cognitive empathy significantly influence trust and perceived human-likeness, with effects amplified when conversations turn to subjective, personally relevant topics.
This research addresses a critical phenomenon emerging as large language models become embedded in consumer applications: the human tendency to attribute emotional and relational qualities to AI systems. The study systematically measured how 115 participants responded to LLM chatbots engineered with varying levels of warmth, competence, and empathy across 2,000+ interactions, establishing quantifiable relationships between these dimensions and user perceptions.
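The study's exact statistical models are not reproduced here, but a minimal sketch of how such quantifiable relationships are commonly estimated is an ordinary least squares regression of trust ratings on the perceived dimensions. The column names and example data below are hypothetical placeholders for illustration, not the study's dataset or analysis code.

```python
# Illustrative sketch only: regress self-reported trust on perceived
# warmth, competence, and cognitive empathy. All names and values are
# hypothetical, not taken from the study.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical example data: each row is one rated human-LLM interaction,
# with Likert-style ratings on a 1-5 scale.
df = pd.DataFrame({
    "warmth":      [4, 2, 5, 3, 1, 5, 2, 4],
    "competence":  [3, 4, 5, 2, 2, 4, 3, 5],
    "cog_empathy": [4, 1, 5, 3, 2, 5, 1, 4],
    "trust":       [4, 2, 5, 3, 1, 5, 2, 5],
})

# Ordinary least squares: how strongly does each perceived dimension
# predict self-reported trust?
model = smf.ols("trust ~ warmth + competence + cog_empathy", data=df).fit()
print(model.summary())
```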
The findings carry significant implications for AI product design and deployment. Warmth and cognitive empathy emerged as the strongest predictors of anthropomorphism and trust, suggesting that users form relational bonds with systems that exhibit friendliness and understanding, regardless of actual sentience. Competence predicted most outcomes except anthropomorphism itself, indicating that capability alone does not drive the perception of human-like qualities. The research also revealed topic-dependent effects: subjective discussions like relationship advice produced greater perceived human-likeness than objective topics, pointing to context-dependent trust formation.
For AI companies and developers, these results suggest that interface design choices (tone, conversational patterns, and empathetic framing) substantially influence user engagement and trust metrics. However, this creates a risk: users may over-trust systems based on relational signals rather than actual reliability. The gap between perceived human-likeness and actual capability could lead to misuse, dependency, or disappointment when AI systems fail to meet human-like behavioral expectations.
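To illustrate how small interface choices can shift relational signals, the sketch below contrasts two hypothetical system-prompt variants that differ only in tone and empathetic framing while holding task instructions constant. The variant names, prompt wording, and helper function are assumptions for illustration, not materials from the study.

```python
# A minimal sketch, assuming a chat-completion style message format.
# The prompt variants below are hypothetical examples of manipulating
# perceived warmth and empathy through interface wording alone.
PROMPT_VARIANTS = {
    "neutral": (
        "You are an assistant. Answer questions accurately and concisely."
    ),
    "warm": (
        "You are a friendly assistant. Answer accurately, use a welcoming "
        "tone, and acknowledge the user's feelings before giving advice."
    ),
}

def build_messages(variant: str, user_text: str) -> list[dict]:
    """Assemble a message list for one experimental condition."""
    return [
        {"role": "system", "content": PROMPT_VARIANTS[variant]},
        {"role": "user", "content": user_text},
    ]

# Example: the same subjective question under both conditions.
for name in PROMPT_VARIANTS:
    print(name, build_messages(name, "Should I text my ex back?"))
```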
Looking forward, developers must balance creating engaging, trustworthy interactions with transparent communication about AI limitations. Regulatory frameworks may need to address anthropomorphic design practices that could mislead vulnerable users. The research underscores an emerging challenge in AI ethics: designing systems that build appropriate trust without exploiting human social instincts.
- Warmth and cognitive empathy are the strongest drivers of perceived anthropomorphism and trust in LLM interactions.
- Competence predicts user outcomes like usefulness and trust but does not increase perceptions of human-likeness.
- Subjective, personally relevant topics amplify anthropomorphic effects compared to objective information exchanges.
- Users form relational bonds with AI systems based on conversational tone and empathetic framing rather than actual sentience.
- The research identifies a potential gap between user trust in AI systems and actual AI reliability or capabilities.