
HumanLM: Simulating Users with State Alignment Beats Response Imitation

arXiv – CS AI | Shirley Wu, Evelyn Choi, Arpandeep Khatua, Zhanghan Wang, Joy He-Yueya, Tharindu Cyril Weerasooriya, Wei Wei, Diyi Yang, Jure Leskovec, James Zou
🤖 AI Summary

Researchers introduce HumanLM, an AI training framework that builds user simulators by aligning users' psychological states rather than merely imitating their response patterns. The system achieved a 16.3% relative improvement in alignment scores across six datasets spanning 26k users and 216k responses, demonstrating a superior ability to simulate real human behavior.

Key Takeaways
  • HumanLM uses reinforcement learning to align natural-language latent states with ground-truth responses, capturing underlying user psychology.
  • The framework outperformed existing approaches with 16.3% relative improvement in alignment scores across comprehensive benchmarks.
  • The Humanual benchmark includes six datasets spanning 26k users and 216k responses across diverse contexts, from daily life to political discussions.
  • A real-time study with 111 participants confirmed that HumanLM achieves the highest similarity to actual user responses.
  • The approach moves beyond surface-level imitation to capture deeper psychological states like beliefs and emotions.
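The summary does not specify HumanLM's actual objective or reward function. As a loose illustration of the distinction the takeaways draw between scoring surface responses and scoring an inferred latent state, here is a toy sketch; the function names, example strings, and the token-overlap metric are all illustrative assumptions, not taken from the paper:

```python
def token_set(text: str) -> set[str]:
    """Lowercased bag of words: the crudest possible text representation."""
    return set(text.lower().split())

def imitation_score(candidate_response: str, gold_response: str) -> float:
    """Surface imitation: Jaccard overlap between a candidate response and
    the ground-truth response (roughly what response imitation optimizes)."""
    a, b = token_set(candidate_response), token_set(gold_response)
    return len(a & b) / len(a | b) if a | b else 0.0

def state_alignment_reward(latent_state: str, gold_response: str) -> float:
    """Toy stand-in for an RL reward: how well does a natural-language
    latent state (beliefs, emotions) account for the observed response?"""
    state, response = token_set(latent_state), token_set(gold_response)
    return len(state & response) / len(response) if response else 0.0

# A latent state that captures the user's underlying attitude earns a
# higher reward than an unrelated one, even though neither is a response.
gold = "i distrust new vaccines because of past side effects"
good_state = "user feels distrust toward vaccines due to past side effects"
bad_state = "user is excited about sports"

print(state_alignment_reward(good_state, gold))  # higher
print(state_alignment_reward(bad_state, gold))   # lower
```

In an actual training loop, the latent state would be generated by the model and the reward computed against ground-truth responses, with reinforcement learning pushing the model's inferred states toward ones that explain real user behavior.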