y0news

#behavioral-modeling News & Analysis

9 articles tagged with #behavioral-modeling. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 1d ago · 6/10
🧠

Artificial Intelligence for Modeling and Simulation of Mixed Automated and Human Traffic

A comprehensive survey examines AI methodologies for simulating mixed autonomous and human-driven traffic, addressing critical gaps in current simulation tools. The research proposes a unified taxonomy of AI methods spanning agent-level behavior models, environment-level simulations, and physics-informed approaches to improve autonomous vehicle testing and validation.

AI · Bullish · arXiv – CS AI · 2d ago · 6/10
🧠

Teaching Language Models How to Code Like Learners: Conversational Serialization for Student Simulation

Researchers propose a method for training open-source language models to simulate how programming students learn and debug code, using authentic student data serialized into conversational formats. This approach addresses privacy and cost concerns with proprietary models while demonstrating improved performance in replicating student problem-solving behavior compared to existing baselines.
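The core idea, turning logged student submissions into chat turns a model can be fine-tuned on, can be sketched as follows. This is a hypothetical illustration; the field names and prompt wording are assumptions, not taken from the paper.

```python
# Hypothetical sketch: serializing a student's debugging trajectory into a
# chat-style transcript for fine-tuning an open-source LLM.
def serialize_trajectory(problem: str, attempts: list[dict]) -> list[dict]:
    """Turn successive code submissions into alternating chat turns."""
    messages = [{"role": "user", "content": f"Problem:\n{problem}"}]
    for attempt in attempts:
        # The model learns to produce the student's next (possibly buggy) code.
        messages.append({"role": "assistant", "content": attempt["code"]})
        # Grader feedback (e.g. failing tests) becomes the next user turn.
        if attempt.get("feedback"):
            messages.append({"role": "user", "content": attempt["feedback"]})
    return messages

convo = serialize_trajectory(
    "Sum the even numbers in a list.",
    [
        {"code": "def f(xs): return sum(xs)",
         "feedback": "Test failed: f([1, 2]) should be 2"},
        {"code": "def f(xs): return sum(x for x in xs if x % 2 == 0)"},
    ],
)
```

Fine-tuning on such transcripts teaches the model to imitate the student's next move, mistakes included, rather than to produce a correct solution.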

AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠

Tuning Language Models for Robust Prediction of Diverse User Behaviors

Researchers introduce BehaviorLM, a progressive fine-tuning approach that enables large language models to predict both common and rare user behaviors more effectively. The method uses a two-stage process that balances learning frequent anchor behaviors with improving predictions for uncommon tail behaviors, demonstrating improved performance on real-world datasets.
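The head/tail split behind such a two-stage scheme can be illustrated with a frequency cut. This is a sketch of the general idea only; the threshold, data shape, and behavior names are assumptions, not BehaviorLM's actual procedure.

```python
from collections import Counter

def split_by_frequency(events: list[dict], head_fraction: float = 0.8):
    """Partition behavior labels into frequent 'anchor' and rare 'tail' sets
    by cumulative frequency (illustrative threshold)."""
    counts = Counter(e["behavior"] for e in events)
    total = sum(counts.values())
    anchors, tail, cum = set(), set(), 0
    for behavior, n in counts.most_common():
        (anchors if cum / total < head_fraction else tail).add(behavior)
        cum += n
    return anchors, tail

events = [{"behavior": b} for b in
          ["open_app"] * 6 + ["search"] * 3 + ["export_report"]]
anchors, tail = split_by_frequency(events)
# Stage 1 would fine-tune on anchor-behavior examples; stage 2 would continue
# training with tail behaviors upsampled to balance the distribution.
```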

AI · Bearish · arXiv – CS AI · Mar 3 · 6/10
🧠

Position: AI Agents Are Not (Yet) a Panacea for Social Simulation

Researchers argue that LLM-based AI agents are not yet effective for social simulation, despite growing optimism in the field. The paper identifies systematic mismatches between what current agent systems produce and what scientific simulation requires, calling for more rigorous validation frameworks.

AI · Neutral · arXiv – CS AI · Feb 27 · 5/10
🧠

Towards Simulating Social Media Users with LLMs: Evaluating the Operational Validity of Conditioned Comment Prediction

Researchers introduced Conditioned Comment Prediction (CCP) to evaluate how well Large Language Models can simulate social media user behavior by predicting user comments. The study found that supervised fine-tuning improves text structure but degrades semantic accuracy, and that behavioral histories are more effective than descriptive personas for user simulation.

AI · Bullish · arXiv – CS AI · Mar 11 · 5/10
🧠

Improving through Interaction: Searching Behavioral Representation Spaces with CMA-ES-IG

Researchers developed CMA-ES-IG, a new algorithm that helps robots learn user preferences more effectively by incorporating user experience considerations. The algorithm suggests perceptually distinct and informative robot behaviors for users to rank, showing improved scalability, computational efficiency, and user satisfaction compared to existing methods.
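The interaction loop, sample candidate behaviors, have the user rank them, and move the search toward the preferred ones, can be illustrated with a toy ranking-driven evolution strategy. This is a drastic simplification for intuition only, not CMA-ES-IG itself: the hidden preference, update rule, and constants are all assumptions.

```python
import random

def user_ranks(candidates: list[float]) -> list[float]:
    """Stand-in for a human ranking behaviors; here we rank by closeness
    to a hidden preferred behavior parameter (0.7)."""
    return sorted(candidates, key=lambda x: abs(x - 0.7))

def preference_search(iterations: int = 30, pop: int = 5,
                      seed: int = 0) -> float:
    """Sample candidate behavior parameters, ask for a ranking, then
    recombine the top-ranked candidates and shrink the search step."""
    rng = random.Random(seed)
    mean, sigma = 0.0, 0.5
    for _ in range(iterations):
        candidates = [mean + sigma * rng.gauss(0, 1) for _ in range(pop)]
        ranked = user_ranks(candidates)
        mean = sum(ranked[:2]) / 2   # move toward the two best-ranked
        sigma *= 0.9                 # shrink the search step
    return mean

best = preference_search()
```

CMA-ES-IG's contribution is in *which* candidates get shown, perceptually distinct and informative ones, so each ranking the user provides carries more signal.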

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10
🧠

Revealing Positive and Negative Role Models to Help People Make Good Decisions

Researchers present a framework for social planners to strategically reveal positive and negative role models to influence agent behavior in social networks. The study addresses optimization challenges when disclosure budgets are limited and proposes algorithms to maximize social welfare while maintaining fairness across different groups.

AI · Neutral · arXiv – CS AI · Mar 2 · 5/10
🧠

Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges

A research position paper examines the integration of Large Language Models (LLMs) in agent-based social simulations, highlighting both opportunities and limitations. The study proposes Hybrid Constitutional Architectures that combine classical agent-based models with small language models and LLMs to balance expressive flexibility with analytical transparency.