y0news

#emotion-recognition News & Analysis

5 articles tagged with #emotion-recognition. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · Apr 14 · 6/10
🧠

HumanVBench: Probing Human-Centric Video Understanding in MLLMs with Automatically Synthesized Benchmarks

Researchers introduce HumanVBench, a comprehensive benchmark for evaluating how well multimodal AI models understand human-centric video content across 16 tasks, including emotion recognition and speech-visual alignment. The study evaluated 30 leading MLLMs and found significant performance gaps, even among top proprietary models, while introducing automated synthesis pipelines to enable scalable benchmark creation with minimal human effort.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10
🧠

A-MBER: Affective Memory Benchmark for Emotion Recognition

Researchers introduce A-MBER, a benchmark dataset designed to evaluate AI assistants' ability to recognize emotions based on long-term interaction history rather than immediate context. The benchmark tests whether models can retrieve relevant past interactions, infer current emotional states, and provide grounded explanations, revealing that memory's value lies in selective, context-aware interpretation rather than simple historical volume.
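The "selective retrieval before inference" idea can be illustrated with a toy sketch. This is not the A-MBER protocol or any code from the paper; the scoring here is plain word overlap, a deliberately simple stand-in for whatever retrieval a real assistant would use:

```python
def retrieve_relevant_history(history, current_message, k=2):
    """Rank past interactions by word overlap with the current message
    and keep only the top-k, so emotion inference is grounded in the
    relevant past turns rather than the whole log."""
    current_words = set(current_message.lower().split())

    def overlap(turn):
        return len(current_words & set(turn.lower().split()))

    return sorted(history, key=overlap, reverse=True)[:k]

history = [
    "I finally got the promotion at work!",
    "My dog has been sick all week.",
    "The weather was nice on my walk today.",
]
relevant = retrieve_relevant_history(history, "How is your dog doing now?", k=1)
print(relevant)  # the sick-dog turn ranks highest
```

The point of the sketch is the shape of the pipeline (retrieve a small relevant slice, then interpret it), which is the behavior the benchmark reportedly rewards over dumping the full history into context.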

AI · Neutral · arXiv – CS AI · Mar 12 · 4/10
🧠

AMB-DSGDN: Adaptive Modality-Balanced Dynamic Semantic Graph Differential Network for Multimodal Emotion Recognition

Researchers propose AMB-DSGDN, a new AI system for multimodal emotion recognition that uses adaptive modality balancing and differential graph attention mechanisms. The system addresses limitations in existing approaches by filtering noise and preventing dominant modalities from overwhelming the fusion process in text, speech, and visual data.
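To make "preventing dominant modalities from overwhelming the fusion" concrete, here is a minimal toy sketch of norm-aware fusion. It is an illustration of the general balancing idea only, not the AMB-DSGDN architecture; the gating rule and all values are invented for the example:

```python
import numpy as np

def balanced_fusion(modalities, temperature=1.0):
    """Fuse per-modality feature vectors so that a modality with large
    raw activations (e.g. loud audio) cannot dominate by scale alone:
    each vector is L2-normalized, and gates shrink as raw norm grows."""
    feats = [np.asarray(m, dtype=float) for m in modalities]
    raw_norms = np.array([np.linalg.norm(f) for f in feats])
    # Normalize each modality so the gates, not raw magnitude, decide influence.
    normed = [f / (n + 1e-8) for f, n in zip(feats, raw_norms)]
    # Softmax over negated norms: larger raw magnitude -> smaller gate.
    gates = np.exp(-raw_norms / temperature)
    gates = gates / gates.sum()
    return sum(g * f for g, f in zip(gates, normed))

text = [0.2, 0.1, 0.4]
audio = [3.0, 2.5, 4.0]  # would swamp a naive sum of raw features
video = [0.5, 0.3, 0.2]
fused = balanced_fusion([text, audio, video])
```

Since each normalized vector has unit length and the gates sum to one, the fused vector's norm stays bounded regardless of how loud any single modality is, which is the balancing property the summary describes.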

AI · Neutral · arXiv – CS AI · Mar 11 · 4/10
🧠

VoxEmo: Benchmarking Speech Emotion Recognition with Speech LLMs

Researchers introduce VoxEmo, a comprehensive benchmark for evaluating Speech Large Language Models on emotion recognition tasks across 35 emotion corpora and 15 languages. The benchmark addresses evaluation challenges in open text generation and introduces novel protocols that better align with human subjective emotion perception.

AI · Neutral · arXiv – CS AI · Mar 5 · 4/10
🧠

A benchmark for joint dialogue satisfaction, emotion recognition, and emotion state transition prediction

Researchers have created a new multi-task Chinese dialogue dataset that enables prediction of user satisfaction, emotion recognition, and emotional state transitions across multiple conversation turns. The dataset addresses limitations in existing Chinese resources and aims to improve understanding of how user emotions evolve during interactions to better predict satisfaction.