y0news

#benchmark-dataset News & Analysis

16 articles tagged with #benchmark-dataset. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

AI · Neutral · arXiv – CS AI · 3d ago · 7/10

Can Large Language Models Infer Causal Relationships from Real-World Text?

Researchers developed the first real-world benchmark for evaluating whether large language models can infer causal relationships from complex academic texts. The study reveals that LLMs struggle significantly with this task, with the best models achieving an F1 score of only 0.535, highlighting a critical gap in AI reasoning capabilities needed for AGI advancement.
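
The F1 score cited above is the harmonic mean of precision and recall. A minimal sketch of how it might be computed for causal-pair extraction; the gold and predicted relations here are hypothetical, not from the paper:

```python
# Hypothetical gold-standard and model-predicted (cause, effect) pairs.
gold = {("smoking", "cancer"), ("exercise", "health"), ("stress", "insomnia")}
predicted = {("smoking", "cancer"), ("stress", "fatigue")}

tp = len(gold & predicted)           # true positives: 1
precision = tp / len(predicted)      # 1/2 = 0.5
recall = tp / len(gold)              # 1/3 ≈ 0.333
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean = 0.4
```

A score of 0.535 on this scale means roughly half of the predicted causal links line up with the gold annotations once precision and recall are balanced.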

AI · Neutral · arXiv – CS AI · Feb 27 · 7/10

MM-NeuroOnco: A Multimodal Benchmark and Instruction Dataset for MRI-Based Brain Tumor Diagnosis

Researchers introduce MM-NeuroOnco, a large-scale multimodal dataset containing 24,726 MRI slices and 200,000 instructions for training AI models in brain tumor diagnosis. The benchmark reveals significant challenges in medical AI, with even advanced models like Gemini 3 Flash achieving only 41.88% accuracy on diagnostic questions.

AI · Bullish · arXiv – CS AI · 3d ago · 6/10

MMR-AD: A Large-Scale Multimodal Dataset for Benchmarking General Anomaly Detection with Multimodal Large Language Models

Researchers introduced MMR-AD, a large-scale multimodal dataset designed to benchmark general anomaly detection using Multimodal Large Language Models (MLLMs). The study reveals that current state-of-the-art MLLMs fall short of industrial requirements for anomaly detection, though a proposed baseline model called Anomaly-R1 demonstrates significant improvements through reasoning-based approaches enhanced by reinforcement learning.

AI · Neutral · arXiv – CS AI · 4d ago · 6/10

Noise-Aware In-Context Learning for Hallucination Mitigation in ALLMs

Researchers propose Noise-Aware In-Context Learning (NAICL), a plug-and-play method to reduce hallucinations in auditory large language models without expensive fine-tuning. The approach uses a noise prior library to guide models toward more conservative outputs, achieving a 37% reduction in hallucination rates while establishing a new benchmark for evaluating audio understanding systems.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10

A-MBER: Affective Memory Benchmark for Emotion Recognition

Researchers introduce A-MBER, a benchmark dataset designed to evaluate AI assistants' ability to recognize emotions based on long-term interaction history rather than immediate context. The benchmark tests whether models can retrieve relevant past interactions, infer current emotional states, and provide grounded explanations, revealing that memory's value lies in selective, context-aware interpretation rather than simple historical volume.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10

Attention Flows: Tracing LLM Conceptual Engagement via Story Summaries

Researchers evaluated whether large language models understand long-form narratives similarly to humans by comparing summaries of 150 novels written by humans and nine state-of-the-art LLMs. The study found that LLMs focus disproportionately on story endings rather than distributing attention like human readers, revealing gaps in narrative comprehension despite expanded context windows.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10

Toward Memory-Aided World Models: Benchmarking via Spatial Consistency

Researchers introduced a new benchmark dataset for evaluating world models' ability to maintain spatial consistency across long sequences, addressing a critical gap in AI evaluation. The dataset, collected from Minecraft environments with 20 million frames across 150 locations, enables development of memory-augmented models that can reliably simulate physical spaces for downstream tasks like planning and simulation.

AI · Neutral · arXiv – CS AI · Apr 10 · 6/10

Q-Probe: Scaling Image Quality Assessment to High Resolution via Context-Aware Agentic Probing

Q-Probe introduces a novel agentic framework for scaling image quality assessment to high-resolution images by addressing limitations in existing reinforcement learning approaches. The research presents Vista-Bench, a new benchmark for fine-grained degradation analysis, and demonstrates state-of-the-art performance across multiple resolution scales through context-aware probing mechanisms.

AI · Neutral · arXiv – CS AI · Mar 16 · 6/10

Do LLMs have a Gender (Entropy) Bias?

Researchers discovered that large language models exhibit gender bias at the individual question level, producing responses with different information content for men versus women despite appearing unbiased at the category level. A new benchmark dataset called RealWorldQuestioning was developed, and a simple prompt-based debiasing approach was shown to improve response quality in 78% of cases.

🏒 Hugging Face · 🧠 ChatGPT
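
The "entropy" framing can be illustrated with Shannon entropy over a response's word distribution; the two responses below are hypothetical, and the paper's exact metric may well differ:

```python
import math
from collections import Counter

def shannon_entropy(text: str) -> float:
    """Shannon entropy (bits) of the word distribution in a response."""
    words = text.lower().split()
    counts = Counter(words)
    n = len(words)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical model answers to the same question asked about a man vs. a
# woman; higher entropy loosely indicates a more information-rich response.
resp_a = "she is a doctor"
resp_b = "he is a well known cardiologist at a large teaching hospital"
```

A gap in entropy between such paired responses, even when aggregate statistics look balanced, is the kind of question-level asymmetry the summary describes.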
AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

TripleSumm: Adaptive Triple-Modality Fusion for Video Summarization

Researchers introduce TripleSumm, a novel AI architecture that adaptively fuses visual, text, and audio modalities for improved video summarization. The team also releases MoSu, the first large-scale benchmark dataset providing all three modalities for multimodal video summarization research.

AI · Neutral · arXiv – CS AI · Feb 27 · 5/10

FIRE: A Comprehensive Benchmark for Financial Intelligence and Reasoning Evaluation

Researchers introduce FIRE, a comprehensive benchmark for evaluating Large Language Models' financial intelligence and reasoning capabilities. The benchmark includes theoretical financial knowledge tests from qualification exams and 3,000 practical financial scenario questions covering complex business domains.

AI · Neutral · arXiv – CS AI · Feb 27 · 6/10

PoSh: Using Scene Graphs To Guide LLMs-as-a-Judge For Detailed Image Descriptions

Researchers introduce PoSh, a new evaluation metric for detailed image descriptions that uses scene graphs to guide LLMs-as-a-Judge, achieving better correlation with human judgments than existing methods. They also present DOCENT, a challenging benchmark dataset featuring artwork with expert-written descriptions to evaluate vision-language models' performance on complex image analysis.

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10

Whisper-RIR-Mega: A Paired Clean-Reverberant Speech Benchmark for ASR Robustness to Room Acoustics

Researchers introduce Whisper-RIR-Mega, a new benchmark dataset for testing automatic speech recognition robustness in reverberant acoustic environments. The study evaluates five Whisper models and finds that reverberation consistently degrades performance across all model sizes, with word error rates increasing by 0.12 to 1.07 percentage points.
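
Word error rate, the metric behind the percentage-point figures above, is word-level edit distance normalized by reference length. A minimal sketch with made-up transcripts showing how a single reverberation-induced misrecognition moves the score:

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: Levenshtein distance over word tokens / ref length."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = min edits to turn the first i ref words into first j hyp words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[-1][-1] / len(r)

clean = wer("the cat sat on the mat", "the cat sat on the mat")  # 0.0
reverb = wer("the cat sat on the mat", "the cat sat on a mat")   # 1/6
```

One substitution in a six-word reference raises WER by about 16.7 percentage points, which puts the paper's reported 0.12 to 1.07 point degradations in perspective as small but consistent shifts.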

AI · Neutral · arXiv – CS AI · Mar 4 · 4/10

Real-Time Generation of Game Video Commentary with Multimodal LLMs: Pause-Aware Decoding Approaches

Researchers developed new prompting-based approaches using multimodal large language models to generate real-time video commentary that considers both content relevance and timing. The study introduces dynamic interval-based decoding that adjusts prediction timing based on utterance duration, showing improved alignment with human commentary patterns without requiring model fine-tuning.