y0news
#benchmark16 articles
16 articles
AINeutralarXiv – CS AI Ā· 6h ago8
🧠

HumanMCP: A Human-Like Query Dataset for Evaluating MCP Tool Retrieval Performance

Researchers have released HumanMCP, the first large-scale dataset designed to evaluate tool retrieval performance in Model Context Protocol (MCP) servers. The dataset addresses a critical gap by providing realistic, human-like queries paired with 2,800 tools across 308 MCP servers, improving upon existing benchmarks that lack authentic user interaction patterns.

AIBullisharXiv – CS AI Ā· 6h ago5
🧠

MMKG-RDS: Reasoning Data Synthesis via Deep Mining of Multimodal Knowledge Graphs

Researchers introduce MMKG-RDS, a framework that uses multimodal knowledge graphs to synthesize high-quality training data for improving AI model reasoning abilities. Testing on Qwen3 models showed 9.2% improvement in reasoning accuracy, with applications for complex benchmark construction involving tables and formulas.

AIBullisharXiv – CS AI Ā· 6h ago8
🧠

Reasoning-Driven Multimodal LLM for Domain Generalization

Researchers developed RD-MLDG, a new framework that uses multimodal large language models with reasoning chains to improve domain generalization in deep learning. The approach addresses challenges in cross-domain visual recognition by leveraging reasoning capabilities rather than just visual feature invariance, achieving state-of-the-art performance on standard benchmarks.

AINeutralarXiv – CS AI Ā· 6h ago6
🧠

LemmaBench: A Live, Research-Level Benchmark to Evaluate LLM Capabilities in Mathematics

Researchers have developed LemmaBench, a new benchmark for evaluating Large Language Models on research-level mathematics by automatically extracting and rewriting lemmas from arXiv papers. Current state-of-the-art LLMs achieve only 10-15% accuracy on these mathematical theorem proving tasks, revealing a significant gap between AI capabilities and human-level mathematical research.

AINeutralarXiv – CS AI Ā· 6h ago5
🧠

DARE-bench: Evaluating Modeling and Instruction Fidelity of LLMs in Data Science

Researchers introduce DARE-bench, a new benchmark with 6,300 Kaggle-derived tasks for evaluating Large Language Models' performance on data science and machine learning tasks. The benchmark reveals that even advanced models like GPT-4-mini struggle with ML modeling tasks, while fine-tuning on DARE-bench data can improve model accuracy by up to 8x.

AIBullisharXiv – CS AI Ā· 6h ago10
🧠

Reason to Contrast: A Cascaded Multimodal Retrieval Framework

Researchers introduce TTE-v2, a new multimodal retrieval framework that achieves state-of-the-art performance by incorporating reasoning steps during retrieval and reranking. The approach demonstrates that scaling based on reasoning tokens rather than model size can significantly improve performance, with TTE-v2-7B reaching 75.7% accuracy on MMEB-V2 benchmark.

AINeutralarXiv – CS AI Ā· 6h ago5
🧠

When Does Multimodal Learning Help in Healthcare? A Benchmark on EHR and Chest X-Ray Fusion

Researchers conducted a systematic benchmark study on multimodal fusion between Electronic Health Records (EHR) and chest X-rays for clinical decision support, revealing when and how combining data modalities improves healthcare AI performance. The study found that multimodal fusion helps when data is complete but benefits degrade under realistic missing data scenarios, and released an open-source benchmarking toolkit for reproducible evaluation.

AINeutralarXiv – CS AI Ā· 6h ago5
🧠

Ref-Adv: Exploring MLLM Visual Reasoning in Referring Expression Tasks

Researchers introduce Ref-Adv, a new benchmark for testing multimodal large language models' visual reasoning capabilities in referring expression tasks. The benchmark reveals that current MLLMs, despite performing well on standard datasets like RefCOCO, rely heavily on shortcuts and show significant gaps in genuine visual reasoning and grounding abilities.

AINeutralarXiv – CS AI Ā· 6h ago5
🧠

Jailbreak Foundry: From Papers to Runnable Attacks for Reproducible Benchmarking

Researchers introduce Jailbreak Foundry (JBF), a system that automatically converts AI jailbreak research papers into executable code modules for standardized testing. The system successfully reproduced 30 attacks with high accuracy and reduces implementation code by nearly half while enabling consistent evaluation across multiple AI models.

AIBearisharXiv – CS AI Ā· 6h ago4
🧠

ForesightSafety Bench: A Frontier Risk Evaluation and Governance Framework towards Safe AI

Researchers have developed ForesightSafety Bench, a comprehensive AI safety evaluation framework covering 94 risk dimensions across 7 fundamental safety pillars. The benchmark evaluation of over 20 advanced large language models revealed widespread safety vulnerabilities, particularly in autonomous AI agents, AI4Science, and catastrophic risk scenarios.

AIBullisharXiv – CS AI Ā· 6h ago4
🧠

Less is More: AMBER-AFNO -- a New Benchmark for Lightweight 3D Medical Image Segmentation

Researchers developed AMBER-AFNO, a new lightweight architecture for 3D medical image segmentation that replaces traditional attention mechanisms with Adaptive Fourier Neural Operators. The model achieves state-of-the-art results on medical datasets while maintaining linear memory scaling and quasi-linear computational complexity.

$NEAR
AIBullisharXiv – CS AI Ā· 6h ago6
🧠

Scaling Generalist Data-Analytic Agents

Researchers introduce DataMind, a new training framework for building open-source data-analytic AI agents that can handle complex, multi-step data analysis tasks. The DataMind-14B model achieves state-of-the-art performance with 71.16% average score, outperforming proprietary models like DeepSeek-V3.1 and GPT-5 on data analysis benchmarks.

AINeutralarXiv – CS AI Ā· 6h ago1
🧠

HotelQuEST: Balancing Quality and Efficiency in Agentic Search

Researchers introduce HotelQuEST, a new benchmark for evaluating agentic search systems that balances quality and efficiency metrics. The study reveals that while LLM-based agents achieve higher accuracy than traditional retrievers, they incur substantially higher costs due to redundant operations and poor optimization.

AINeutralarXiv – CS AI Ā· 6h ago1
🧠

NuBench: An Open Benchmark for Deep Learning-Based Event Reconstruction in Neutrino Telescopes

NuBench is a new open benchmark for deep learning-based event reconstruction in neutrino telescopes, comprising seven large-scale simulated datasets with nearly 130 million neutrino interactions. The benchmark enables comparison of machine learning reconstruction methods across different detector geometries and evaluates four algorithms including ParticleNeT and DynEdge on core reconstruction tasks.

AINeutralarXiv – CS AI Ā· 6h ago1
🧠

AudioCapBench: Quick Evaluation on Audio Captioning across Sound, Music, and Speech

Researchers introduce AudioCapBench, a new benchmark for evaluating how well large multimodal AI models can generate captions for audio content across sound, music, and speech domains. The study tested 13 models from OpenAI and Google Gemini, finding that Gemini models generally outperformed OpenAI in overall captioning quality, though all models struggled most with music captioning.

AINeutralarXiv – CS AI Ā· 6h ago1
🧠

CSyMR: Benchmarking Compositional Music Information Retrieval in Symbolic Music Reasoning

Researchers introduce CSyMR-Bench, a new benchmark for evaluating AI systems' ability to perform complex music information retrieval tasks from symbolic notation. The benchmark includes 126 multiple-choice questions requiring compositional reasoning, and demonstrates that tool-augmented AI approaches outperform language model-only methods by 5-7%.