y0news
#evaluation · 6 articles
AI · Neutral · arXiv – CS AI · 4h ago · 7

HumanMCP: A Human-Like Query Dataset for Evaluating MCP Tool Retrieval Performance

Researchers have released HumanMCP, the first large-scale dataset designed to evaluate tool retrieval performance in Model Context Protocol (MCP) servers. The dataset addresses a critical gap by providing realistic, human-like queries paired with 2,800 tools across 308 MCP servers, improving upon existing benchmarks that lack authentic user interaction patterns.
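
For context on what "tool retrieval performance" usually means here: each human-like query is matched against the pool of available tools, and the benchmark checks whether the correct tool appears in a retriever's top-k results. Below is a minimal sketch of that metric; the example schema (`query`, `gold_tool_id`), the toy keyword retriever, and the tool catalog are illustrative assumptions, not HumanMCP's actual format or the paper's retrieval method.

```python
# Sketch of recall@k scoring for tool retrieval; schema and retriever are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class RetrievalExample:
    query: str          # human-like user query
    gold_tool_id: str   # the MCP tool the query should resolve to


def recall_at_k(
    examples: List[RetrievalExample],
    retrieve: Callable[[str, int], List[str]],  # returns top-k tool ids for a query
    k: int = 5,
) -> float:
    """Fraction of queries whose gold tool appears in the retriever's top-k results."""
    hits = sum(ex.gold_tool_id in retrieve(ex.query, k) for ex in examples)
    return hits / len(examples) if examples else 0.0


if __name__ == "__main__":
    # Toy retriever that ranks tools by naive keyword overlap with the query.
    catalog = {"fs.read_file": "read file contents", "web.search": "search the web"}

    def toy_retrieve(query: str, k: int) -> List[str]:
        ranked = sorted(
            catalog,
            key=lambda tid: -len(set(query.lower().split()) & set(catalog[tid].split())),
        )
        return ranked[:k]

    data = [RetrievalExample("can you read this file for me", "fs.read_file")]
    print(recall_at_k(data, toy_retrieve, k=1))
```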

AI · Neutral · arXiv – CS AI · 4h ago · 5

LemmaBench: A Live, Research-Level Benchmark to Evaluate LLM Capabilities in Mathematics

Researchers have developed LemmaBench, a new benchmark for evaluating Large Language Models on research-level mathematics by automatically extracting and rewriting lemmas from arXiv papers. Current state-of-the-art LLMs achieve only 10-15% accuracy on these mathematical theorem-proving tasks, revealing a significant gap between AI capabilities and human-level mathematical research.
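
For readers wondering what the reported 10-15% accuracy refers to: on benchmarks of this kind, accuracy is typically the fraction of lemmas for which a model's attempted proof is accepted by some grading step. The loop below is a minimal sketch under that assumption; the item schema, the `prove` model call, and the `is_valid` grader are hypothetical placeholders, not LemmaBench's actual extraction or grading pipeline.

```python
# Sketch of a lemma-proving evaluation loop; the grader and model interface are assumed.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class LemmaItem:
    source_paper: str   # arXiv id the lemma was extracted from
    statement: str      # rewritten, self-contained lemma statement


def evaluate(
    items: List[LemmaItem],
    prove: Callable[[str], str],           # model under test: statement -> candidate proof
    is_valid: Callable[[str, str], bool],  # grader: (statement, proof) -> accepted?
) -> float:
    """Accuracy = fraction of lemmas whose produced proof the grader accepts."""
    correct = sum(is_valid(it.statement, prove(it.statement)) for it in items)
    return correct / len(items) if items else 0.0


if __name__ == "__main__":
    items = [LemmaItem("2401.00001", "For all n >= 1, n^2 >= n.")]
    # Stand-in model and grader for illustration only.
    print(evaluate(items,
                   prove=lambda s: "trivial by induction on n",
                   is_valid=lambda s, p: "induction" in p))
```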

AI · Neutral · arXiv – CS AI · 4h ago · 4

DARE-bench: Evaluating Modeling and Instruction Fidelity of LLMs in Data Science

Researchers introduce DARE-bench, a new benchmark with 6,300 Kaggle-derived tasks for evaluating Large Language Models' performance on data science and machine learning tasks. The benchmark reveals that even advanced models like GPT-4-mini struggle with ML modeling tasks, while fine-tuning on DARE-bench data can improve model accuracy by up to 8x.

AI · Neutral · arXiv – CS AI · 4h ago · 4

LFQA-HP-1M: A Large-Scale Human Preference Dataset for Long-Form Question Answering

Researchers released LFQA-HP-1M, a dataset with 1.3 million human preference annotations for evaluating long-form question answering systems. The study introduces nine quality rubrics and shows that simple linear models can match advanced LLM evaluators while exposing vulnerabilities in current evaluation methods.
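
A rough illustration of the "simple linear models can match advanced LLM evaluators" claim: fit a linear model (here, logistic regression) on per-rubric score differences between two candidate answers to predict which one humans prefer. The nine rubric names, the synthetic data, and the feature construction below are assumptions for demonstration, not the dataset's actual schema or the paper's exact setup.

```python
# Sketch: a linear preference model over per-rubric score differences (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

RUBRICS = [  # placeholder names standing in for the paper's nine quality rubrics
    "factuality", "completeness", "relevance", "coherence", "conciseness",
    "citation_quality", "helpfulness", "safety", "fluency",
]

rng = np.random.default_rng(0)
n_pairs = 1_000

# Feature vector per pair: rubric scores of answer A minus rubric scores of answer B.
score_diff = rng.normal(size=(n_pairs, len(RUBRICS)))
# Synthetic labels: 1 if A preferred; preference loosely tracks a weighted rubric sum.
true_weights = rng.uniform(0.0, 1.0, size=len(RUBRICS))
labels = (score_diff @ true_weights + rng.normal(scale=0.5, size=n_pairs) > 0).astype(int)

model = LogisticRegression().fit(score_diff, labels)
print("agreement with (synthetic) preferences:", model.score(score_diff, labels))
```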

AI · Neutral · arXiv – CS AI · 4h ago · 0

How do Visual Attributes Influence Web Agents? A Comprehensive Evaluation of User Interface Design Factors

Researchers introduced VAF, a systematic evaluation pipeline to measure how visual web elements influence AI agent decision-making. The study tested 48 variants across 5 real-world websites and found that background contrast, item size, position, and card clarity significantly impact agent behavior, while font styling and text color have minimal effects.
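
A minimal sketch of the kind of measurement such a pipeline implies: run the agent repeatedly on each visual variant of a page and compare how often it selects the intended target element. The variant names, log format, and trial counts below are illustrative assumptions, not the VAF pipeline's actual interface.

```python
# Sketch: per-variant target-selection rates from hypothetical agent trial logs.
from collections import defaultdict
from typing import Dict, List


def selection_rates(trials: List[dict]) -> Dict[str, float]:
    """Per-variant rate at which the agent clicked the intended element.

    Each trial is a dict like {"variant": str, "picked_target": bool}.
    """
    counts: Dict[str, List[int]] = defaultdict(lambda: [0, 0])  # variant -> [hits, total]
    for t in trials:
        counts[t["variant"]][0] += int(t["picked_target"])
        counts[t["variant"]][1] += 1
    return {v: hits / total for v, (hits, total) in counts.items()}


if __name__ == "__main__":
    logs = [
        {"variant": "high_contrast_background", "picked_target": True},
        {"variant": "high_contrast_background", "picked_target": True},
        {"variant": "low_contrast_background", "picked_target": False},
        {"variant": "low_contrast_background", "picked_target": True},
    ]
    print(selection_rates(logs))
```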

AI · Neutral · arXiv – CS AI · 4h ago · 0

AudioCapBench: Quick Evaluation on Audio Captioning across Sound, Music, and Speech

Researchers introduce AudioCapBench, a new benchmark for evaluating how well large multimodal AI models generate captions for audio content across sound, music, and speech domains. The study tested 13 models from OpenAI and Google's Gemini family, finding that the Gemini models generally produced higher overall captioning quality than OpenAI's, though every model struggled most with music captioning.