y0news

#benchmark News & Analysis

253 articles tagged with #benchmark. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 5/10

CodeTaste: Can LLMs Generate Human-Level Code Refactorings?

Researchers introduce CodeTaste, a benchmark testing whether AI coding agents can perform code refactoring at human-level quality. The study finds that frontier models struggle to identify appropriate refactorings when given only general areas for improvement, but perform better when given detailed specifications.

🧠 AI · Neutral · arXiv – CS AI · Mar 5 · 5/10

Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions

Researchers have introduced RealPref, a new benchmark for evaluating how well Large Language Models follow user preferences in long-term personalized interactions. The study reveals that LLM performance significantly degrades with longer contexts and more implicit preference expressions, highlighting challenges in developing user-aware AI assistants.
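
Scores for this kind of benchmark are usually reported as the fraction of responses that still honor a preference stated many turns earlier. As a minimal sketch of that aggregation, assuming fabricated evaluation records rather than RealPref's actual data format:

```python
from collections import defaultdict

# Fabricated eval records: (turns since the preference was stated, followed?).
records = [(2, True), (5, True), (8, True), (30, False), (42, False), (55, True)]

buckets: dict[str, list[bool]] = defaultdict(list)
for distance, followed in records:
    key = "short horizon (<10 turns)" if distance < 10 else "long horizon (>=10 turns)"
    buckets[key].append(followed)

for name, hits in buckets.items():
    print(f"{name}: {sum(hits) / len(hits):.0%} of preferences followed")
# The RealPref finding has exactly this shape: the long-horizon bucket lags.
```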

🧠 AI · Bullish · arXiv – CS AI · Mar 4 · 5/10

MultiSessionCollab: Learning User Preferences with Memory to Improve Long-Term Collaboration

Researchers introduce MultiSessionCollab, a benchmark for evaluating conversational AI agents' ability to learn and adapt to user preferences across multiple collaboration sessions. The study demonstrates that equipping agents with persistent memory significantly improves long-term collaboration quality, task success rates, and user experience.
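
The summary doesn't spell out the paper's memory interface, but the persistent-memory idea can be sketched minimally: a preference store that survives across sessions and is replayed into each new session's prompt. All names below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceMemory:
    """Toy persistent store for user preferences learned across sessions."""
    preferences: dict[str, str] = field(default_factory=dict)

    def update(self, key: str, value: str) -> None:
        self.preferences[key] = value  # later sessions refine earlier entries

    def render(self) -> str:
        return "\n".join(f"- {k}: {v}" for k, v in self.preferences.items())

def start_session(memory: PreferenceMemory, task: str) -> str:
    """Replay accumulated preferences into a fresh session prompt."""
    return f"Known user preferences:\n{memory.render()}\n\nTask: {task}"

memory = PreferenceMemory()
memory.update("plot_style", "dark background, no gridlines")  # session 1
memory.update("tone", "concise, no filler")                   # session 2
print(start_session(memory, "Draft the quarterly report."))   # session 3
```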

🧠 AI · Neutral · arXiv – CS AI · Mar 4 · 5/10

AttackSeqBench: Benchmarking the Capabilities of LLMs for Attack Sequences Understanding

Researchers introduced AttackSeqBench, a new benchmark designed to evaluate large language models' capabilities in understanding and reasoning about cyber attack sequences from threat intelligence reports. The study tested 7 LLMs, 5 LRMs, and 4 post-training strategies to assess their ability to analyze adversarial behaviors across tactical, technical, and procedural dimensions.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

WebDevJudge: Evaluating (M)LLMs as Critiques for Web Development Quality

Researchers introduced WebDevJudge, a benchmark for evaluating how well AI models can judge web development quality compared to human experts. The study reveals significant gaps between AI judges and human evaluation, highlighting fundamental limitations in AI's ability to assess complex, interactive web development tasks.
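
The summary doesn't say which agreement statistic the authors report, but a standard way to quantify a judge-versus-human gap is chance-corrected agreement over paired verdicts. A minimal sketch with fabricated labels:

```python
from collections import Counter

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """Chance-corrected agreement between two raters on the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[lbl] * cb[lbl] for lbl in ca.keys() | cb.keys()) / (n * n)
    return (observed - expected) / (1 - expected)

human = ["pass", "fail", "fail", "pass", "fail", "pass"]  # fabricated verdicts
judge = ["pass", "pass", "fail", "pass", "pass", "pass"]
print(f"kappa = {cohens_kappa(human, judge):.3f}")  # well below 1.0: a real gap
```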

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

Vision-DeepResearch Benchmark: Rethinking Visual and Textual Search for Multimodal Large Language Models

Researchers introduce Vision-DeepResearch Benchmark (VDR-Bench) with 2,000 VQA instances to better evaluate multimodal AI systems' visual and textual search capabilities. The benchmark addresses limitations in existing evaluations where answers could be inferred without proper visual search, and proposes a multi-round cropped-search workflow to improve model performance.
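
The paper's exact workflow isn't given here, but the general pattern behind a multi-round cropped search is a loop in which the model proposes a region to zoom into before committing to an answer. A minimal sketch, where the model interface and region format are assumptions:

```python
from PIL import Image

def cropped_search(model, image: Image.Image, question: str, max_rounds: int = 3):
    """Iteratively zoom into model-proposed regions, then answer (illustrative).

    Hypothetical interface: model.propose(view, question) returns either
    {"action": "answer", "text": ...} or {"action": "crop", "bbox": (l, t, r, b)}.
    """
    view = image
    for _ in range(max_rounds):
        step = model.propose(view, question)
        if step["action"] == "answer":
            return step["text"]
        view = view.crop(step["bbox"])  # inspect the proposed region up close
    return model.propose(view, question).get("text")  # out of round budget
```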

$NEAR
🤖 AI × Crypto · Bearish · arXiv – CS AI · Mar 3 · 6/10

TraderBench: How Robust Are AI Agents in Adversarial Capital Markets?

TraderBench introduces a new benchmark for evaluating AI agents in financial markets, combining expert-verified static tasks with adversarial trading simulations. The study found that 8 of 13 tested AI models showed minimal variation across market conditions, indicating that they rely on fixed strategies rather than adapting their behavior to the market.
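
That "minimal variation" finding suggests a simple diagnostic: compare an agent's scores across simulated regimes and flag near-zero spread. A toy version, with fabricated regime names and scores:

```python
from statistics import pstdev

# Hypothetical per-regime scores for one agent (e.g., normalized PnL).
scores_by_regime = {"bull": 0.62, "bear": 0.61, "sideways": 0.63, "crash": 0.60}

spread = pstdev(scores_by_regime.values())
print(f"cross-regime std dev: {spread:.3f}")
# A spread near zero is the red flag described above: the agent is likely
# running one fixed strategy instead of adapting to market conditions.
```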

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

EmCoop: A Framework and Benchmark for Embodied Cooperation Among LLM Agents

Researchers introduce EmCoop, a new benchmark framework for studying cooperation among LLM-based embodied multi-agent systems in dynamic environments. The framework separates cognitive coordination from physical interaction layers and provides process-level metrics to analyze collaboration quality beyond just task completion success.
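
A process-level metric scores how the team worked rather than whether the task finished. One toy example is action redundancy, the fraction of actions that duplicate something a teammate already did (the log format here is invented):

```python
# Fabricated action log from one cooperative episode: (agent, action) pairs.
log = [("a1", "pick cup"), ("a2", "open drawer"), ("a2", "pick cup"),
       ("a1", "place cup on table"), ("a2", "place cup on table")]

seen: set[str] = set()
redundant = 0
for agent, action in log:
    if action in seen:
        redundant += 1  # a teammate already performed this action
    seen.add(action)

print(f"action redundancy: {redundant / len(log):.0%}")  # lower = better coordination
```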

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

MicroVerse: A Preliminary Exploration Toward a Micro-World Simulation

Researchers introduce MicroVerse, a specialized AI video generation model for microscale biological simulations, addressing limitations of current video generation models in scientific applications. The work includes MicroWorldBench benchmark and MicroSim-10K dataset, targeting biomedical applications like drug discovery and educational visualization.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

K^2-Agent: Co-Evolving Know-What and Know-How for Hierarchical Mobile Device Control

Researchers introduce K²-Agent, a hierarchical AI framework for mobile device control that separates 'know-what' from 'know-how' knowledge, achieving a 76.1% success rate on the AndroidWorld benchmark. The system pairs a high-level reasoner for task planning with a low-level executor for skill execution, and shows strong generalization across different models and tasks.
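
The control loop implied by that split can be sketched in a few lines; every interface below is invented for illustration, not taken from the paper:

```python
def run_hierarchical_agent(reasoner, executor, goal: str, max_steps: int = 20) -> bool:
    """Know-what picks the next subgoal; know-how grounds it in device actions.

    Hypothetical interfaces: reasoner.plan(goal, history) -> subgoal | None,
    executor.act(subgoal) -> observation.
    """
    history = []
    for _ in range(max_steps):
        subgoal = reasoner.plan(goal, history)   # high-level: what to do next
        if subgoal is None:                      # reasoner declares the goal met
            return True
        observation = executor.act(subgoal)      # low-level: taps, swipes, typing
        history.append((subgoal, observation))   # feedback closes the loop
    return False
```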

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

MC-Search: Evaluating and Enhancing Multimodal Agentic Search with Structured Long Reasoning Chains

Researchers introduce MC-Search, the first benchmark for evaluating agentic multimodal retrieval-augmented generation (MM-RAG) systems with long, structured reasoning chains. The benchmark reveals systematic issues in current multimodal large language models and introduces Search-Align, a training framework that improves planning and retrieval accuracy.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 7/10

ProtRLSearch: A Multi-Round Multimodal Protein Search Agent with Large Language Models Trained via Reinforcement Learning

Researchers introduce ProtRLSearch, a multi-round protein search agent that uses reinforcement learning and multimodal inputs (protein sequences and text) to improve protein analysis for healthcare applications. The system addresses the limitations of single-round, text-only protein search agents and includes a new benchmark, ProtMCQs, with 3,000 multiple-choice questions for evaluation.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

Benchmarking LLM Summaries of Multimodal Clinical Time Series for Remote Monitoring

Researchers developed an event-based evaluation framework for LLM-generated clinical summaries of remote monitoring data, revealing that summaries scoring high on semantic similarity often fail to capture clinically significant events. A vision-based approach using time-series visualizations achieved the best clinical event alignment, with 45.7% abnormality recall.
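
Event-based evaluation scores a summary by the clinical events it covers rather than by surface similarity; the 45.7% figure is recall over abnormal events. A toy computation over labeled event sets (events fabricated):

```python
def event_recall(reference_events: set[str], summary_events: set[str]) -> float:
    """Fraction of ground-truth clinical events that the summary mentions."""
    if not reference_events:
        return 1.0
    return len(reference_events & summary_events) / len(reference_events)

reference = {"bradycardia_overnight", "spo2_drop_4am", "weight_gain_2kg"}
generated = {"weight_gain_2kg", "spo2_drop_4am", "stable_blood_pressure"}
print(f"abnormality recall: {event_recall(reference, generated):.1%}")  # 66.7%
```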

$NEAR
🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

GMP: A Benchmark for Content Moderation under Co-occurring Violations and Dynamic Rules

Researchers introduce GMP, a new benchmark highlighting critical challenges for AI content moderation systems when dealing with co-occurring policy violations and dynamic platform rules. The study reveals that current large language models struggle to moderate consistently when policies are unstable or context-dependent, leading either to over-censorship or to harmful content slipping through.
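
The two failure axes, co-occurring violations and shifting rules, can be made concrete with a toy keyword moderator: the same post must yield every applicable label, and the verdict must track whichever rule set is currently active. All rules and posts below are fabricated:

```python
RULES_V1 = {"spam": ["buy now"], "harassment": ["idiot"]}
RULES_V2 = {"harassment": ["idiot"]}  # spam policy retired in a later rule update

def moderate(post: str, rules: dict[str, list[str]]) -> set[str]:
    """Return every violated category, not just the first match."""
    text = post.lower()
    return {cat for cat, terms in rules.items() if any(t in text for t in terms)}

post = "Buy now, you idiot!"
print(moderate(post, RULES_V1))  # {'spam', 'harassment'}: co-occurring violations
print(moderate(post, RULES_V2))  # {'harassment'}: the verdict tracks live rules
```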

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

LiveCultureBench: a Multi-Agent, Multi-Cultural Benchmark for Large Language Models in Dynamic Social Simulations

Researchers introduce LiveCultureBench, a new benchmark that evaluates large language models as autonomous agents in simulated social environments, testing both task completion and adherence to cultural norms. The benchmark uses a multi-cultural town simulation to assess cross-cultural robustness and the balance between effectiveness and cultural sensitivity in LLM agents.

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

NovaLAD: A Fast, CPU-Optimized Document Extraction Pipeline for Generative AI and Data Intelligence

NovaLAD is a new CPU-optimized document extraction pipeline that uses dual YOLO models for converting unstructured documents into structured formats for AI applications. The system achieves 96.49% TEDS and 98.51% NID on benchmarks, outperforming existing commercial and open-source parsers while running efficiently on CPU without requiring GPU resources.
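
NID is a similarity built on normalized edit distance; the exact normalization NovaLAD uses isn't stated in this summary, so the sketch below follows the common 1 - dist / max-length convention:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def nid_similarity(pred: str, gold: str) -> float:
    """1 - normalized edit distance; 1.0 means a perfect extraction."""
    if not pred and not gold:
        return 1.0
    return 1.0 - levenshtein(pred, gold) / max(len(pred), len(gold))

print(nid_similarity("Total: $1,234.00", "Total: $1,284.00"))  # one character off
```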

🧠 AI · Bearish · arXiv – CS AI · Mar 3 · 6/10

LangGap: Diagnosing and Closing the Language Gap in Vision-Language-Action Models

Researchers reveal that state-of-the-art Vision-Language-Action (VLA) models largely ignore language instructions despite achieving 95% success on standard benchmarks. The new LangGap benchmark exposes significant language understanding deficits, with targeted data augmentation only partially addressing the fundamental challenge of diverse instruction comprehension.
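
A quick probe for this failure mode is an instruction-swap ablation: if shuffling instructions between episodes barely moves the success rate, the policy is not reading language. A sketch with assumed harness interfaces:

```python
import random

def instruction_sensitivity(model, episodes) -> float:
    """Success-rate drop when each episode receives a mismatched instruction.

    Hypothetical interfaces: episode.instruction is the ground-truth command,
    episode.run(model, instruction) -> bool executes one rollout.
    """
    matched = sum(ep.run(model, ep.instruction) for ep in episodes) / len(episodes)
    shuffled = [ep.instruction for ep in episodes]
    random.shuffle(shuffled)
    mismatched = sum(ep.run(model, instr)
                     for ep, instr in zip(episodes, shuffled)) / len(episodes)
    return matched - mismatched  # near zero means the model ignores language
```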

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

EgoNight: Towards Egocentric Vision Understanding at Night with a Challenging Benchmark

Researchers introduce EgoNight, the first comprehensive benchmark for nighttime egocentric vision understanding, featuring day-night aligned videos and visual question answering tasks. The benchmark reveals significant performance drops in state-of-the-art multimodal large language models when operating under low-light conditions.

🧠 AI · Neutral · arXiv – CS AI · Mar 3 · 6/10

Theory of Code Space: Do Code Agents Understand Software Architecture?

Researchers introduce Theory of Code Space (ToCS), a new benchmark that evaluates AI agents' ability to understand software architecture across multi-file codebases. The study reveals significant performance gaps between frontier LLM agents and rule-based baselines, with F1 scores ranging from 0.129 to 0.646.
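
Those F1 numbers compare an agent's predicted architectural facts against ground truth; as a reminder of the metric, here is a set-based F1 over predicted cross-file relations (the relation triples are fabricated):

```python
def f1(predicted: set, gold: set) -> float:
    """Harmonic mean of precision and recall over predicted relations."""
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {("api.py", "imports", "auth.py"), ("auth.py", "imports", "db.py"),
        ("api.py", "imports", "db.py")}
pred = {("auth.py", "imports", "db.py"), ("api.py", "imports", "cache.py")}
print(f"F1 = {f1(pred, gold):.3f}")  # 0.400: half the guesses right, a third found
```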

🧠 AI · Bullish · arXiv – CS AI · Mar 3 · 6/10

Thoth: Mid-Training Bridges LLMs to Time Series Understanding

Researchers have developed Thoth, the first family of Large Language Models specifically designed to understand and reason about time series data through a mid-training approach. The model uses a specialized corpus called Book-of-Thoth to bridge the gap between temporal data and natural language, significantly outperforming existing LLMs in time series analysis tasks.

← Prev · Page 7 of 11 · Next →