253 articles tagged with #benchmark. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · Mar 55/10
🧠Researchers introduce CodeTaste, a benchmark testing whether AI coding agents can perform code refactoring at human-level quality. The study reveals frontier AI models struggle to identify appropriate refactorings when given general improvement areas, but perform better with detailed specifications.
AI · Neutral · arXiv – CS AI · Mar 55/10
🧠Researchers have introduced RealPref, a new benchmark for evaluating how well Large Language Models follow user preferences in long-term personalized interactions. The study reveals that LLM performance significantly degrades with longer contexts and more implicit preference expressions, highlighting challenges in developing user-aware AI assistants.
DeFi · Bullish · The Block · Mar 45/102
💎Strategy has increased STRC's monthly dividend to 11.5% as part of a mechanism to maintain the preferred shares' trading value near their $100 par value. This development positions STRC as a foundational element in the yield-backed stablecoin ecosystem according to Benchmark analysis.
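The par-anchoring idea in the summary above can be illustrated with simple perpetuity arithmetic. This is a generic sketch, not Benchmark's analysis or Strategy's actual mechanism; treating the quoted 11.5% as the yield the market requires is an assumption made here for illustration.

```python
def implied_price(annual_dividend: float, required_yield: float) -> float:
    """Perpetuity approximation: value a preferred share as dividend / yield."""
    return annual_dividend / required_yield

PAR = 100.0

# If the issuer raises the dividend rate to match the yield the market
# demands, the perpetuity value of the share sits at par.
market_yield = 0.115          # assumed required yield (11.5%)
dividend = PAR * market_yield # $11.50 per share per year
print(implied_price(dividend, market_yield))  # ≈ 100.0 (par)
```

The point of the sketch: the dividend rate is the lever, and matching it to the prevailing required yield is what pins the implied price near the $100 par value.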
AI · Bullish · arXiv – CS AI · Mar 45/102
🧠Researchers introduce MultiSessionCollab, a benchmark for evaluating conversational AI agents' ability to learn and adapt to user preferences across multiple collaboration sessions. The study demonstrates that equipping agents with persistent memory significantly improves long-term collaboration quality, task success rates, and user experience.
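The persistent-memory idea described above can be sketched in a few lines. All names and structures here are illustrative, not MultiSessionCollab's actual interfaces.

```python
from collections import defaultdict

class PersistentMemoryAgent:
    """Toy agent that carries learned user preferences across sessions."""

    def __init__(self):
        # user_id -> {preference_name: value}, persisted between sessions
        self.memory = defaultdict(dict)

    def end_session(self, user_id: str, observed_prefs: dict) -> None:
        # Persist anything learned this session for reuse later.
        self.memory[user_id].update(observed_prefs)

    def start_session(self, user_id: str) -> dict:
        # A memoryless agent would return {} here every time.
        return dict(self.memory[user_id])

agent = PersistentMemoryAgent()
agent.end_session("u1", {"tone": "concise", "format": "bullets"})
print(agent.start_session("u1"))  # -> {'tone': 'concise', 'format': 'bullets'}
```

The contrast the paper draws is between `start_session` returning accumulated preferences (persistent memory) versus an empty dict every time (stateless agent).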
AI · Neutral · arXiv – CS AI · Mar 45/103
🧠Researchers introduced AttackSeqBench, a new benchmark designed to evaluate large language models' capabilities in understanding and reasoning about cyber attack sequences from threat intelligence reports. The study tested 7 LLMs, 5 LRMs, and 4 post-training strategies to assess their ability to analyze adversarial behaviors across tactical, technical, and procedural dimensions.
AI · Neutral · arXiv – CS AI · Mar 45/104
🧠Researchers introduce HSSBench, a new benchmark designed to evaluate multimodal large language models (MLLMs) on Humanities and Social Sciences tasks across multiple languages. The benchmark contains over 13,000 samples and reveals significant challenges for current state-of-the-art models in cross-disciplinary reasoning.
AI · Bearish · arXiv – CS AI · Mar 36/104
🧠Researchers introduced SimpleToM, a benchmark revealing that state-of-the-art language models can infer mental states but struggle to apply that knowledge for behavior prediction and judgment. The study exposes a critical gap between explicit Theory of Mind inference and implicit application in real-world scenarios.
AI · Neutral · arXiv – CS AI · Mar 36/103
🧠Researchers introduced WebDevJudge, a benchmark for evaluating how well AI models can judge web development quality compared to human experts. The study reveals significant gaps between AI judges and human evaluation, highlighting fundamental limitations in AI's ability to assess complex, interactive web development tasks.
AI · Neutral · arXiv – CS AI · Mar 36/104
🧠Researchers introduce Vision-DeepResearch Benchmark (VDR-Bench) with 2,000 VQA instances to better evaluate multimodal AI systems' visual and textual search capabilities. The benchmark addresses limitations in existing evaluations where answers could be inferred without proper visual search, and proposes a multi-round cropped-search workflow to improve model performance.
AI × Crypto · Bearish · arXiv – CS AI · Mar 36/108
🤖Researchers introduce TraderBench, a new benchmark for evaluating AI agents in financial markets that combines expert-verified static tasks with adversarial trading simulations. The study found that 8 of 13 tested AI models showed minimal variation across market conditions, indicating they rely on fixed strategies rather than adaptive market behavior.
AI · Neutral · arXiv – CS AI · Mar 36/109
🧠Researchers introduce EmCoop, a new benchmark framework for studying cooperation among LLM-based embodied multi-agent systems in dynamic environments. The framework separates cognitive coordination from physical interaction layers and provides process-level metrics to analyze collaboration quality beyond just task completion success.
AI · Bullish · arXiv – CS AI · Mar 36/108
🧠Researchers introduce MicroVerse, a specialized AI video generation model for microscale biological simulations, addressing limitations of current video generation models in scientific applications. The work includes MicroWorldBench benchmark and MicroSim-10K dataset, targeting biomedical applications like drug discovery and educational visualization.
AI · Bullish · arXiv – CS AI · Mar 36/109
🧠Researchers introduce K²-Agent, a hierarchical AI framework for mobile device control that separates 'know-what' and 'know-how' knowledge to achieve a 76.1% success rate on the AndroidWorld benchmark. The system uses a high-level reasoner for task planning and a low-level executor for skill execution, showing strong generalization across different models and tasks.
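The two-level split described above can be sketched minimally, with a lookup table standing in for the paper's LLM reasoner and skill library. None of these function names or action strings come from the paper; they are invented for illustration.

```python
def reasoner(task: str) -> list[str]:
    """'Know-what': decompose a task into subgoals (stand-in for an LLM call)."""
    plans = {"send a message": ["open_messaging_app", "compose", "tap_send"]}
    return plans.get(task, [])

# 'Know-how': each subgoal maps to concrete device actions.
SKILLS = {
    "open_messaging_app": lambda: "tap(icon='Messages')",
    "compose": lambda: "type(text='hello')",
    "tap_send": lambda: "tap(button='Send')",
}

def executor(subgoal: str) -> str:
    """Ground a single subgoal in a UI action."""
    return SKILLS[subgoal]()

def run(task: str) -> list[str]:
    # High-level plan first, then low-level execution of each subgoal.
    return [executor(goal) for goal in reasoner(task)]

print(run("send a message"))
```

The design point is the separation itself: swapping in a different `reasoner` model does not require retraining the skills, which is one plausible reading of the generalization claim above.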
AI · Bearish · arXiv – CS AI · Mar 37/108
🧠Researchers introduced the Synthetic Web Benchmark, revealing that frontier AI language models fail catastrophically when exposed to high-plausibility misinformation in search results. The study shows current AI agents struggle to handle conflicting information sources, with accuracy collapsing despite access to truthful content.
AI · Neutral · arXiv – CS AI · Mar 36/107
🧠Researchers introduce MC-Search, the first benchmark for evaluating agentic multimodal retrieval-augmented generation (MM-RAG) systems with long, structured reasoning chains. The benchmark reveals systematic issues in current multimodal large language models and introduces Search-Align, a training framework that improves planning and retrieval accuracy.
AI · Neutral · arXiv – CS AI · Mar 37/106
🧠Researchers introduce ProtRLSearch, a multi-round protein search agent that uses reinforcement learning and multimodal inputs (protein sequences and text) to improve protein analysis for healthcare applications. The system addresses limitations of single-round, text-only protein search agents and includes a new benchmark called ProtMCQs with 3,000 multiple choice questions for evaluation.
AI · Neutral · arXiv – CS AI · Mar 36/107
🧠Researchers developed an event-based evaluation framework for LLM-generated clinical summaries of remote monitoring data, revealing that summaries with high semantic similarity to references often fail to capture clinically significant events. A vision-based approach using time-series visualizations achieved the best clinical event alignment, with 45.7% abnormality recall.
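An event-level recall metric of the kind the summary describes can be sketched as follows. The event labels are invented for illustration, and the paper's actual event extraction step is not shown here.

```python
def event_recall(reference_events: set, summary_events: set) -> float:
    """Fraction of clinically significant reference events that the
    generated summary mentions (recall over events, not over tokens)."""
    if not reference_events:
        return 1.0  # nothing to miss
    return len(reference_events & summary_events) / len(reference_events)

# Hypothetical events extracted from the monitoring data vs. the summary.
ref = {"bradycardia_day2", "spo2_drop_day3", "afib_day5"}
gen = {"spo2_drop_day3"}
print(event_recall(ref, gen))  # ≈ 0.33: two of three events were missed
```

This is exactly the failure mode the paper highlights: a fluent summary can score well on token-level semantic similarity while `event_recall` stays low.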
AI · Neutral · arXiv – CS AI · Mar 36/108
🧠Researchers introduce GMP, a new benchmark highlighting critical challenges in AI content moderation systems when dealing with co-occurring policy violations and dynamic platform rules. The study reveals that current large language models struggle with consistent moderation when policies are unstable or context-dependent, leading to either over-censorship or allowing harmful content.
AI · Neutral · arXiv – CS AI · Mar 36/105
🧠Researchers introduce LiveCultureBench, a new benchmark that evaluates large language models as autonomous agents in simulated social environments, testing both task completion and adherence to cultural norms. The benchmark uses a multi-cultural town simulation to assess cross-cultural robustness and the balance between effectiveness and cultural sensitivity in LLM agents.
AI · Bullish · arXiv – CS AI · Mar 36/107
🧠NovaLAD is a new CPU-optimized document extraction pipeline that uses dual YOLO models to convert unstructured documents into structured formats for AI applications. The system achieves 96.49% TEDS and 98.51% NID on benchmarks, outperforming existing commercial and open-source parsers while running efficiently on CPU without requiring GPU resources.
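NID is commonly computed as a normalized edit-distance similarity between predicted and reference text; here is a sketch under that assumption (the paper's exact normalization may differ).

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # delete from a
                           cur[j - 1] + 1,       # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def nid_similarity(pred: str, ref: str) -> float:
    """1 - edit_distance / max length, so identical strings score 1.0."""
    if not pred and not ref:
        return 1.0
    return 1.0 - levenshtein(pred, ref) / max(len(pred), len(ref))

print(nid_similarity("hello world", "hello_world"))  # one substitution in 11 chars
```

On this formulation, a 98.51% NID means the extracted text differs from the reference by roughly 1.5 edits per 100 characters on average.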
AI · Bearish · arXiv – CS AI · Mar 36/106
🧠Researchers reveal that state-of-the-art Vision-Language-Action (VLA) models largely ignore language instructions despite achieving 95% success on standard benchmarks. The new LangGap benchmark exposes significant language understanding deficits, with targeted data augmentation only partially addressing the fundamental challenge of diverse instruction comprehension.
AI · Neutral · arXiv – CS AI · Mar 36/104
🧠Researchers introduce EgoNight, the first comprehensive benchmark for nighttime egocentric vision understanding, featuring day-night aligned videos and visual question answering tasks. The benchmark reveals significant performance drops in state-of-the-art multimodal large language models when operating under low-light conditions.
AI · Neutral · arXiv – CS AI · Mar 36/107
🧠Researchers introduce Theory of Code Space (ToCS), a new benchmark that evaluates AI agents' ability to understand software architecture across multi-file codebases. The study reveals significant performance gaps between frontier LLM agents and rule-based baselines, with F1 scores ranging from 0.129 to 0.646.
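For reference, a generic set-overlap F1 (the metric family quoted above) can be sketched like this. The summary does not specify what units the paper's F1 is computed over, so the dependency-edge labels below are hypothetical.

```python
def f1_score(predicted: set, gold: set) -> float:
    """Harmonic mean of precision and recall over two sets of items."""
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)  # items both predicted and correct
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical architecture edges an agent might predict for a codebase.
pred = {"a.py->b.py", "a.py->c.py"}
gold = {"a.py->b.py", "b.py->c.py", "a.py->d.py"}
print(round(f1_score(pred, gold), 3))  # -> 0.4
```

Against a scale like this, the 0.129 to 0.646 range reported above suggests even frontier agents recover well under two-thirds of the reference architecture relationships.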
AI · Bullish · arXiv – CS AI · Mar 36/107
🧠Researchers developed an open-source modular benchmark for evaluating diffusion-based motion planners in real-world autonomous driving systems. The system integrates with Autoware ROS 2 stack and achieves 3.2x latency reduction through encoder caching while improving accuracy by 41% with second-order solving.
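The encoder-caching idea (reuse an expensive encoding when its input has not changed between planning cycles) can be sketched with a memoized function. This is purely illustrative; `encode_scene` is a hypothetical stand-in, not the project's actual code.

```python
import functools
import time

@functools.lru_cache(maxsize=8)
def encode_scene(scene_key: str) -> tuple:
    """Hypothetical stand-in for an expensive perception encoder."""
    time.sleep(0.01)  # simulate a costly forward pass
    return tuple((len(scene_key) * i) % 97 for i in range(4))

t0 = time.perf_counter()
first = encode_scene("frame_0042")   # cache miss: pays the full encoder cost
t_miss = time.perf_counter() - t0

t0 = time.perf_counter()
second = encode_scene("frame_0042")  # cache hit: the encoder is skipped
t_hit = time.perf_counter() - t0
```

In a real planner the cache key would be derived from the scene state rather than a string, and invalidation on scene change is the hard part; the sketch only shows why repeated cycles over an unchanged scene get cheaper.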
AI · Bullish · arXiv – CS AI · Mar 36/107
🧠Researchers have developed Thoth, the first family of Large Language Models specifically designed to understand and reason about time series data through a mid-training approach. The model uses a specialized corpus called Book-of-Thoth to bridge the gap between temporal data and natural language, significantly outperforming existing LLMs in time series analysis tasks.