13 articles tagged with #model-assessment. AI-curated summaries with sentiment analysis and key takeaways from 50+ sources.
AI · Neutral · arXiv – CS AI · 2d ago · 7/10
🧠 Researchers propose a cognitive diagnostic framework that evaluates large language models across fine-grained ability dimensions rather than aggregate scores, enabling targeted model improvement and task-specific selection. The approach uses multidimensional Item Response Theory to estimate abilities across 35 dimensions for mathematics and generalizes to physics, chemistry, and computer science with strong predictive accuracy.
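The ability-estimation idea behind this framework can be pictured with a minimal multidimensional two-parameter (2PL) IRT sketch. Everything below is illustrative, not taken from the paper: the function name, the 3-dimension example, and all numbers are assumptions; the paper estimates 35 dimensions from real response data.

```python
import numpy as np

def p_correct(theta, a, b):
    """Multidimensional 2PL IRT: probability that a model with ability
    vector theta answers an item with per-dimension discrimination
    vector a and scalar difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) - b)))

theta = np.array([1.2, -0.3, 0.5])  # abilities on 3 illustrative dimensions
a = np.array([0.8, 0.1, 0.4])       # how strongly the item loads on each dimension
b = 0.5                             # item difficulty
p = p_correct(theta, a, b)          # higher theta along a's directions -> higher p
```

Fitting such a model to a benchmark's per-item correctness matrix yields an ability profile per model, which is what lets the framework compare models dimension by dimension rather than by a single aggregate score.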
AI · Neutral · arXiv – CS AI · 3d ago · 7/10
🧠 Researchers introduce General365, a benchmark revealing that leading LLMs achieve only 62.8% accuracy on general reasoning tasks despite excelling on domain-specific benchmarks. The findings highlight a critical gap: current AI models rely heavily on specialized knowledge rather than developing robust, transferable reasoning capabilities applicable to real-world scenarios.
AI · Bearish · arXiv – CS AI · Apr 10 · 7/10
🧠 Researchers reveal that Large Language Models exhibit self-preference bias when evaluating other LLMs, systematically favoring outputs from themselves or related models even when using objective rubric-based criteria. The bias can reach 50% on objective benchmarks and 10-point score differences on subjective medical benchmarks, potentially distorting model rankings and hindering AI development.
AI · Neutral · arXiv – CS AI · Mar 17 · 7/10
🧠 Researchers introduce AVA-Bench, a new benchmark that evaluates vision foundation models (VFMs) by testing 14 distinct atomic visual abilities like localization and depth estimation. This approach provides more precise assessment than traditional VQA benchmarks and reveals that smaller 0.5B language models can evaluate VFMs as effectively as 7B models while using 8x fewer GPU resources.
AI · Neutral · arXiv – CS AI · 2d ago · 6/10
🧠 Researchers propose Filtered Reasoning Score (FRS), a new evaluation metric that assesses the quality of reasoning in large language models beyond simple accuracy metrics. FRS focuses on the model's most confident reasoning traces, evaluating dimensions like faithfulness and coherence, revealing significant performance differences between models that appear identical under traditional accuracy benchmarks.
AI · Bullish · arXiv – CS AI · 4d ago · 6/10
🧠 Researchers introduce BERT-as-a-Judge, a lightweight alternative to LLM-based evaluation methods that assesses generative model outputs with greater accuracy than lexical approaches while requiring significantly less computational overhead. The method demonstrates that existing lexical evaluation techniques poorly correlate with human judgment across 36 models and 15 tasks, establishing a practical middle ground between rigid rule-based and expensive LLM-judge evaluation paradigms.
AI · Bullish · arXiv – CS AI · Apr 7 · 6/10
🧠 Researchers developed DualJudge, a new framework for evaluating large language models that combines structured Fuzzy Analytic Hierarchy Process (FAHP) with traditional direct scoring methods. The approach addresses inconsistent LLM evaluation by incorporating uncertainty-aware reasoning and achieved state-of-the-art performance on JudgeBench testing.
AI · Neutral · arXiv – CS AI · Mar 27 · 6/10
🧠 Researchers introduce RubricEval, the first rubric-level meta-evaluation benchmark for assessing how well AI judges evaluate instruction-following in large language models. Even advanced models like GPT-4o achieve only 55.97% accuracy on the challenging subset, highlighting significant gaps in AI evaluation reliability.
AI · Neutral · arXiv – CS AI · Mar 12 · 6/10
🧠 Researchers have developed the System Hallucination Scale (SHS), a human-centered tool for evaluating hallucination behavior in large language models. The instrument showed strong statistical validity in testing with 210 participants and provides a practical method for assessing AI model reliability from a user perspective.
AI · Neutral · arXiv – CS AI · Mar 3 · 6/10
🧠 Researchers propose a tensor factorization method that combines cheap automated evaluation data with limited human labels to enable fine-grained evaluation of AI generative models. The approach addresses the data bottleneck in model evaluation by using autorater scores to pretrain representations that are then aligned to human preferences with minimal calibration data.
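The pretrain-then-calibrate idea can be pictured with a minimal low-rank sketch. This is a simplification under assumptions, not the paper's method: a plain SVD stands in for tensor factorization, and all shapes, names, and numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
auto = rng.random((8, 5))  # cheap autorater scores: 8 models x 5 tasks

# Step 1: "pretrain" low-rank model representations from the dense
# automated signal (SVD as a stand-in for tensor factorization).
U, S, Vt = np.linalg.svd(auto, full_matrices=False)
model_emb = U[:, :2] * S[:2]        # rank-2 embedding per model

# Step 2: align to scarce human labels with a tiny linear calibration.
labeled = [0, 3, 6]                  # only 3 of 8 models have human scores
human = np.array([0.2, 0.7, 0.5])
W, *_ = np.linalg.lstsq(model_emb[labeled], human, rcond=None)
pred = model_emb @ W                 # human-aligned scores for all 8 models
```

The point of the two-stage split is that the representation is learned from plentiful automated data, so only the small final mapping needs human labels.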
AI · Neutral · arXiv – CS AI · Feb 27 · 6/10
🧠 Researchers propose using psychometric modeling to correct systematic biases in human evaluations of AI systems, demonstrating how Item Response Theory can separate true AI output quality from rater behavior inconsistencies. The approach was tested on OpenAI's summarization dataset and showed improved reliability in measuring AI model performance.
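The separation of output quality from rater behavior can be sketched with a toy two-way fixed-effects model, a simplification of the IRT approach described above. All data below is synthetic and the decomposition is an assumption for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
quality = np.array([0.0, 1.0, 2.0])         # true quality of 3 outputs
severity = np.array([-0.5, 0.0, 0.5, 1.0])  # each rater's systematic bias

# Observed score = true quality + rater severity + noise.
scores = quality[None, :] + severity[:, None] \
    + 0.1 * rng.standard_normal((4, 3))

# Demeaning each rater's row removes that rater's severity;
# averaging over raters then recovers relative output quality.
adjusted = scores - scores.mean(axis=1, keepdims=True)
est_quality = adjusted.mean(axis=0)  # quality up to a constant shift
```

Raw column means would conflate quality with whichever raters happened to score each output; the demeaning step is what makes the estimate robust to harsh or lenient raters.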
AI · Neutral · Hugging Face Blog · Apr 16 · 5/10
🧠 LiveCodeBench introduces a new leaderboard for evaluating code-focused Large Language Models (LLMs) with an emphasis on holistic assessment and contamination-free testing. The benchmark aims to provide more accurate and reliable evaluation of AI coding capabilities by addressing common issues in existing evaluation methods.
AI · Neutral · Hugging Face Blog · Feb 2 · 5/10
🧠 NPHardEval Leaderboard introduces a new evaluation framework for assessing large language models' reasoning capabilities through computational complexity classes with dynamic updates. The leaderboard aims to provide more rigorous testing of LLM reasoning abilities by incorporating problems from different complexity categories.